Merge tag 'iio-for-4.6c' of git://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio into staging-next
Jonathan writes:
Third set of IIO new device support, features and cleanups for the 4.6 cycle.
Good to see several new contributors in this set - and more generally a
number of new 'faces' over this whole cycle.
Staging movements
* hmc5843
- out of staging.
* periodic RTC trigger
- driver dropped. This is an ancient driver (brings back some memories ;)
that was always somewhat of a bodge. Originally there was a driver that
never went into mainline that supported large numbers of periodic
timers on the PXA270 via this route. Discussions about a generic
periodic timer subsystem never went anywhere. At the time, RTC
periodic interrupts were real - now they are emulated using high
resolution timers, so with the HRT driver this has become pointless.
New device support
* mpu6050 driver
- Add support for the mpu6500.
* TI tpl0102 potentiometer
- new driver.
* Vybrid SoC DAC
- new driver. The ADC on this SoC has been supported for a while;
  this adds a separate driver for the DAC.
New Features
* hmc5843
- Attributes to configure the bias current (typically part of a self
  test). This could previously be done via a somewhat obscure custom
  interface; the new attributes at least make it easy to tell what is
  going on.
- Document all custom attributes.
* mpu6050
- Add support for calibration offset control and readback.
* ms5611
- power regulator support. This is always one that gets added the
first time someone has a board that needs it. Here it was needed,
hence it was added.
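  A minimal sketch of the usual probe-time pattern, not the exact
  ms5611 code (the "vdd" supply name and helper are illustrative):

    #include <linux/regulator/consumer.h>

    static int ms5611_enable_supply(struct device *dev,
                                    struct regulator **vdd)
    {
            *vdd = devm_regulator_get(dev, "vdd");
            if (IS_ERR(*vdd))
                    return PTR_ERR(*vdd);

            /* keep the sensor supplied for the life of the driver */
            return regulator_enable(*vdd);
    }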
Cleanups / minor fixes
* tree wide
- clean up the myriad different return values used on failure of
  i2c_check_functionality. After discussion, everyone seemed happy
  with -EOPNOTSUPP, which describes the situation well. I encouraged
  a tree-wide cleanup to set a good example for the future.
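  A minimal sketch of the agreed convention (the driver and
  functionality names are illustrative):

    #include <linux/i2c.h>

    static int foo_probe(struct i2c_client *client,
                         const struct i2c_device_id *id)
    {
            if (!i2c_check_functionality(client->adapter,
                                         I2C_FUNC_SMBUS_WORD_DATA))
                    return -EOPNOTSUPP; /* not -ENODEV, -EIO or -ENOSYS */

            /* ... rest of probe ... */
            return 0;
    }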
* core
- Typos in the iio_event_spec documentation in iio.h
* afe4403
- select REGMAP_SPI to avoid dependency issues
- mark suspend/resume as __maybe_unused to avoid warnings
* afe4404
- mark suspend/resume as __maybe_unused to avoid warnings
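  A minimal sketch of the warning-free pattern (afe440x names are
  illustrative): with no #ifdef CONFIG_PM_SLEEP guards around the
  handlers, __maybe_unused keeps the compiler quiet when
  SIMPLE_DEV_PM_OPS() compiles them out.

    #include <linux/pm.h>

    static int __maybe_unused afe440x_suspend(struct device *dev)
    {
            /* put the device into its low power state */
            return 0;
    }

    static int __maybe_unused afe440x_resume(struct device *dev)
    {
            /* bring the device back up */
            return 0;
    }

    static SIMPLE_DEV_PM_OPS(afe440x_pm_ops,
                             afe440x_suspend, afe440x_resume);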
* atlas-ph-sensor
- switch the regmap cache type from flat to rbtree so that registers
  are read on initial startup. It's not immediately obvious, but the
  flat regmap cache is meant for high-performance cases and so doesn't
  read these registers.
- use regmap_bulk_read in one case that was calling
  i2c_smbus_read_i2c_block_data directly (unlike everything else, which
  went through regmap).
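  A sketch of both changes (the register name is hypothetical):

    #include <linux/regmap.h>

    #define ATLAS_REG_PH_DATA 0x16 /* hypothetical register */

    static const struct regmap_config atlas_regmap_config = {
            .reg_bits = 8,
            .val_bits = 8,
            .cache_type = REGCACHE_RBTREE, /* was REGCACHE_FLAT */
    };

    static int atlas_read_measurement(struct regmap *regmap, u8 *buf)
    {
            /* was i2c_smbus_read_i2c_block_data() on the raw client */
            return regmap_bulk_read(regmap, ATLAS_REG_PH_DATA, buf, 4);
    }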
* ina2xx
- style cleanups (lots of them!)
* isl29018
- Get the struct device back from regmap rather than storing another
copy of it in the private data. This cleanup makes sense in a number
of other drivers so patches may well follow.
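  A minimal sketch of the pattern (the struct layout is illustrative):

    #include <linux/device.h>
    #include <linux/regmap.h>

    struct isl29018_chip {
            struct regmap *regmap;
            /* no separate struct device *dev copy needed */
    };

    static void isl29018_log_lux(struct isl29018_chip *chip, int lux)
    {
            struct device *dev = regmap_get_device(chip->regmap);

            dev_dbg(dev, "lux: %d\n", lux);
    }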
* mpu6050
- style cleanups (lots of them!)
- improved return value handling
- use usleep_range to avoid the usual issues with very short msleeps.
- add some missing documentation.
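  A minimal sketch of the substitution (delay values illustrative):

    #include <linux/delay.h>

    static void mpu6050_settle(void)
    {
            /*
             * msleep(1) may sleep a full jiffy or more at low HZ;
             * usleep_range() is hrtimer backed and honours a short,
             * tolerant window.
             */
            usleep_range(1000, 2000); /* 1-2 ms */
    }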
* ms5611
- use the probed device name for the device rather than the driver name.
- select IIO_BUFFER to avoid dependency issues
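  A one-line sketch of the naming change, as it would look in an SPI
  probe() (fragment; names illustrative):

    /* name the IIO device after the probed part, not the driver */
    indio_dev->name = spi_get_device_id(spi)->name;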
* palmas
- drop IRQF_EARLY_RESUME as no longer needed after genirq changes.
diff --git a/Documentation/ABI/stable/sysfs-bus-vmbus b/Documentation/ABI/stable/sysfs-bus-vmbus
index 636e938..5d0125f 100644
--- a/Documentation/ABI/stable/sysfs-bus-vmbus
+++ b/Documentation/ABI/stable/sysfs-bus-vmbus
@@ -27,3 +27,17 @@
Virtual Processors.
Format: <channel's child_relid:the bound cpu's number>
Users: tools/hv/lsvmbus
+
+What: /sys/bus/vmbus/devices/vmbus_*/device
+Date: Dec. 2015
+KernelVersion: 4.5
+Contact: K. Y. Srinivasan <kys@microsoft.com>
+Description: The 16 bit device ID of the device
+Users: tools/hv/lsvmbus and user level RDMA libraries
+
+What: /sys/bus/vmbus/devices/vmbus_*/vendor
+Date: Dec. 2015
+KernelVersion: 4.5
+Contact: K. Y. Srinivasan <kys@microsoft.com>
+Description: The 16 bit vendor ID of the device
+Users: tools/hv/lsvmbus and user level RDMA libraries
diff --git a/Documentation/cgroup-v2.txt b/Documentation/cgroup-v2.txt
index e8d25e7..ff49cf9 100644
--- a/Documentation/cgroup-v2.txt
+++ b/Documentation/cgroup-v2.txt
@@ -7,7 +7,7 @@
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
-v1 is available under Documentation/cgroup-legacy/.
+v1 is available under Documentation/cgroup-v1/.
CONTENTS
diff --git a/Documentation/devicetree/bindings/clock/rockchip,rk3036-cru.txt b/Documentation/devicetree/bindings/clock/rockchip,rk3036-cru.txt
index ace0599..20df350 100644
--- a/Documentation/devicetree/bindings/clock/rockchip,rk3036-cru.txt
+++ b/Documentation/devicetree/bindings/clock/rockchip,rk3036-cru.txt
@@ -30,7 +30,7 @@
clock-output-names:
- "xin24m" - crystal input - required,
- "ext_i2s" - external I2S clock - optional,
- - "ext_gmac" - external GMAC clock - optional
+ - "rmii_clkin" - external EMAC clock - optional
Example: Clock controller node:
diff --git a/Documentation/devicetree/bindings/goldfish/pipe.txt b/Documentation/devicetree/bindings/goldfish/pipe.txt
new file mode 100644
index 0000000..e417a31
--- /dev/null
+++ b/Documentation/devicetree/bindings/goldfish/pipe.txt
@@ -0,0 +1,17 @@
+Android Goldfish QEMU Pipe
+
+Android pipe virtual device generated by the Android emulator.
+
+Required properties:
+
+- compatible : should contain "google,android-pipe" to match emulator
+- reg : <registers mapping>
+- interrupts : <interrupt mapping>
+
+Example:
+
+ android_pipe@a010000 {
+ compatible = "google,android-pipe";
+ reg = <0xff018000 0x2000>;
+ interrupts = <0x12>;
+ };
diff --git a/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt b/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
index 7803e77..007a5b4 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
+++ b/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
@@ -24,9 +24,8 @@
1 = edge triggered
4 = level triggered
- Cells 4 and beyond are reserved for future use. When the 1st cell
- has a value of 0 or 1, cells 4 and beyond act as padding, and may be
- ignored. It is recommended that padding cells have a value of 0.
+ Cells 4 and beyond are reserved for future use and must have a value
+ of 0 if present.
- reg : Specifies base physical address(s) and size of the GIC
registers, in the following order:
diff --git a/Documentation/devicetree/bindings/misc/eeprom-93xx46.txt b/Documentation/devicetree/bindings/misc/eeprom-93xx46.txt
new file mode 100644
index 0000000..a8ebb46
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/eeprom-93xx46.txt
@@ -0,0 +1,25 @@
+EEPROMs (SPI) compatible with Microchip Technology 93xx46 family.
+
+Required properties:
+- compatible : shall be one of:
+ "atmel,at93c46d"
+ "eeprom-93xx46"
+- data-size : number of data bits per word (either 8 or 16)
+
+Optional properties:
+- read-only : parameter-less property which disables writes to the EEPROM
+- select-gpios : if present, specifies the GPIO that will be asserted prior to
+ each access to the EEPROM (e.g. for SPI bus multiplexing)
+
+Property rules described in Documentation/devicetree/bindings/spi/spi-bus.txt
+apply. In particular, "reg" and "spi-max-frequency" properties must be given.
+
+Example:
+ eeprom@0 {
+ compatible = "eeprom-93xx46";
+ reg = <0>;
+ spi-max-frequency = <1000000>;
+ spi-cs-high;
+ data-size = <8>;
+ select-gpios = <&gpio4 4 GPIO_ACTIVE_HIGH>;
+ };
diff --git a/Documentation/devicetree/bindings/net/renesas,ravb.txt b/Documentation/devicetree/bindings/net/renesas,ravb.txt
index 81a9f9e..c8ac222 100644
--- a/Documentation/devicetree/bindings/net/renesas,ravb.txt
+++ b/Documentation/devicetree/bindings/net/renesas,ravb.txt
@@ -82,8 +82,8 @@
"ch16", "ch17", "ch18", "ch19",
"ch20", "ch21", "ch22", "ch23",
"ch24";
- clocks = <&mstp8_clks R8A7795_CLK_ETHERAVB>;
- power-domains = <&cpg_clocks>;
+ clocks = <&cpg CPG_MOD 812>;
+ power-domains = <&cpg>;
phy-mode = "rgmii-id";
phy-handle = <&phy0>;
diff --git a/Documentation/devicetree/bindings/nvmem/lpc1857-eeprom.txt b/Documentation/devicetree/bindings/nvmem/lpc1857-eeprom.txt
new file mode 100644
index 0000000..809df68
--- /dev/null
+++ b/Documentation/devicetree/bindings/nvmem/lpc1857-eeprom.txt
@@ -0,0 +1,28 @@
+* NXP LPC18xx EEPROM memory NVMEM driver
+
+Required properties:
+ - compatible: Should be "nxp,lpc1857-eeprom"
+ - reg: Must contain an entry with the physical base address and length
+ for each entry in reg-names.
+ - reg-names: Must include the following entries.
+ - reg: EEPROM registers.
+ - mem: EEPROM address space.
+ - clocks: Must contain an entry for each entry in clock-names.
+ - clock-names: Must include the following entries.
+ - eeprom: EEPROM operating clock.
+ - resets: Should contain a reference to the reset controller asserting
+ the EEPROM in reset.
+ - interrupts: Should contain EEPROM interrupt.
+
+Example:
+
+ eeprom: eeprom@4000e000 {
+ compatible = "nxp,lpc1857-eeprom";
+ reg = <0x4000e000 0x1000>,
+ <0x20040000 0x4000>;
+ reg-names = "reg", "mem";
+ clocks = <&ccu1 CLK_CPU_EEPROM>;
+ clock-names = "eeprom";
+ resets = <&rgu 27>;
+ interrupts = <4>;
+ };
diff --git a/Documentation/devicetree/bindings/nvmem/mtk-efuse.txt b/Documentation/devicetree/bindings/nvmem/mtk-efuse.txt
new file mode 100644
index 0000000..74cf529
--- /dev/null
+++ b/Documentation/devicetree/bindings/nvmem/mtk-efuse.txt
@@ -0,0 +1,36 @@
+= Mediatek MTK-EFUSE device tree bindings =
+
+This binding is intended to represent MTK-EFUSE which is found in most Mediatek SOCs.
+
+Required properties:
+- compatible: should be "mediatek,mt8173-efuse" or "mediatek,efuse"
+- reg: Should contain registers location and length
+
+= Data cells =
+Are child nodes of MTK-EFUSE, whose bindings are described in
+bindings/nvmem/nvmem.txt
+
+Example:
+
+ efuse: efuse@10206000 {
+ compatible = "mediatek,mt8173-efuse";
+ reg = <0 0x10206000 0 0x1000>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ /* Data cells */
+ thermal_calibration: calib@528 {
+ reg = <0x528 0xc>;
+ };
+ };
+
+= Data consumers =
+Are device nodes which consume nvmem data cells.
+
+For example:
+
+ thermal {
+ ...
+ nvmem-cells = <&thermal_calibration>;
+ nvmem-cell-names = "calibration";
+ };
diff --git a/Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt b/Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
index 4e8b90e..07a7509 100644
--- a/Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
+++ b/Documentation/devicetree/bindings/pci/pci-rcar-gen2.txt
@@ -8,6 +8,7 @@
Required properties:
- compatible: "renesas,pci-r8a7790" for the R8A7790 SoC;
"renesas,pci-r8a7791" for the R8A7791 SoC;
+ "renesas,pci-r8a7793" for the R8A7793 SoC;
"renesas,pci-r8a7794" for the R8A7794 SoC;
"renesas,pci-rcar-gen2" for a generic R-Car Gen2 compatible device
diff --git a/Documentation/devicetree/bindings/pci/rcar-pci.txt b/Documentation/devicetree/bindings/pci/rcar-pci.txt
index 558fe52..6cf9969 100644
--- a/Documentation/devicetree/bindings/pci/rcar-pci.txt
+++ b/Documentation/devicetree/bindings/pci/rcar-pci.txt
@@ -4,6 +4,7 @@
compatible: "renesas,pcie-r8a7779" for the R8A7779 SoC;
"renesas,pcie-r8a7790" for the R8A7790 SoC;
"renesas,pcie-r8a7791" for the R8A7791 SoC;
+ "renesas,pcie-r8a7793" for the R8A7793 SoC;
"renesas,pcie-r8a7795" for the R8A7795 SoC;
"renesas,pcie-rcar-gen2" for a generic R-Car Gen2 compatible device.
diff --git a/Documentation/devicetree/bindings/regulator/tps65217.txt b/Documentation/devicetree/bindings/regulator/tps65217.txt
index d181096..4f05d20 100644
--- a/Documentation/devicetree/bindings/regulator/tps65217.txt
+++ b/Documentation/devicetree/bindings/regulator/tps65217.txt
@@ -26,11 +26,7 @@
ti,pmic-shutdown-controller;
regulators {
- #address-cells = <1>;
- #size-cells = <0>;
-
dcdc1_reg: dcdc1 {
- reg = <0>;
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <1800000>;
regulator-boot-on;
@@ -38,7 +34,6 @@
};
dcdc2_reg: dcdc2 {
- reg = <1>;
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <3300000>;
regulator-boot-on;
@@ -46,7 +41,6 @@
};
dcdc3_reg: dcc3 {
- reg = <2>;
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <1500000>;
regulator-boot-on;
@@ -54,7 +48,6 @@
};
ldo1_reg: ldo1 {
- reg = <3>;
regulator-min-microvolt = <1000000>;
regulator-max-microvolt = <3300000>;
regulator-boot-on;
@@ -62,7 +55,6 @@
};
ldo2_reg: ldo2 {
- reg = <4>;
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <3300000>;
regulator-boot-on;
@@ -70,7 +62,6 @@
};
ldo3_reg: ldo3 {
- reg = <5>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
regulator-boot-on;
@@ -78,7 +69,6 @@
};
ldo4_reg: ldo4 {
- reg = <6>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
regulator-boot-on;
diff --git a/Documentation/devicetree/bindings/rtc/s3c-rtc.txt b/Documentation/devicetree/bindings/rtc/s3c-rtc.txt
index ac2fcd6..1068ffc 100644
--- a/Documentation/devicetree/bindings/rtc/s3c-rtc.txt
+++ b/Documentation/devicetree/bindings/rtc/s3c-rtc.txt
@@ -14,6 +14,10 @@
interrupt number is the rtc alarm interrupt and second interrupt number
is the rtc tick interrupt. The number of cells representing a interrupt
depends on the parent interrupt controller.
+- clocks: Must contain a list of phandle and clock specifier for the rtc
+ and source clocks.
+- clock-names: Must contain "rtc" and "rtc_src" entries sorted in the
+ same order as the clocks property.
Example:
@@ -21,4 +25,6 @@
compatible = "samsung,s3c6410-rtc";
reg = <0x10070000 0x100>;
interrupts = <44 0 45 0>;
+ clocks = <&clock CLK_RTC>, <&s2mps11_osc S2MPS11_CLK_AP>;
+ clock-names = "rtc", "rtc_src";
};
diff --git a/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt b/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
index 35ae1fb..ed94c21 100644
--- a/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
+++ b/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
@@ -9,7 +9,7 @@
- fsl,uart-has-rtscts : Indicate the uart has rts and cts
- fsl,irda-mode : Indicate the uart supports irda mode
- fsl,dte-mode : Indicate the uart works in DTE mode. The uart works
- is DCE mode by default.
+ in DCE mode by default.
Note: Each uart controller should have an alias correctly numbered
in "aliases" node.
diff --git a/Documentation/devicetree/bindings/sound/fsl-asoc-card.txt b/Documentation/devicetree/bindings/sound/fsl-asoc-card.txt
index ce55c0a..4da41bf 100644
--- a/Documentation/devicetree/bindings/sound/fsl-asoc-card.txt
+++ b/Documentation/devicetree/bindings/sound/fsl-asoc-card.txt
@@ -30,6 +30,8 @@
"fsl,imx-audio-sgtl5000"
(compatible with Documentation/devicetree/bindings/sound/imx-audio-sgtl5000.txt)
+ "fsl,imx-audio-wm8960"
+
Required properties:
- compatible : Contains one of entries in the compatible list.
diff --git a/Documentation/devicetree/bindings/thermal/rcar-thermal.txt b/Documentation/devicetree/bindings/thermal/rcar-thermal.txt
index 332e625..e5ee3f1 100644
--- a/Documentation/devicetree/bindings/thermal/rcar-thermal.txt
+++ b/Documentation/devicetree/bindings/thermal/rcar-thermal.txt
@@ -1,8 +1,9 @@
* Renesas R-Car Thermal
Required properties:
-- compatible : "renesas,thermal-<soctype>", "renesas,rcar-thermal"
- as fallback.
+- compatible : "renesas,thermal-<soctype>",
+ "renesas,rcar-gen2-thermal" (with thermal-zone) or
+ "renesas,rcar-thermal" (without thermal-zone) as fallback.
Examples with soctypes are:
- "renesas,thermal-r8a73a4" (R-Mobile APE6)
- "renesas,thermal-r8a7779" (R-Car H1)
@@ -36,3 +37,35 @@
0xe61f0300 0x38>;
interrupts = <0 69 IRQ_TYPE_LEVEL_HIGH>;
};
+
+Example (with thermal-zone):
+
+thermal-zones {
+ cpu_thermal: cpu-thermal {
+ polling-delay-passive = <1000>;
+ polling-delay = <5000>;
+
+ thermal-sensors = <&thermal>;
+
+ trips {
+ cpu-crit {
+ temperature = <115000>;
+ hysteresis = <0>;
+ type = "critical";
+ };
+ };
+ cooling-maps {
+ };
+ };
+};
+
+thermal: thermal@e61f0000 {
+ compatible = "renesas,thermal-r8a7790",
+ "renesas,rcar-gen2-thermal",
+ "renesas,rcar-thermal";
+ reg = <0 0xe61f0000 0 0x14>, <0 0xe61f0100 0 0x38>;
+ interrupts = <0 69 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp5_clks R8A7790_CLK_THERMAL>;
+ power-domains = <&cpg_clocks>;
+ #thermal-sensor-cells = <0>;
+};
diff --git a/Documentation/filesystems/efivarfs.txt b/Documentation/filesystems/efivarfs.txt
index c477af0..686a64b 100644
--- a/Documentation/filesystems/efivarfs.txt
+++ b/Documentation/filesystems/efivarfs.txt
@@ -14,3 +14,10 @@
efivarfs is typically mounted like this,
mount -t efivarfs none /sys/firmware/efi/efivars
+
+Due to the presence of numerous firmware bugs where removing non-standard
+UEFI variables causes the system firmware to fail to POST, efivarfs
+files that are not well-known standardized variables are created
+as immutable files. This doesn't prevent removal - "chattr -i" will work -
+but it does prevent this kind of failure from being accomplished
+accidentally.
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 551ecf0..9a53c92 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -4235,6 +4235,17 @@
The default value of this parameter is determined by
the config option CONFIG_WQ_POWER_EFFICIENT_DEFAULT.
+ workqueue.debug_force_rr_cpu
+ Workqueue used to implicitly guarantee that work
+ items queued without explicit CPU specified are put
+ on the local CPU. This guarantee is no longer true
+ and while local CPU is still preferred work items
+ may be put on foreign CPUs. This debug option
+ forces round-robin CPU selection to flush out
+ usages which depend on the now broken guarantee.
+ When enabled, memory and cache locality will be
+ impacted.
+
x2apic_phys [X86-64,APIC] Use x2apic physical mode instead of
default x2apic cluster mode on platforms
supporting x2apic.
diff --git a/Documentation/mic/mic_overview.txt b/Documentation/mic/mic_overview.txt
index 73f44fc..074adbd 100644
--- a/Documentation/mic/mic_overview.txt
+++ b/Documentation/mic/mic_overview.txt
@@ -12,10 +12,19 @@
Since it is a PCIe card, it does not have the ability to host hardware
devices for networking, storage and console. We provide these devices
-on X100 coprocessors thus enabling a self-bootable equivalent environment
-for applications. A key benefit of our solution is that it leverages
-the standard virtio framework for network, disk and console devices,
-though in our case the virtio framework is used across a PCIe bus.
+on X100 coprocessors thus enabling a self-bootable equivalent
+environment for applications. A key benefit of our solution is that it
+leverages the standard virtio framework for network, disk and console
+devices, though in our case the virtio framework is used across a PCIe
+bus. A Virtio Over PCIe (VOP) driver allows creating user space
+backends or devices on the host which are used to probe virtio drivers
+for these devices on the MIC card. The existing VRINGH infrastructure
+in the kernel is used to access virtio rings from the host. The card
+VOP driver allows card virtio drivers to communicate with their user
+space backends on the host via a device page. Ring 3 apps on the host
+can add, remove and configure virtio devices. A thin MIC specific
+virtio_config_ops is implemented which is borrowed heavily from
+previous similar implementations in lguest and s390.
MIC PCIe card has a dma controller with 8 channels. These channels are
shared between the host s/w and the card s/w. 0 to 3 are used by host
@@ -38,7 +47,6 @@
the host to initiate DMA's to/from the card using the MIC DMA engine and
the fact that the virtio block storage backend can only be on the host.
- |
+----------+ | +----------+
| Card OS | | | Host OS |
+----------+ | +----------+
@@ -47,27 +55,25 @@
| Virtio| |Virtio | |Virtio| | |Virtio | |Virtio | |Virtio |
| Net | |Console | |Block | | |Net | |Console | |Block |
| Driver| |Driver | |Driver| | |backend | |backend | |backend |
- +-------+ +--------+ +------+ | +---------+ +--------+ +--------+
+ +---+---+ +---+----+ +--+---+ | +---------+ +----+---+ +--------+
| | | | | | |
| | | |User | | |
- | | | |------|------------|---------|-------
- +-------------------+ |Kernel +--------------------------+
- | | | Virtio over PCIe IOCTLs |
- | | +--------------------------+
-+-----------+ | | | +-----------+
-| MIC DMA | | +------+ | +------+ +------+ | | MIC DMA |
-| Driver | | | SCIF | | | SCIF | | COSM | | | Driver |
-+-----------+ | +------+ | +------+ +--+---+ | +-----------+
- | | | | | | | |
-+---------------+ | +------+ | +--+---+ +--+---+ | +----------------+
-|MIC virtual Bus| | |SCIF | | |SCIF | | COSM | | |MIC virtual Bus |
-+---------------+ | |HW Bus| | |HW Bus| | Bus | | +----------------+
- | | +------+ | +--+---+ +------+ | |
- | | | | | | | |
- | +-----------+---+ | | | +---------------+ |
- | |Intel MIC | | | | |Intel MIC | |
- +---|Card Driver | | | | |Host Driver | |
- +------------+--------+ | +----+---------------+-----+
+ | | | |------|------------|--+------|-------
+ +---------+---------+ |Kernel |
+ | | |
+ +---------+ +---+----+ +------+ | +------+ +------+ +--+---+ +-------+
+ |MIC DMA | | VOP | | SCIF | | | SCIF | | COSM | | VOP | |MIC DMA|
+ +---+-----+ +---+----+ +--+---+ | +--+---+ +--+---+ +------+ +----+--+
+ | | | | | | |
+ +---+-----+ +---+----+ +--+---+ | +--+---+ +--+---+ +------+ +----+--+
+ |MIC | | VOP | |SCIF | | |SCIF | | COSM | | VOP | | MIC |
+ |HW Bus | | HW Bus| |HW Bus| | |HW Bus| | Bus | |HW Bus| |HW Bus |
+ +---------+ +--------+ +--+---+ | +--+---+ +------+ +------+ +-------+
+ | | | | | | |
+ | +-----------+--+ | | | +---------------+ |
+ | |Intel MIC | | | | |Intel MIC | |
+ | |Card Driver | | | | |Host Driver | |
+ +---+--------------+------+ | +----+---------------+-----+
| | |
+-------------------------------------------------------------+
| |
diff --git a/Documentation/mic/mpssd/mpss b/Documentation/mic/mpssd/mpss
index 09ea9093..5fcf9fa 100755
--- a/Documentation/mic/mpssd/mpss
+++ b/Documentation/mic/mpssd/mpss
@@ -35,7 +35,7 @@
exec=/usr/sbin/mpssd
sysfs="/sys/class/mic"
-mic_modules="mic_host mic_x100_dma scif"
+mic_modules="mic_host mic_x100_dma scif vop"
start()
{
diff --git a/Documentation/mic/mpssd/mpssd.c b/Documentation/mic/mpssd/mpssd.c
index aaeafa1..518dece 100644
--- a/Documentation/mic/mpssd/mpssd.c
+++ b/Documentation/mic/mpssd/mpssd.c
@@ -926,7 +926,7 @@
char path[PATH_MAX];
int fd, err;
- snprintf(path, PATH_MAX, "/dev/mic%d", mic->id);
+ snprintf(path, PATH_MAX, "/dev/vop_virtio%d", mic->id);
fd = open(path, O_RDWR);
if (fd < 0) {
mpsslog("Could not open %s %s\n", path, strerror(errno));
diff --git a/Documentation/misc-devices/mei/mei.txt b/Documentation/misc-devices/mei/mei.txt
index 91c1fa3..2b80a0c 100644
--- a/Documentation/misc-devices/mei/mei.txt
+++ b/Documentation/misc-devices/mei/mei.txt
@@ -231,15 +231,15 @@
The Intel AMT Watchdog is composed of two parts:
1) Firmware feature - receives the heartbeats
and sends an event when the heartbeats stop.
- 2) Intel MEI driver - connects to the watchdog feature, configures the
- watchdog and sends the heartbeats.
+ 2) Intel MEI iAMT watchdog driver - connects to the watchdog feature,
+ configures the watchdog and sends the heartbeats.
-The Intel MEI driver uses the kernel watchdog API to configure the Intel AMT
-Watchdog and to send heartbeats to it. The default timeout of the
+The Intel iAMT watchdog MEI driver uses the kernel watchdog API to configure
+the Intel AMT Watchdog and to send heartbeats to it. The default timeout of the
watchdog is 120 seconds.
-If the Intel AMT Watchdog feature does not exist (i.e. the connection failed),
-the Intel MEI driver will disable the sending of heartbeats.
+If the Intel AMT is not enabled in the firmware then the watchdog client won't enumerate
+on the me client bus and watchdog devices won't be exposed.
Supported Chipsets
diff --git a/Documentation/timers/hpet.txt b/Documentation/timers/hpet.txt
index 767392f..a484d2c 100644
--- a/Documentation/timers/hpet.txt
+++ b/Documentation/timers/hpet.txt
@@ -1,9 +1,7 @@
High Precision Event Timer Driver for Linux
The High Precision Event Timer (HPET) hardware follows a specification
-by Intel and Microsoft which can be found at
-
- http://www.intel.com/hardwaredesign/hpetspec_1.pdf
+by Intel and Microsoft, revision 1.
Each HPET has one fixed-rate counter (at 10+ MHz, hence "High Precision")
and up to 32 comparators. Normally three or more comparators are provided,
diff --git a/MAINTAINERS b/MAINTAINERS
index fa6af01..dd9ee367d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -926,17 +926,24 @@
S: Maintained
F: drivers/clk/sunxi/
-ARM/Amlogic MesonX SoC support
+ARM/Amlogic Meson SoC support
M: Carlo Caione <carlo@caione.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L: linux-meson@googlegroups.com
+W: http://linux-meson.com/
S: Maintained
-F: drivers/media/rc/meson-ir.c
-N: meson[x68]
+F: arch/arm/mach-meson/
+F: arch/arm/boot/dts/meson*
+N: meson
ARM/Annapurna Labs ALPINE ARCHITECTURE
M: Tsahee Zidenberg <tsahee@annapurnalabs.com>
+M: Antoine Tenart <antoine.tenart@free-electrons.com>
S: Maintained
F: arch/arm/mach-alpine/
+F: arch/arm/boot/dts/alpine*
+F: arch/arm64/boot/dts/al/
+F: drivers/*/*alpine*
ARM/ATMEL AT91RM9200, AT91SAM9 AND SAMA5 SOC SUPPORT
M: Nicolas Ferre <nicolas.ferre@atmel.com>
@@ -1448,8 +1455,8 @@
ARM/RENESAS ARM64 ARCHITECTURE
M: Simon Horman <horms@verge.net.au>
M: Magnus Damm <magnus.damm@gmail.com>
-L: linux-sh@vger.kernel.org
-Q: http://patchwork.kernel.org/project/linux-sh/list/
+L: linux-renesas-soc@vger.kernel.org
+Q: http://patchwork.kernel.org/project/linux-renesas-soc/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next
S: Supported
F: arch/arm64/boot/dts/renesas/
@@ -2374,14 +2381,6 @@
S: Maintained
N: bcm2835
-BROADCOM BCM33XX MIPS ARCHITECTURE
-M: Kevin Cernekee <cernekee@gmail.com>
-L: linux-mips@linux-mips.org
-S: Maintained
-F: arch/mips/bcm3384/*
-F: arch/mips/include/asm/mach-bcm3384/*
-F: arch/mips/kernel/*bmips*
-
BROADCOM BCM47XX MIPS ARCHITECTURE
M: Hauke Mehrtens <hauke@hauke-m.de>
M: Rafał Miłecki <zajec5@gmail.com>
@@ -3464,7 +3463,6 @@
DESIGNWARE USB3 DRD IP DRIVER
M: Felipe Balbi <balbi@kernel.org>
L: linux-usb@vger.kernel.org
-L: linux-omap@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
S: Maintained
F: drivers/usb/dwc3/
@@ -5754,6 +5752,7 @@
F: include/uapi/linux/mei.h
F: include/linux/mei_cl_bus.h
F: drivers/misc/mei/*
+F: drivers/watchdog/mei_wdt.c
F: Documentation/misc-devices/mei/*
INTEL MIC DRIVERS (mic)
@@ -6141,7 +6140,7 @@
KERNEL SELFTEST FRAMEWORK
M: Shuah Khan <shuahkh@osg.samsung.com>
-L: linux-api@vger.kernel.org
+L: linux-kselftest@vger.kernel.org
T: git git://git.kernel.org/pub/scm/shuah/linux-kselftest
S: Maintained
F: tools/testing/selftests
@@ -7367,7 +7366,7 @@
F: include/linux/isicom.h
MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER
-M: Felipe Balbi <balbi@kernel.org>
+M: Bin Liu <b-liu@ti.com>
L: linux-usb@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
S: Maintained
@@ -7699,13 +7698,13 @@
F: arch/nios2/
NOKIA N900 POWER SUPPLY DRIVERS
-M: Pali Rohár <pali.rohar@gmail.com>
-S: Maintained
+R: Pali Rohár <pali.rohar@gmail.com>
F: include/linux/power/bq2415x_charger.h
F: include/linux/power/bq27xxx_battery.h
F: include/linux/power/isp1704_charger.h
F: drivers/power/bq2415x_charger.c
F: drivers/power/bq27xxx_battery.c
+F: drivers/power/bq27xxx_battery_i2c.c
F: drivers/power/isp1704_charger.c
F: drivers/power/rx51_battery.c
@@ -7936,11 +7935,9 @@
F: drivers/staging/media/omap4iss/
OMAP USB SUPPORT
-M: Felipe Balbi <balbi@kernel.org>
L: linux-usb@vger.kernel.org
L: linux-omap@vger.kernel.org
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git
-S: Maintained
+S: Orphan
F: drivers/usb/*/*omap*
F: arch/arm/*omap*/usb*
@@ -9578,6 +9575,12 @@
S: Maintained
F: drivers/thunderbolt/
+TI BQ27XXX POWER SUPPLY DRIVER
+R: Andrew F. Davis <afd@ti.com>
+F: include/linux/power/bq27xxx_battery.h
+F: drivers/power/bq27xxx_battery.c
+F: drivers/power/bq27xxx_battery_i2c.c
+
TIMEKEEPING, CLOCKSOURCE CORE, NTP, ALARMTIMER
M: John Stultz <john.stultz@linaro.org>
M: Thomas Gleixner <tglx@linutronix.de>
@@ -9799,10 +9802,11 @@
F: drivers/scsi/be2iscsi/
Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER
-M: Sathya Perla <sathya.perla@avagotech.com>
-M: Ajit Khaparde <ajit.khaparde@avagotech.com>
-M: Padmanabh Ratnakar <padmanabh.ratnakar@avagotech.com>
-M: Sriharsha Basavapatna <sriharsha.basavapatna@avagotech.com>
+M: Sathya Perla <sathya.perla@broadcom.com>
+M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+M: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com>
+M: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
+M: Somnath Kotur <somnath.kotur@broadcom.com>
L: netdev@vger.kernel.org
W: http://www.emulex.com
S: Supported
@@ -12019,7 +12023,6 @@
F: arch/arm64/include/asm/xen/
XEN NETWORK BACKEND DRIVER
-M: Ian Campbell <ian.campbell@citrix.com>
M: Wei Liu <wei.liu2@citrix.com>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
L: netdev@vger.kernel.org
diff --git a/Makefile b/Makefile
index 6828408..af6e5f8 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
VERSION = 4
PATCHLEVEL = 5
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc6
NAME = Blurry Fish Butt
# *DOCUMENTATION*
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 76dde9d..8a188bc 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -12,8 +12,6 @@
select BUILDTIME_EXTABLE_SORT
select COMMON_CLK
select CLONE_BACKWARDS
- # ARC Busybox based initramfs absolutely relies on DEVTMPFS for /dev
- select DEVTMPFS if !INITRAMFS_SOURCE=""
select GENERIC_ATOMIC64
select GENERIC_CLOCKEVENTS
select GENERIC_FIND_FIRST_BIT
@@ -275,14 +273,6 @@
default "0xA0000000"
depends on ARC_HAS_DCCM
-config ARC_HAS_HW_MPY
- bool "Use Hardware Multiplier (Normal or Faster XMAC)"
- default y
- help
- Influences how gcc generates code for MPY operations.
- If enabled, MPYxx insns are generated, provided by Standard/XMAC
- Multipler. Otherwise software multipy lib is used
-
choice
prompt "MMU Version"
default ARC_MMU_V3 if ARC_CPU_770
@@ -338,6 +328,19 @@
endchoice
+choice
+ prompt "MMU Super Page Size"
+ depends on ISA_ARCV2 && TRANSPARENT_HUGEPAGE
+ default ARC_HUGEPAGE_2M
+
+config ARC_HUGEPAGE_2M
+ bool "2MB"
+
+config ARC_HUGEPAGE_16M
+ bool "16MB"
+
+endchoice
+
if ISA_ARCOMPACT
config ARC_COMPACT_IRQ_LEVELS
@@ -410,7 +413,7 @@
default n
depends on !SMP
-config ARC_HAS_GRTC
+config ARC_HAS_GFRC
bool "SMP synchronized 64-bit cycle counter"
default y
depends on SMP
@@ -529,14 +532,6 @@
Counts number of I and D TLB Misses and exports them via Debugfs
The counters can be cleared via Debugfs as well
-if SMP
-
-config ARC_IPI_DBG
- bool "Debug Inter Core interrupts"
- default n
-
-endif
-
endif
config ARC_UBOOT_SUPPORT
@@ -566,6 +561,12 @@
endmenu # "ARC Architecture Configuration"
source "mm/Kconfig"
+
+config FORCE_MAX_ZONEORDER
+ int "Maximum zone order"
+ default "12" if ARC_HUGEPAGE_16M
+ default "11"
+
source "net/Kconfig"
source "drivers/Kconfig"
source "fs/Kconfig"
diff --git a/arch/arc/Makefile b/arch/arc/Makefile
index aeb1902..c8230f3 100644
--- a/arch/arc/Makefile
+++ b/arch/arc/Makefile
@@ -74,10 +74,6 @@
# --build-id w/o "-marclinux". Default arc-elf32-ld is OK
ldflags-$(upto_gcc44) += -marclinux
-ifndef CONFIG_ARC_HAS_HW_MPY
- cflags-y += -mno-mpy
-endif
-
LIBGCC := $(shell $(CC) $(cflags-y) --print-libgcc-file-name)
# Modules with short calls might break for calls into builtin-kernel
diff --git a/arch/arc/configs/axs101_defconfig b/arch/arc/configs/axs101_defconfig
index f1ac981..5d4e2a0 100644
--- a/arch/arc/configs/axs101_defconfig
+++ b/arch/arc/configs/axs101_defconfig
@@ -39,6 +39,7 @@
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
@@ -73,7 +74,6 @@
CONFIG_I2C_DESIGNWARE_PLATFORM=y
# CONFIG_HWMON is not set
CONFIG_FB=y
-# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_LOGO=y
@@ -91,12 +91,10 @@
CONFIG_MMC_DW=y
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT3_FS=y
-CONFIG_EXT4_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_NTFS_FS=y
CONFIG_TMPFS=y
-CONFIG_JFFS2_FS=y
CONFIG_NFS_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
diff --git a/arch/arc/configs/axs103_defconfig b/arch/arc/configs/axs103_defconfig
index 323486d..87ee46b 100644
--- a/arch/arc/configs/axs103_defconfig
+++ b/arch/arc/configs/axs103_defconfig
@@ -39,14 +39,10 @@
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
-CONFIG_MTD=y
-CONFIG_MTD_CMDLINE_PARTS=y
-CONFIG_MTD_BLOCK=y
-CONFIG_MTD_NAND=y
-CONFIG_MTD_NAND_AXS=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_NETDEVICES=y
@@ -78,14 +74,12 @@
CONFIG_I2C_DESIGNWARE_PLATFORM=y
# CONFIG_HWMON is not set
CONFIG_FB=y
-# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
# CONFIG_LOGO_LINUX_CLUT224 is not set
-CONFIG_USB=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_HCD_PLATFORM=y
CONFIG_USB_OHCI_HCD=y
@@ -97,12 +91,10 @@
CONFIG_MMC_DW=y
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT3_FS=y
-CONFIG_EXT4_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_NTFS_FS=y
CONFIG_TMPFS=y
-CONFIG_JFFS2_FS=y
CONFIG_NFS_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
diff --git a/arch/arc/configs/axs103_smp_defconfig b/arch/arc/configs/axs103_smp_defconfig
index 66191cd..d80daf4 100644
--- a/arch/arc/configs/axs103_smp_defconfig
+++ b/arch/arc/configs/axs103_smp_defconfig
@@ -40,14 +40,10 @@
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_IPV6 is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
-CONFIG_MTD=y
-CONFIG_MTD_CMDLINE_PARTS=y
-CONFIG_MTD_BLOCK=y
-CONFIG_MTD_NAND=y
-CONFIG_MTD_NAND_AXS=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_NETDEVICES=y
@@ -79,14 +75,12 @@
CONFIG_I2C_DESIGNWARE_PLATFORM=y
# CONFIG_HWMON is not set
CONFIG_FB=y
-# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
# CONFIG_LOGO_LINUX_CLUT224 is not set
-CONFIG_USB=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_HCD_PLATFORM=y
CONFIG_USB_OHCI_HCD=y
@@ -98,12 +92,10 @@
CONFIG_MMC_DW=y
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT3_FS=y
-CONFIG_EXT4_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_NTFS_FS=y
CONFIG_TMPFS=y
-CONFIG_JFFS2_FS=y
CONFIG_NFS_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
diff --git a/arch/arc/configs/nsim_700_defconfig b/arch/arc/configs/nsim_700_defconfig
index 138f9d8..f410953 100644
--- a/arch/arc/configs/nsim_700_defconfig
+++ b/arch/arc/configs/nsim_700_defconfig
@@ -4,6 +4,7 @@
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_HIGH_RES_TIMERS=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
@@ -26,7 +27,6 @@
CONFIG_ARC_BUILTIN_DTB_NAME="nsim_700"
CONFIG_PREEMPT=y
# CONFIG_COMPACTION is not set
-# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
@@ -34,6 +34,7 @@
CONFIG_NET_KEY=y
CONFIG_INET=y
# CONFIG_IPV6 is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
@@ -51,7 +52,6 @@
CONFIG_SERIAL_ARC_CONSOLE=y
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
-# CONFIG_VGA_CONSOLE is not set
# CONFIG_HID is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
@@ -63,4 +63,3 @@
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_DEBUG_PREEMPT is not set
-CONFIG_XZ_DEC=y
diff --git a/arch/arc/configs/nsim_hs_defconfig b/arch/arc/configs/nsim_hs_defconfig
index f68838e..cfaa33c 100644
--- a/arch/arc/configs/nsim_hs_defconfig
+++ b/arch/arc/configs/nsim_hs_defconfig
@@ -35,6 +35,7 @@
CONFIG_NET_KEY=y
CONFIG_INET=y
# CONFIG_IPV6 is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
@@ -49,7 +50,6 @@
CONFIG_SERIAL_ARC_CONSOLE=y
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
-# CONFIG_VGA_CONSOLE is not set
# CONFIG_HID is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
@@ -61,4 +61,3 @@
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
# CONFIG_DEBUG_PREEMPT is not set
-CONFIG_XZ_DEC=y
diff --git a/arch/arc/configs/nsim_hs_smp_defconfig b/arch/arc/configs/nsim_hs_smp_defconfig
index 96bd1c2..bb2a8dc 100644
--- a/arch/arc/configs/nsim_hs_smp_defconfig
+++ b/arch/arc/configs/nsim_hs_smp_defconfig
@@ -2,6 +2,7 @@
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_DEFAULT_HOSTNAME="ARCLinux"
# CONFIG_SWAP is not set
+# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_HIGH_RES_TIMERS=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
@@ -21,13 +22,11 @@
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
-CONFIG_ARC_BOARD_ML509=y
CONFIG_ISA_ARCV2=y
CONFIG_SMP=y
CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs_idu"
CONFIG_PREEMPT=y
# CONFIG_COMPACTION is not set
-# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
@@ -35,6 +34,7 @@
CONFIG_NET_KEY=y
CONFIG_INET=y
# CONFIG_IPV6 is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
@@ -49,7 +49,6 @@
CONFIG_SERIAL_ARC_CONSOLE=y
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
-# CONFIG_VGA_CONSOLE is not set
# CONFIG_HID is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_IOMMU_SUPPORT is not set
@@ -60,4 +59,3 @@
CONFIG_NFS_FS=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
-CONFIG_XZ_DEC=y
diff --git a/arch/arc/configs/nsimosci_defconfig b/arch/arc/configs/nsimosci_defconfig
index 31e1d95..646182e 100644
--- a/arch/arc/configs/nsimosci_defconfig
+++ b/arch/arc/configs/nsimosci_defconfig
@@ -33,6 +33,7 @@
CONFIG_NET_KEY=y
CONFIG_INET=y
# CONFIG_IPV6 is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
@@ -58,7 +59,6 @@
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
CONFIG_FB=y
-# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_LOGO=y
# CONFIG_HID is not set
diff --git a/arch/arc/configs/nsimosci_hs_defconfig b/arch/arc/configs/nsimosci_hs_defconfig
index fcae666..ceca254 100644
--- a/arch/arc/configs/nsimosci_hs_defconfig
+++ b/arch/arc/configs/nsimosci_hs_defconfig
@@ -34,12 +34,12 @@
CONFIG_NET_KEY=y
CONFIG_INET=y
# CONFIG_IPV6 is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
# CONFIG_BLK_DEV is not set
CONFIG_NETDEVICES=y
-CONFIG_NET_OSCI_LAN=y
CONFIG_INPUT_EVDEV=y
# CONFIG_MOUSE_PS2_ALPS is not set
# CONFIG_MOUSE_PS2_LOGIPS2PP is not set
@@ -58,7 +58,6 @@
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
CONFIG_FB=y
-# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_LOGO=y
# CONFIG_HID is not set
diff --git a/arch/arc/configs/nsimosci_hs_smp_defconfig b/arch/arc/configs/nsimosci_hs_smp_defconfig
index b01b659..4b6da90 100644
--- a/arch/arc/configs/nsimosci_hs_smp_defconfig
+++ b/arch/arc/configs/nsimosci_hs_smp_defconfig
@@ -2,6 +2,7 @@
CONFIG_DEFAULT_HOSTNAME="ARCLinux"
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
+# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_IKCONFIG=y
@@ -18,15 +19,11 @@
# CONFIG_IOSCHED_DEADLINE is not set
# CONFIG_IOSCHED_CFQ is not set
CONFIG_ARC_PLAT_SIM=y
-CONFIG_ARC_BOARD_ML509=y
CONFIG_ISA_ARCV2=y
CONFIG_SMP=y
-CONFIG_ARC_HAS_LL64=y
-# CONFIG_ARC_HAS_RTSC is not set
CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs_idu"
CONFIG_PREEMPT=y
# CONFIG_COMPACTION is not set
-# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=y
@@ -40,6 +37,7 @@
# CONFIG_INET_LRO is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
# CONFIG_FIRMWARE_IN_KERNEL is not set
@@ -56,14 +54,11 @@
# CONFIG_NET_VENDOR_STMICRO is not set
# CONFIG_NET_VENDOR_VIA is not set
# CONFIG_NET_VENDOR_WIZNET is not set
-CONFIG_NET_OSCI_LAN=y
# CONFIG_WLAN is not set
CONFIG_INPUT_EVDEV=y
CONFIG_MOUSE_PS2_TOUCHKIT=y
# CONFIG_SERIO_SERPORT is not set
-CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_ARC_PS2=y
-CONFIG_VT_HW_CONSOLE_BINDING=y
# CONFIG_LEGACY_PTYS is not set
# CONFIG_DEVKMEM is not set
CONFIG_SERIAL_8250=y
@@ -75,9 +70,6 @@
# CONFIG_HW_RANDOM is not set
# CONFIG_HWMON is not set
CONFIG_FB=y
-CONFIG_ARCPGU_RGB888=y
-CONFIG_ARCPGU_DISPTYPE=0
-# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_LOGO=y
# CONFIG_HID is not set
diff --git a/arch/arc/configs/tb10x_defconfig b/arch/arc/configs/tb10x_defconfig
index 3b4dc9c..9b342ea 100644
--- a/arch/arc/configs/tb10x_defconfig
+++ b/arch/arc/configs/tb10x_defconfig
@@ -3,6 +3,7 @@
CONFIG_DEFAULT_HOSTNAME="tb10x"
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
@@ -26,12 +27,10 @@
# CONFIG_BLOCK is not set
CONFIG_ARC_PLAT_TB10X=y
CONFIG_ARC_CACHE_LINE_SHIFT=5
-CONFIG_ARC_STACK_NONEXEC=y
CONFIG_HZ=250
CONFIG_ARC_BUILTIN_DTB_NAME="abilis_tb100_dvk"
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_COMPACTION is not set
-# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
@@ -44,8 +43,8 @@
# CONFIG_INET_DIAG is not set
# CONFIG_IPV6 is not set
# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
# CONFIG_FIRMWARE_IN_KERNEL is not set
-CONFIG_PROC_DEVICETREE=y
CONFIG_NETDEVICES=y
# CONFIG_NET_CADENCE is not set
# CONFIG_NET_VENDOR_BROADCOM is not set
@@ -55,9 +54,6 @@
# CONFIG_NET_VENDOR_NATSEMI is not set
# CONFIG_NET_VENDOR_SEEQ is not set
CONFIG_STMMAC_ETH=y
-CONFIG_STMMAC_DEBUG_FS=y
-CONFIG_STMMAC_DA=y
-CONFIG_STMMAC_CHAINED=y
# CONFIG_NET_VENDOR_WIZNET is not set
# CONFIG_WLAN is not set
# CONFIG_INPUT is not set
@@ -91,7 +87,6 @@
CONFIG_LEDS_TRIGGER_TRANSIENT=y
CONFIG_DMADEVICES=y
CONFIG_DW_DMAC=y
-CONFIG_NET_DMA=y
CONFIG_ASYNC_TX_DMA=y
# CONFIG_IOMMU_SUPPORT is not set
# CONFIG_DNOTIFY is not set
@@ -100,17 +95,16 @@
CONFIG_CONFIGFS_FS=y
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_DEBUG_INFO=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
-CONFIG_MAGIC_SYSRQ=y
CONFIG_STRIP_ASM_SYMS=y
CONFIG_DEBUG_FS=y
CONFIG_HEADERS_CHECK=y
CONFIG_DEBUG_SECTION_MISMATCH=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_MEMORY_INIT=y
+CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
-CONFIG_DEBUG_INFO=y
-CONFIG_DEBUG_MEMORY_INIT=y
-CONFIG_DEBUG_STACKOVERFLOW=y
-# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_HW is not set
diff --git a/arch/arc/configs/vdk_hs38_smp_defconfig b/arch/arc/configs/vdk_hs38_smp_defconfig
index f36c047..7359859 100644
--- a/arch/arc/configs/vdk_hs38_smp_defconfig
+++ b/arch/arc/configs/vdk_hs38_smp_defconfig
@@ -16,7 +16,7 @@
CONFIG_AXS103=y
CONFIG_ISA_ARCV2=y
CONFIG_SMP=y
-# CONFIG_ARC_HAS_GRTC is not set
+# CONFIG_ARC_HAS_GFRC is not set
CONFIG_ARC_UBOOT_SUPPORT=y
CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp"
CONFIG_PREEMPT=y
diff --git a/arch/arc/include/asm/arcregs.h b/arch/arc/include/asm/arcregs.h
index 7fac7d8..f9f4c6f 100644
--- a/arch/arc/include/asm/arcregs.h
+++ b/arch/arc/include/asm/arcregs.h
@@ -10,7 +10,8 @@
#define _ASM_ARC_ARCREGS_H
/* Build Configuration Registers */
-#define ARC_REG_DCCMBASE_BCR 0x61 /* DCCM Base Addr */
+#define ARC_REG_AUX_DCCM 0x18 /* DCCM Base Addr ARCv2 */
+#define ARC_REG_DCCM_BASE_BUILD 0x61 /* DCCM Base Addr ARCompact */
#define ARC_REG_CRC_BCR 0x62
#define ARC_REG_VECBASE_BCR 0x68
#define ARC_REG_PERIBASE_BCR 0x69
@@ -18,10 +19,10 @@
#define ARC_REG_DPFP_BCR 0x6C /* ARCompact: Dbl Precision FPU */
#define ARC_REG_FP_V2_BCR 0xc8 /* ARCv2 FPU */
#define ARC_REG_SLC_BCR 0xce
-#define ARC_REG_DCCM_BCR 0x74 /* DCCM Present + SZ */
+#define ARC_REG_DCCM_BUILD 0x74 /* DCCM size (common) */
#define ARC_REG_TIMERS_BCR 0x75
#define ARC_REG_AP_BCR 0x76
-#define ARC_REG_ICCM_BCR 0x78
+#define ARC_REG_ICCM_BUILD 0x78 /* ICCM size (common) */
#define ARC_REG_XY_MEM_BCR 0x79
#define ARC_REG_MAC_BCR 0x7a
#define ARC_REG_MUL_BCR 0x7b
@@ -36,6 +37,7 @@
#define ARC_REG_IRQ_BCR 0xF3
#define ARC_REG_SMART_BCR 0xFF
#define ARC_REG_CLUSTER_BCR 0xcf
+#define ARC_REG_AUX_ICCM 0x208 /* ICCM Base Addr (ARCv2) */
/* status32 Bits Positions */
#define STATUS_AE_BIT 5 /* Exception active */
@@ -246,7 +248,7 @@
#endif
};
-struct bcr_iccm {
+struct bcr_iccm_arcompact {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int base:16, pad:5, sz:3, ver:8;
#else
@@ -254,17 +256,15 @@
#endif
};
-/* DCCM Base Address Register: ARC_REG_DCCMBASE_BCR */
-struct bcr_dccm_base {
+struct bcr_iccm_arcv2 {
#ifdef CONFIG_CPU_BIG_ENDIAN
- unsigned int addr:24, ver:8;
+ unsigned int pad:8, sz11:4, sz01:4, sz10:4, sz00:4, ver:8;
#else
- unsigned int ver:8, addr:24;
+ unsigned int ver:8, sz00:4, sz10:4, sz01:4, sz11:4, pad:8;
#endif
};
-/* DCCM RAM Configuration Register: ARC_REG_DCCM_BCR */
-struct bcr_dccm {
+struct bcr_dccm_arcompact {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int res:21, sz:3, ver:8;
#else
@@ -272,6 +272,14 @@
#endif
};
+struct bcr_dccm_arcv2 {
+#ifdef CONFIG_CPU_BIG_ENDIAN
+ unsigned int pad2:12, cyc:3, pad1:1, sz1:4, sz0:4, ver:8;
+#else
+ unsigned int ver:8, sz0:4, sz1:4, pad1:1, cyc:3, pad2:12;
+#endif
+};
+
/* ARCompact: Both SP and DP FPU BCRs have same format */
struct bcr_fp_arcompact {
#ifdef CONFIG_CPU_BIG_ENDIAN
@@ -315,9 +323,9 @@
struct bcr_generic {
#ifdef CONFIG_CPU_BIG_ENDIAN
- unsigned int pad:24, ver:8;
+ unsigned int info:24, ver:8;
#else
- unsigned int ver:8, pad:24;
+ unsigned int ver:8, info:24;
#endif
};
@@ -349,14 +357,13 @@
struct cpuinfo_arc_bpu bpu;
struct bcr_identity core;
struct bcr_isa isa;
- struct bcr_timer timers;
unsigned int vec_base;
struct cpuinfo_arc_ccm iccm, dccm;
struct {
unsigned int swap:1, norm:1, minmax:1, barrel:1, crc:1, pad1:3,
fpu_sp:1, fpu_dp:1, pad2:6,
debug:1, ap:1, smart:1, rtt:1, pad3:4,
- pad4:8;
+ timer0:1, timer1:1, rtc:1, gfrc:1, pad4:4;
} extn;
struct bcr_mpy extn_mpy;
struct bcr_extn_xymem extn_xymem;
diff --git a/arch/arc/include/asm/irq.h b/arch/arc/include/asm/irq.h
index 4fd7d62..49014f0 100644
--- a/arch/arc/include/asm/irq.h
+++ b/arch/arc/include/asm/irq.h
@@ -16,11 +16,9 @@
#ifdef CONFIG_ISA_ARCOMPACT
#define TIMER0_IRQ 3
#define TIMER1_IRQ 4
-#define IPI_IRQ (NR_CPU_IRQS-1) /* dummy to enable SMP build for up hardware */
#else
#define TIMER0_IRQ 16
#define TIMER1_IRQ 17
-#define IPI_IRQ 19
#endif
#include <linux/interrupt.h>
diff --git a/arch/arc/include/asm/irqflags-arcv2.h b/arch/arc/include/asm/irqflags-arcv2.h
index 258b0e5..37c2f75 100644
--- a/arch/arc/include/asm/irqflags-arcv2.h
+++ b/arch/arc/include/asm/irqflags-arcv2.h
@@ -22,6 +22,7 @@
#define AUX_IRQ_CTRL 0x00E
#define AUX_IRQ_ACT 0x043 /* Active Intr across all levels */
#define AUX_IRQ_LVL_PEND 0x200 /* Pending Intr across all levels */
+#define AUX_IRQ_HINT 0x201 /* For generating Soft Interrupts */
#define AUX_IRQ_PRIORITY 0x206
#define ICAUSE 0x40a
#define AUX_IRQ_SELECT 0x40b
@@ -30,8 +31,11 @@
/* Was Intr taken in User Mode */
#define AUX_IRQ_ACT_BIT_U 31
-/* 0 is highest level, but taken by FIRQs, if present in design */
-#define ARCV2_IRQ_DEF_PRIO 0
+/*
+ * User space should be interruptible even by the lowest prio interrupt
+ * Safe even if the actual number of interrupt priorities is fewer, or even one
+ */
+#define ARCV2_IRQ_DEF_PRIO 15
/* seed value for status register */
#define ISA_INIT_STATUS_BITS (STATUS_IE_MASK | STATUS_AD_MASK | \
@@ -112,6 +116,16 @@
return arch_irqs_disabled_flags(arch_local_save_flags());
}
+static inline void arc_softirq_trigger(int irq)
+{
+ write_aux_reg(AUX_IRQ_HINT, irq);
+}
+
+static inline void arc_softirq_clear(int irq)
+{
+ write_aux_reg(AUX_IRQ_HINT, 0);
+}
+
#else
.macro IRQ_DISABLE scratch
diff --git a/arch/arc/include/asm/mcip.h b/arch/arc/include/asm/mcip.h
index 46f4e53..847e3bb 100644
--- a/arch/arc/include/asm/mcip.h
+++ b/arch/arc/include/asm/mcip.h
@@ -39,8 +39,8 @@
#define CMD_DEBUG_SET_MASK 0x34
#define CMD_DEBUG_SET_SELECT 0x36
-#define CMD_GRTC_READ_LO 0x42
-#define CMD_GRTC_READ_HI 0x43
+#define CMD_GFRC_READ_LO 0x42
+#define CMD_GFRC_READ_HI 0x43
#define CMD_IDU_ENABLE 0x71
#define CMD_IDU_DISABLE 0x72
diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 57af2f0..d426d42 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -179,37 +179,44 @@
#define __S111 PAGE_U_X_W_R
/****************************************************************
- * Page Table Lookup split
+ * 2 tier (PGD:PTE) software page walker
*
- * We implement 2 tier paging and since this is all software, we are free
- * to customize the span of a PGD / PTE entry to suit us
- *
- * 32 bit virtual address
+ * [31] 32 bit virtual address [0]
* -------------------------------------------------------
- * | BITS_FOR_PGD | BITS_FOR_PTE | BITS_IN_PAGE |
+ * | | <------------ PGDIR_SHIFT ----------> |
+ * | | |
+ * | BITS_FOR_PGD | BITS_FOR_PTE | <-- PAGE_SHIFT --> |
* -------------------------------------------------------
* | | |
* | | --> off in page frame
- * | |
* | ---> index into Page Table
- * |
* ----> index into Page Directory
+ *
+ * In a single page size configuration, only PAGE_SHIFT is fixed
+ * So both PGD and PTE sizing can be tweaked
+ * e.g. 8K page (PAGE_SHIFT 13) can have
+ * - PGDIR_SHIFT 21 -> 11:8:13 address split
+ * - PGDIR_SHIFT 24 -> 8:11:13 address split
+ *
+ * If Super Page is configured, PGDIR_SHIFT becomes fixed too,
+ * so the sizing flexibility is gone.
*/
-#define BITS_IN_PAGE PAGE_SHIFT
-
-/* Optimal Sizing of Pg Tbl - based on MMU page size */
-#if defined(CONFIG_ARC_PAGE_SIZE_8K)
-#define BITS_FOR_PTE 8 /* 11:8:13 */
-#elif defined(CONFIG_ARC_PAGE_SIZE_16K)
-#define BITS_FOR_PTE 8 /* 10:8:14 */
-#elif defined(CONFIG_ARC_PAGE_SIZE_4K)
-#define BITS_FOR_PTE 9 /* 11:9:12 */
+#if defined(CONFIG_ARC_HUGEPAGE_16M)
+#define PGDIR_SHIFT 24
+#elif defined(CONFIG_ARC_HUGEPAGE_2M)
+#define PGDIR_SHIFT 21
+#else
+/*
+ * Only Normal page support so "hackable" (see comment above)
+ * Default value provides 11:8:13 (8K), 11:9:12 (4K)
+ */
+#define PGDIR_SHIFT 21
#endif
-#define BITS_FOR_PGD (32 - BITS_FOR_PTE - BITS_IN_PAGE)
+#define BITS_FOR_PTE (PGDIR_SHIFT - PAGE_SHIFT)
+#define BITS_FOR_PGD (32 - PGDIR_SHIFT)
-#define PGDIR_SHIFT (32 - BITS_FOR_PGD)
#define PGDIR_SIZE (1UL << PGDIR_SHIFT) /* vaddr span, not PDG sz */
#define PGDIR_MASK (~(PGDIR_SIZE-1))
diff --git a/arch/arc/kernel/entry-arcv2.S b/arch/arc/kernel/entry-arcv2.S
index cbfec79..c126460 100644
--- a/arch/arc/kernel/entry-arcv2.S
+++ b/arch/arc/kernel/entry-arcv2.S
@@ -45,11 +45,12 @@
VECTOR handle_interrupt ; (16) Timer0
VECTOR handle_interrupt ; unused (Timer1)
VECTOR handle_interrupt ; unused (WDT)
-VECTOR handle_interrupt ; (19) ICI (inter core interrupt)
-VECTOR handle_interrupt
-VECTOR handle_interrupt
-VECTOR handle_interrupt
-VECTOR handle_interrupt ; (23) End of fixed IRQs
+VECTOR handle_interrupt ; (19) Inter core Interrupt (IPI)
+VECTOR handle_interrupt ; (20) perf Interrupt
+VECTOR handle_interrupt ; (21) Software Triggered Intr (Self IPI)
+VECTOR handle_interrupt ; unused
+VECTOR handle_interrupt ; (23) unused
+# End of fixed IRQs
.rept CONFIG_ARC_NUMBER_OF_INTERRUPTS - 8
VECTOR handle_interrupt
@@ -211,7 +212,11 @@
; (since IRQ NOT allowed in DS in ARCv2, this can only happen if orig
; entry was via Exception in DS which got preempted in kernel).
;
-; IRQ RTIE won't reliably restore DE bit and/or BTA, needs handling
+; IRQ RTIE won't reliably restore DE bit and/or BTA, needs workaround
+;
+; Solution is return from Intr w/o any delay slot quirks into a kernel trampoline
+; and from pure kernel mode return to delay slot which handles DS bit/BTA correctly
+
.Lintr_ret_to_delay_slot:
debug_marker_ds:
@@ -222,18 +227,23 @@
ld r2, [sp, PT_ret]
ld r3, [sp, PT_status32]
+ ; STAT32 for Int return created from scratch
+ ; (No delay slot, disable further intr in trampoline)
+
bic r0, r3, STATUS_U_MASK|STATUS_DE_MASK|STATUS_IE_MASK|STATUS_L_MASK
st r0, [sp, PT_status32]
mov r1, .Lintr_ret_to_delay_slot_2
st r1, [sp, PT_ret]
+ ; Orig exception PC/STAT32 safekept @orig_r0 and @event stack slots
st r2, [sp, 0]
st r3, [sp, 4]
b .Lisr_ret_fast_path
.Lintr_ret_to_delay_slot_2:
+ ; Trampoline to restore orig exception PC/STAT32/BTA/AUX_USER_SP
sub sp, sp, SZ_PT_REGS
st r9, [sp, -4]
@@ -243,11 +253,19 @@
ld r9, [sp, 4]
sr r9, [erstatus]
+ ; restore AUX_USER_SP if returning to U mode
+ bbit0 r9, STATUS_U_BIT, 1f
+ ld r9, [sp, PT_sp]
+ sr r9, [AUX_USER_SP]
+
+1:
ld r9, [sp, 8]
sr r9, [erbta]
ld r9, [sp, -4]
add sp, sp, SZ_PT_REGS
+
+ ; return from pure kernel mode to delay slot
rtie
END(ret_from_exception)
diff --git a/arch/arc/kernel/intc-arcv2.c b/arch/arc/kernel/intc-arcv2.c
index 0394f9f..9425263 100644
--- a/arch/arc/kernel/intc-arcv2.c
+++ b/arch/arc/kernel/intc-arcv2.c
@@ -14,6 +14,8 @@
#include <linux/irqchip.h>
#include <asm/irq.h>
+static int irq_prio;
+
/*
* Early Hardware specific Interrupt setup
* -Called very early (start_kernel -> setup_arch -> setup_processor)
@@ -24,6 +26,14 @@
{
unsigned int tmp;
+ struct irq_build {
+#ifdef CONFIG_CPU_BIG_ENDIAN
+ unsigned int pad:3, firq:1, prio:4, exts:8, irqs:8, ver:8;
+#else
+ unsigned int ver:8, irqs:8, exts:8, prio:4, firq:1, pad:3;
+#endif
+ } irq_bcr;
+
struct aux_irq_ctrl {
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int res3:18, save_idx_regs:1, res2:1,
@@ -46,28 +56,25 @@
WRITE_AUX(AUX_IRQ_CTRL, ictrl);
- /* setup status32, don't enable intr yet as kernel doesn't want */
- tmp = read_aux_reg(0xa);
- tmp |= ISA_INIT_STATUS_BITS;
- tmp &= ~STATUS_IE_MASK;
- asm volatile("flag %0 \n"::"r"(tmp));
-
/*
* ARCv2 core intc provides multiple interrupt priorities (upto 16).
* Typical builds though have only two levels (0-high, 1-low)
* Linux by default uses lower prio 1 for most irqs, reserving 0 for
* NMI style interrupts in future (say perf)
- *
- * Read the intc BCR to confirm that Linux default priority is avail
- * in h/w
- *
- * Note:
- * IRQ_BCR[27..24] contains N-1 (for N priority levels) and prio level
- * is 0 based.
*/
- tmp = (read_aux_reg(ARC_REG_IRQ_BCR) >> 24 ) & 0xF;
- if (ARCV2_IRQ_DEF_PRIO > tmp)
- panic("Linux default irq prio incorrect\n");
+
+ READ_BCR(ARC_REG_IRQ_BCR, irq_bcr);
+
+ irq_prio = irq_bcr.prio; /* Encoded as N-1 for N levels */
+ pr_info("archs-intc\t: %d priority levels (default %d)%s\n",
+ irq_prio + 1, irq_prio,
+ irq_bcr.firq ? " FIRQ (not used)":"");
+
+ /* setup status32, don't enable intr yet as kernel doesn't want */
+ tmp = read_aux_reg(0xa);
+ tmp |= STATUS_AD_MASK | (irq_prio << 1);
+ tmp &= ~STATUS_IE_MASK;
+ asm volatile("flag %0 \n"::"r"(tmp));
}
static void arcv2_irq_mask(struct irq_data *data)
@@ -86,7 +93,7 @@
{
/* set default priority */
write_aux_reg(AUX_IRQ_SELECT, data->irq);
- write_aux_reg(AUX_IRQ_PRIORITY, ARCV2_IRQ_DEF_PRIO);
+ write_aux_reg(AUX_IRQ_PRIORITY, irq_prio);
/*
* hw auto enables (linux unmask) all by default
diff --git a/arch/arc/kernel/intc-compact.c b/arch/arc/kernel/intc-compact.c
index 06bcedf..224d1c3 100644
--- a/arch/arc/kernel/intc-compact.c
+++ b/arch/arc/kernel/intc-compact.c
@@ -81,9 +81,6 @@
{
switch (irq) {
case TIMER0_IRQ:
-#ifdef CONFIG_SMP
- case IPI_IRQ:
-#endif
irq_set_chip_and_handler(irq, &onchip_intc, handle_percpu_irq);
break;
default:
diff --git a/arch/arc/kernel/mcip.c b/arch/arc/kernel/mcip.c
index bd237ac..c41c364 100644
--- a/arch/arc/kernel/mcip.c
+++ b/arch/arc/kernel/mcip.c
@@ -11,9 +11,13 @@
#include <linux/smp.h>
#include <linux/irq.h>
#include <linux/spinlock.h>
+#include <asm/irqflags-arcv2.h>
#include <asm/mcip.h>
#include <asm/setup.h>
+#define IPI_IRQ 19
+#define SOFTIRQ_IRQ 21
+
static char smp_cpuinfo_buf[128];
static int idu_detected;
@@ -22,6 +26,7 @@
static void mcip_setup_per_cpu(int cpu)
{
smp_ipi_irq_setup(cpu, IPI_IRQ);
+ smp_ipi_irq_setup(cpu, SOFTIRQ_IRQ);
}
static void mcip_ipi_send(int cpu)
@@ -29,46 +34,44 @@
unsigned long flags;
int ipi_was_pending;
+ /* ARConnect can only send IPI to others */
+ if (unlikely(cpu == raw_smp_processor_id())) {
+ arc_softirq_trigger(SOFTIRQ_IRQ);
+ return;
+ }
+
+ raw_spin_lock_irqsave(&mcip_lock, flags);
+
/*
- * NOTE: We must spin here if the other cpu hasn't yet
- * serviced a previous message. This can burn lots
- * of time, but we MUST follows this protocol or
- * ipi messages can be lost!!!
- * Also, we must release the lock in this loop because
- * the other side may get to this same loop and not
- * be able to ack -- thus causing deadlock.
+	 * If the receiver already has a pending interrupt, elide sending this
+	 * one: Linux cross-core calling copes fine with concurrent IPIs
+	 * coalesced into one.
+	 * See arch/arc/kernel/smp.c: ipi_send_msg_one()
*/
+ __mcip_cmd(CMD_INTRPT_READ_STATUS, cpu);
+ ipi_was_pending = read_aux_reg(ARC_REG_MCIP_READBACK);
+ if (!ipi_was_pending)
+ __mcip_cmd(CMD_INTRPT_GENERATE_IRQ, cpu);
- do {
- raw_spin_lock_irqsave(&mcip_lock, flags);
- __mcip_cmd(CMD_INTRPT_READ_STATUS, cpu);
- ipi_was_pending = read_aux_reg(ARC_REG_MCIP_READBACK);
- if (ipi_was_pending == 0)
- break; /* break out but keep lock */
- raw_spin_unlock_irqrestore(&mcip_lock, flags);
- } while (1);
-
- __mcip_cmd(CMD_INTRPT_GENERATE_IRQ, cpu);
raw_spin_unlock_irqrestore(&mcip_lock, flags);
-
-#ifdef CONFIG_ARC_IPI_DBG
- if (ipi_was_pending)
- pr_info("IPI ACK delayed from cpu %d\n", cpu);
-#endif
}
static void mcip_ipi_clear(int irq)
{
unsigned int cpu, c;
unsigned long flags;
- unsigned int __maybe_unused copy;
+
+ if (unlikely(irq == SOFTIRQ_IRQ)) {
+ arc_softirq_clear(irq);
+ return;
+ }
raw_spin_lock_irqsave(&mcip_lock, flags);
/* Who sent the IPI */
__mcip_cmd(CMD_INTRPT_CHECK_SOURCE, 0);
- copy = cpu = read_aux_reg(ARC_REG_MCIP_READBACK); /* 1,2,4,8... */
+ cpu = read_aux_reg(ARC_REG_MCIP_READBACK); /* 1,2,4,8... */
/*
* In rare case, multiple concurrent IPIs sent to same target can
@@ -82,12 +85,6 @@
} while (cpu);
raw_spin_unlock_irqrestore(&mcip_lock, flags);
-
-#ifdef CONFIG_ARC_IPI_DBG
- if (c != __ffs(copy))
- pr_info("IPIs from %x coalesced to %x\n",
- copy, raw_smp_processor_id());
-#endif
}
static void mcip_probe_n_setup(void)
@@ -96,13 +93,13 @@
#ifdef CONFIG_CPU_BIG_ENDIAN
unsigned int pad3:8,
idu:1, llm:1, num_cores:6,
- iocoh:1, grtc:1, dbg:1, pad2:1,
+ iocoh:1, gfrc:1, dbg:1, pad2:1,
msg:1, sem:1, ipi:1, pad:1,
ver:8;
#else
unsigned int ver:8,
pad:1, ipi:1, sem:1, msg:1,
- pad2:1, dbg:1, grtc:1, iocoh:1,
+ pad2:1, dbg:1, gfrc:1, iocoh:1,
num_cores:6, llm:1, idu:1,
pad3:8;
#endif
@@ -111,12 +108,13 @@
READ_BCR(ARC_REG_MCIP_BCR, mp);
sprintf(smp_cpuinfo_buf,
- "Extn [SMP]\t: ARConnect (v%d): %d cores with %s%s%s%s\n",
+ "Extn [SMP]\t: ARConnect (v%d): %d cores with %s%s%s%s%s\n",
mp.ver, mp.num_cores,
IS_AVAIL1(mp.ipi, "IPI "),
IS_AVAIL1(mp.idu, "IDU "),
+ IS_AVAIL1(mp.llm, "LLM "),
IS_AVAIL1(mp.dbg, "DEBUG "),
- IS_AVAIL1(mp.grtc, "GRTC"));
+ IS_AVAIL1(mp.gfrc, "GFRC"));
idu_detected = mp.idu;
@@ -125,8 +123,8 @@
__mcip_cmd_data(CMD_DEBUG_SET_MASK, 0xf, 0xf);
}
- if (IS_ENABLED(CONFIG_ARC_HAS_GRTC) && !mp.grtc)
- panic("kernel trying to use non-existent GRTC\n");
+ if (IS_ENABLED(CONFIG_ARC_HAS_GFRC) && !mp.gfrc)
+ panic("kernel trying to use non-existent GFRC\n");
}
struct plat_smp_ops plat_smp_ops = {
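[Editor's sketch] The rewritten mcip_ipi_send() elides the doorbell when the target already has an interrupt pending, since the receiver drains all queued messages in one pass. A rough userspace analogue of that coalescing idea using an atomic pending flag (all names here are illustrative, not MCIP API):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool ipi_pending;         /* stands in for CMD_INTRPT_READ_STATUS */

    static void expensive_doorbell(void)
    {
            puts("doorbell rung");          /* stands in for CMD_INTRPT_GENERATE_IRQ */
    }

    static void ipi_send(void)
    {
            /* only ring the doorbell if no IPI is already pending */
            if (!atomic_exchange(&ipi_pending, true))
                    expensive_doorbell();
    }

    static void ipi_receive(void)
    {
            atomic_store(&ipi_pending, false);      /* ack */
            /* ... drain all queued messages in one pass ... */
    }

    int main(void)
    {
            ipi_send();
            ipi_send();     /* coalesced: no second doorbell */
            ipi_receive();
            return 0;
    }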
diff --git a/arch/arc/kernel/setup.c b/arch/arc/kernel/setup.c
index e1b8744..cdc821d 100644
--- a/arch/arc/kernel/setup.c
+++ b/arch/arc/kernel/setup.c
@@ -42,9 +42,57 @@
struct cpuinfo_arc cpuinfo_arc700[NR_CPUS];
+static void read_decode_ccm_bcr(struct cpuinfo_arc *cpu)
+{
+ if (is_isa_arcompact()) {
+ struct bcr_iccm_arcompact iccm;
+ struct bcr_dccm_arcompact dccm;
+
+ READ_BCR(ARC_REG_ICCM_BUILD, iccm);
+ if (iccm.ver) {
+ cpu->iccm.sz = 4096 << iccm.sz; /* 8K to 512K */
+ cpu->iccm.base_addr = iccm.base << 16;
+ }
+
+ READ_BCR(ARC_REG_DCCM_BUILD, dccm);
+ if (dccm.ver) {
+ unsigned long base;
+ cpu->dccm.sz = 2048 << dccm.sz; /* 2K to 256K */
+
+ base = read_aux_reg(ARC_REG_DCCM_BASE_BUILD);
+ cpu->dccm.base_addr = base & ~0xF;
+ }
+ } else {
+ struct bcr_iccm_arcv2 iccm;
+ struct bcr_dccm_arcv2 dccm;
+ unsigned long region;
+
+ READ_BCR(ARC_REG_ICCM_BUILD, iccm);
+ if (iccm.ver) {
+ cpu->iccm.sz = 256 << iccm.sz00; /* 512B to 16M */
+ if (iccm.sz00 == 0xF && iccm.sz01 > 0)
+ cpu->iccm.sz <<= iccm.sz01;
+
+ region = read_aux_reg(ARC_REG_AUX_ICCM);
+ cpu->iccm.base_addr = region & 0xF0000000;
+ }
+
+ READ_BCR(ARC_REG_DCCM_BUILD, dccm);
+ if (dccm.ver) {
+ cpu->dccm.sz = 256 << dccm.sz0;
+ if (dccm.sz0 == 0xF && dccm.sz1 > 0)
+ cpu->dccm.sz <<= dccm.sz1;
+
+ region = read_aux_reg(ARC_REG_AUX_DCCM);
+ cpu->dccm.base_addr = region & 0xF0000000;
+ }
+ }
+}
+
static void read_arc_build_cfg_regs(void)
{
struct bcr_perip uncached_space;
+ struct bcr_timer timer;
struct bcr_generic bcr;
struct cpuinfo_arc *cpu = &cpuinfo_arc700[smp_processor_id()];
unsigned long perip_space;
@@ -53,7 +101,11 @@
READ_BCR(AUX_IDENTITY, cpu->core);
READ_BCR(ARC_REG_ISA_CFG_BCR, cpu->isa);
- READ_BCR(ARC_REG_TIMERS_BCR, cpu->timers);
+ READ_BCR(ARC_REG_TIMERS_BCR, timer);
+ cpu->extn.timer0 = timer.t0;
+ cpu->extn.timer1 = timer.t1;
+ cpu->extn.rtc = timer.rtc;
+
cpu->vec_base = read_aux_reg(AUX_INTR_VEC_BASE);
READ_BCR(ARC_REG_D_UNCACH_BCR, uncached_space);
@@ -71,36 +123,11 @@
cpu->extn.swap = read_aux_reg(ARC_REG_SWAP_BCR) ? 1 : 0; /* 1,3 */
cpu->extn.crc = read_aux_reg(ARC_REG_CRC_BCR) ? 1 : 0;
cpu->extn.minmax = read_aux_reg(ARC_REG_MIXMAX_BCR) > 1 ? 1 : 0; /* 2 */
-
- /* Note that we read the CCM BCRs independent of kernel config
- * This is to catch the cases where user doesn't know that
- * CCMs are present in hardware build
- */
- {
- struct bcr_iccm iccm;
- struct bcr_dccm dccm;
- struct bcr_dccm_base dccm_base;
- unsigned int bcr_32bit_val;
-
- bcr_32bit_val = read_aux_reg(ARC_REG_ICCM_BCR);
- if (bcr_32bit_val) {
- iccm = *((struct bcr_iccm *)&bcr_32bit_val);
- cpu->iccm.base_addr = iccm.base << 16;
- cpu->iccm.sz = 0x2000 << (iccm.sz - 1);
- }
-
- bcr_32bit_val = read_aux_reg(ARC_REG_DCCM_BCR);
- if (bcr_32bit_val) {
- dccm = *((struct bcr_dccm *)&bcr_32bit_val);
- cpu->dccm.sz = 0x800 << (dccm.sz);
-
- READ_BCR(ARC_REG_DCCMBASE_BCR, dccm_base);
- cpu->dccm.base_addr = dccm_base.addr << 8;
- }
- }
-
READ_BCR(ARC_REG_XY_MEM_BCR, cpu->extn_xymem);
+ /* Read CCM BCRs for boot reporting even if not enabled in Kconfig */
+ read_decode_ccm_bcr(cpu);
+
read_decode_mmu_bcr();
read_decode_cache_bcr();
@@ -208,9 +235,9 @@
(unsigned int)(arc_get_core_freq() / 10000) % 100);
n += scnprintf(buf + n, len - n, "Timers\t\t: %s%s%s%s\nISA Extn\t: ",
- IS_AVAIL1(cpu->timers.t0, "Timer0 "),
- IS_AVAIL1(cpu->timers.t1, "Timer1 "),
- IS_AVAIL2(cpu->timers.rtc, "64-bit RTC ",
+ IS_AVAIL1(cpu->extn.timer0, "Timer0 "),
+ IS_AVAIL1(cpu->extn.timer1, "Timer1 "),
+ IS_AVAIL2(cpu->extn.rtc, "Local-64-bit-Ctr ",
CONFIG_ARC_HAS_RTC));
n += i = scnprintf(buf + n, len - n, "%s%s%s%s%s",
@@ -232,8 +259,6 @@
n += scnprintf(buf + n, len - n, "mpy[opt %d] ", opt);
}
- n += scnprintf(buf + n, len - n, "%s",
- IS_USED_CFG(CONFIG_ARC_HAS_HW_MPY));
}
n += scnprintf(buf + n, len - n, "%s%s%s%s%s%s%s%s\n",
@@ -293,13 +318,13 @@
struct cpuinfo_arc *cpu = &cpuinfo_arc700[smp_processor_id()];
int fpu_enabled;
- if (!cpu->timers.t0)
+ if (!cpu->extn.timer0)
panic("Timer0 is not present!\n");
- if (!cpu->timers.t1)
+ if (!cpu->extn.timer1)
panic("Timer1 is not present!\n");
- if (IS_ENABLED(CONFIG_ARC_HAS_RTC) && !cpu->timers.rtc)
+ if (IS_ENABLED(CONFIG_ARC_HAS_RTC) && !cpu->extn.rtc)
panic("RTC is not present\n");
#ifdef CONFIG_ARC_HAS_DCCM
@@ -334,6 +359,7 @@
panic("FPU non-existent, disable CONFIG_ARC_FPU_SAVE_RESTORE\n");
if (is_isa_arcv2() && IS_ENABLED(CONFIG_SMP) && cpu->isa.atomic &&
+ IS_ENABLED(CONFIG_ARC_HAS_LLSC) &&
!IS_ENABLED(CONFIG_ARC_STAR_9000923308))
panic("llock/scond livelock workaround missing\n");
}
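[Editor's sketch] read_decode_ccm_bcr() turns the encoded size fields into byte counts: ARCv2 encodes a base of 256 bytes shifted by sz00, with sz01 scaling further once sz00 saturates at 0xF. A standalone sketch of that decode — the constants follow the kernel code above, the sample inputs are made up:

    #include <stdint.h>
    #include <stdio.h>

    static unsigned long iccm_bytes(unsigned int sz00, unsigned int sz01)
    {
            unsigned long sz = 256UL << sz00;       /* 512B ... 16M range */

            /* sz00 == 0xF means "extended": scale further by sz01 */
            if (sz00 == 0xF && sz01 > 0)
                    sz <<= sz01;
            return sz;
    }

    int main(void)
    {
            printf("sz00=3          -> %lu bytes\n", iccm_bytes(3, 0));
            printf("sz00=15 sz01=2  -> %lu bytes\n", iccm_bytes(15, 2));
            return 0;
    }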
diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c
index ef6e9e1..424e937 100644
--- a/arch/arc/kernel/smp.c
+++ b/arch/arc/kernel/smp.c
@@ -336,11 +336,8 @@
int rc;
rc = __do_IPI(msg);
-#ifdef CONFIG_ARC_IPI_DBG
- /* IPI received but no valid @msg */
if (rc)
pr_info("IPI with bogus msg %ld in %ld\n", msg, copy);
-#endif
pending &= ~(1U << msg);
} while (pending);
diff --git a/arch/arc/kernel/time.c b/arch/arc/kernel/time.c
index dfad287..156d983 100644
--- a/arch/arc/kernel/time.c
+++ b/arch/arc/kernel/time.c
@@ -62,7 +62,7 @@
/********** Clock Source Device *********/
-#ifdef CONFIG_ARC_HAS_GRTC
+#ifdef CONFIG_ARC_HAS_GFRC
static int arc_counter_setup(void)
{
@@ -83,10 +83,10 @@
local_irq_save(flags);
- __mcip_cmd(CMD_GRTC_READ_LO, 0);
+ __mcip_cmd(CMD_GFRC_READ_LO, 0);
stamp.l = read_aux_reg(ARC_REG_MCIP_READBACK);
- __mcip_cmd(CMD_GRTC_READ_HI, 0);
+ __mcip_cmd(CMD_GFRC_READ_HI, 0);
stamp.h = read_aux_reg(ARC_REG_MCIP_READBACK);
local_irq_restore(flags);
@@ -95,7 +95,7 @@
}
static struct clocksource arc_counter = {
- .name = "ARConnect GRTC",
+ .name = "ARConnect GFRC",
.rating = 400,
.read = arc_counter_read,
.mask = CLOCKSOURCE_MASK(64),
diff --git a/arch/arm/boot/dts/am335x-bone-common.dtsi b/arch/arm/boot/dts/am335x-bone-common.dtsi
index f3db13d..0cc150b 100644
--- a/arch/arm/boot/dts/am335x-bone-common.dtsi
+++ b/arch/arm/boot/dts/am335x-bone-common.dtsi
@@ -285,8 +285,10 @@
};
};
+
+/include/ "tps65217.dtsi"
+
&tps {
- compatible = "ti,tps65217";
/*
* Configure pmic to enter OFF-state instead of SLEEP-state ("RTC-only
* mode") at poweroff. Most BeagleBone versions do not support RTC-only
@@ -307,17 +309,12 @@
ti,pmic-shutdown-controller;
regulators {
- #address-cells = <1>;
- #size-cells = <0>;
-
dcdc1_reg: regulator@0 {
- reg = <0>;
regulator-name = "vdds_dpr";
regulator-always-on;
};
dcdc2_reg: regulator@1 {
- reg = <1>;
/* VDD_MPU voltage limits 0.95V - 1.26V with +/-4% tolerance */
regulator-name = "vdd_mpu";
regulator-min-microvolt = <925000>;
@@ -327,7 +324,6 @@
};
dcdc3_reg: regulator@2 {
- reg = <2>;
/* VDD_CORE voltage limits 0.95V - 1.1V with +/-4% tolerance */
regulator-name = "vdd_core";
regulator-min-microvolt = <925000>;
@@ -337,25 +333,21 @@
};
ldo1_reg: regulator@3 {
- reg = <3>;
regulator-name = "vio,vrtc,vdds";
regulator-always-on;
};
ldo2_reg: regulator@4 {
- reg = <4>;
regulator-name = "vdd_3v3aux";
regulator-always-on;
};
ldo3_reg: regulator@5 {
- reg = <5>;
regulator-name = "vdd_1v8";
regulator-always-on;
};
ldo4_reg: regulator@6 {
- reg = <6>;
regulator-name = "vdd_3v3a";
regulator-always-on;
};
diff --git a/arch/arm/boot/dts/am335x-chilisom.dtsi b/arch/arm/boot/dts/am335x-chilisom.dtsi
index fda457b..857d989 100644
--- a/arch/arm/boot/dts/am335x-chilisom.dtsi
+++ b/arch/arm/boot/dts/am335x-chilisom.dtsi
@@ -128,21 +128,16 @@
};
+/include/ "tps65217.dtsi"
+
&tps {
- compatible = "ti,tps65217";
-
regulators {
- #address-cells = <1>;
- #size-cells = <0>;
-
dcdc1_reg: regulator@0 {
- reg = <0>;
regulator-name = "vdds_dpr";
regulator-always-on;
};
dcdc2_reg: regulator@1 {
- reg = <1>;
/* VDD_MPU voltage limits 0.95V - 1.26V with +/-4% tolerance */
regulator-name = "vdd_mpu";
regulator-min-microvolt = <925000>;
@@ -152,7 +147,6 @@
};
dcdc3_reg: regulator@2 {
- reg = <2>;
/* VDD_CORE voltage limits 0.95V - 1.1V with +/-4% tolerance */
regulator-name = "vdd_core";
regulator-min-microvolt = <925000>;
@@ -162,28 +156,24 @@
};
ldo1_reg: regulator@3 {
- reg = <3>;
regulator-name = "vio,vrtc,vdds";
regulator-boot-on;
regulator-always-on;
};
ldo2_reg: regulator@4 {
- reg = <4>;
regulator-name = "vdd_3v3aux";
regulator-boot-on;
regulator-always-on;
};
ldo3_reg: regulator@5 {
- reg = <5>;
regulator-name = "vdd_1v8";
regulator-boot-on;
regulator-always-on;
};
ldo4_reg: regulator@6 {
- reg = <6>;
regulator-name = "vdd_3v3d";
regulator-boot-on;
regulator-always-on;
diff --git a/arch/arm/boot/dts/am335x-nano.dts b/arch/arm/boot/dts/am335x-nano.dts
index 77559a1..f313999 100644
--- a/arch/arm/boot/dts/am335x-nano.dts
+++ b/arch/arm/boot/dts/am335x-nano.dts
@@ -375,15 +375,11 @@
wp-gpios = <&gpio3 18 0>;
};
+#include "tps65217.dtsi"
+
&tps {
- compatible = "ti,tps65217";
-
regulators {
- #address-cells = <1>;
- #size-cells = <0>;
-
dcdc1_reg: regulator@0 {
- reg = <0>;
/* +1.5V voltage with ±4% tolerance */
regulator-min-microvolt = <1450000>;
regulator-max-microvolt = <1550000>;
@@ -392,7 +388,6 @@
};
dcdc2_reg: regulator@1 {
- reg = <1>;
/* VDD_MPU voltage limits 0.95V - 1.1V with ±4% tolerance */
regulator-name = "vdd_mpu";
regulator-min-microvolt = <915000>;
@@ -402,7 +397,6 @@
};
dcdc3_reg: regulator@2 {
- reg = <2>;
/* VDD_CORE voltage limits 0.95V - 1.1V with ±4% tolerance */
regulator-name = "vdd_core";
regulator-min-microvolt = <915000>;
@@ -412,7 +406,6 @@
};
ldo1_reg: regulator@3 {
- reg = <3>;
/* +1.8V voltage with ±4% tolerance */
regulator-min-microvolt = <1750000>;
regulator-max-microvolt = <1870000>;
@@ -421,7 +414,6 @@
};
ldo2_reg: regulator@4 {
- reg = <4>;
/* +3.3V voltage with ±4% tolerance */
regulator-min-microvolt = <3175000>;
regulator-max-microvolt = <3430000>;
@@ -430,7 +422,6 @@
};
ldo3_reg: regulator@5 {
- reg = <5>;
/* +1.8V voltage with ±4% tolerance */
regulator-min-microvolt = <1750000>;
regulator-max-microvolt = <1870000>;
@@ -439,7 +430,6 @@
};
ldo4_reg: regulator@6 {
- reg = <6>;
/* +3.3V voltage with ±4% tolerance */
regulator-min-microvolt = <3175000>;
regulator-max-microvolt = <3430000>;
diff --git a/arch/arm/boot/dts/am335x-pepper.dts b/arch/arm/boot/dts/am335x-pepper.dts
index 471a3a7..8867aaa 100644
--- a/arch/arm/boot/dts/am335x-pepper.dts
+++ b/arch/arm/boot/dts/am335x-pepper.dts
@@ -420,9 +420,9 @@
vin-supply = <&vbat>;
};
-&tps {
- compatible = "ti,tps65217";
+/include/ "tps65217.dtsi"
+&tps {
backlight {
isel = <1>; /* ISET1 */
fdim = <200>; /* TPS65217_BL_FDIM_200HZ */
@@ -430,17 +430,12 @@
};
regulators {
- #address-cells = <1>;
- #size-cells = <0>;
-
dcdc1_reg: regulator@0 {
- reg = <0>;
/* VDD_1V8 system supply */
regulator-always-on;
};
dcdc2_reg: regulator@1 {
- reg = <1>;
/* VDD_CORE voltage limits 0.95V - 1.26V with +/-4% tolerance */
regulator-name = "vdd_core";
regulator-min-microvolt = <925000>;
@@ -450,7 +445,6 @@
};
dcdc3_reg: regulator@2 {
- reg = <2>;
/* VDD_MPU voltage limits 0.95V - 1.1V with +/-4% tolerance */
regulator-name = "vdd_mpu";
regulator-min-microvolt = <925000>;
@@ -460,21 +454,18 @@
};
ldo1_reg: regulator@3 {
- reg = <3>;
/* VRTC 1.8V always-on supply */
regulator-name = "vrtc,vdds";
regulator-always-on;
};
ldo2_reg: regulator@4 {
- reg = <4>;
/* 3.3V rail */
regulator-name = "vdd_3v3aux";
regulator-always-on;
};
ldo3_reg: regulator@5 {
- reg = <5>;
/* VDD_3V3A 3.3V rail */
regulator-name = "vdd_3v3a";
regulator-min-microvolt = <3300000>;
@@ -482,7 +473,6 @@
};
ldo4_reg: regulator@6 {
- reg = <6>;
/* VDD_3V3B 3.3V rail */
regulator-name = "vdd_3v3b";
regulator-always-on;
diff --git a/arch/arm/boot/dts/am335x-shc.dts b/arch/arm/boot/dts/am335x-shc.dts
index 1b5b044..865de85 100644
--- a/arch/arm/boot/dts/am335x-shc.dts
+++ b/arch/arm/boot/dts/am335x-shc.dts
@@ -46,7 +46,7 @@
gpios = <&gpio1 29 GPIO_ACTIVE_HIGH>;
linux,code = <KEY_BACK>;
debounce-interval = <1000>;
- gpio-key,wakeup;
+ wakeup-source;
};
front_button {
@@ -54,7 +54,7 @@
gpios = <&gpio1 25 GPIO_ACTIVE_HIGH>;
linux,code = <KEY_FRONT>;
debounce-interval = <1000>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/am335x-sl50.dts b/arch/arm/boot/dts/am335x-sl50.dts
index d38edfa..3303c28 100644
--- a/arch/arm/boot/dts/am335x-sl50.dts
+++ b/arch/arm/boot/dts/am335x-sl50.dts
@@ -375,19 +375,16 @@
pinctrl-0 = <&uart4_pins>;
};
+#include "tps65217.dtsi"
+
&tps {
- compatible = "ti,tps65217";
ti,pmic-shutdown-controller;
interrupt-parent = <&intc>;
interrupts = <7>; /* NNMI */
regulators {
- #address-cells = <1>;
- #size-cells = <0>;
-
dcdc1_reg: regulator@0 {
- reg = <0>;
/* VDDS_DDR */
regulator-min-microvolt = <1500000>;
regulator-max-microvolt = <1500000>;
@@ -395,7 +392,6 @@
};
dcdc2_reg: regulator@1 {
- reg = <1>;
/* VDD_MPU voltage limits 0.95V - 1.26V with +/-4% tolerance */
regulator-name = "vdd_mpu";
regulator-min-microvolt = <925000>;
@@ -405,7 +401,6 @@
};
dcdc3_reg: regulator@2 {
- reg = <2>;
/* VDD_CORE voltage limits 0.95V - 1.1V with +/-4% tolerance */
regulator-name = "vdd_core";
regulator-min-microvolt = <925000>;
@@ -415,7 +410,6 @@
};
ldo1_reg: regulator@3 {
- reg = <3>;
/* VRTC / VIO / VDDS*/
regulator-always-on;
regulator-min-microvolt = <1800000>;
@@ -423,7 +417,6 @@
};
ldo2_reg: regulator@4 {
- reg = <4>;
/* VDD_3V3AUX */
regulator-always-on;
regulator-min-microvolt = <3300000>;
@@ -431,7 +424,6 @@
};
ldo3_reg: regulator@5 {
- reg = <5>;
/* VDD_1V8 */
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
@@ -439,7 +431,6 @@
};
ldo4_reg: regulator@6 {
- reg = <6>;
/* VDD_3V3A */
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
diff --git a/arch/arm/boot/dts/am57xx-beagle-x15.dts b/arch/arm/boot/dts/am57xx-beagle-x15.dts
index 36c0fa6..a0986c6 100644
--- a/arch/arm/boot/dts/am57xx-beagle-x15.dts
+++ b/arch/arm/boot/dts/am57xx-beagle-x15.dts
@@ -173,6 +173,8 @@
sound0_master: simple-audio-card,codec {
sound-dai = <&tlv320aic3104>;
+ assigned-clocks = <&clkoutmux2_clk_mux>;
+ assigned-clock-parents = <&sys_clk2_dclk_div>;
clocks = <&clkout2_clk>;
};
};
@@ -796,6 +798,8 @@
pinctrl-names = "default", "sleep";
pinctrl-0 = <&mcasp3_pins_default>;
pinctrl-1 = <&mcasp3_pins_sleep>;
+ assigned-clocks = <&mcasp3_ahclkx_mux>;
+ assigned-clock-parents = <&sys_clkin2>;
status = "okay";
op-mode = <0>; /* MCASP_IIS_MODE */
diff --git a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
index 8d93882..1c06cb7 100644
--- a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+++ b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
@@ -545,7 +545,7 @@
ti,debounce-tol = /bits/ 16 <10>;
ti,debounce-rep = /bits/ 16 <1>;
- linux,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
index 4f6ae92..f74d3db 100644
--- a/arch/arm/boot/dts/imx6qdl.dtsi
+++ b/arch/arm/boot/dts/imx6qdl.dtsi
@@ -896,7 +896,6 @@
#size-cells = <1>;
reg = <0x2100000 0x10000>;
ranges = <0 0x2100000 0x10000>;
- interrupt-parent = <&intc>;
clocks = <&clks IMX6QDL_CLK_CAAM_MEM>,
<&clks IMX6QDL_CLK_CAAM_ACLK>,
<&clks IMX6QDL_CLK_CAAM_IPG>,
diff --git a/arch/arm/boot/dts/kirkwood-ds112.dts b/arch/arm/boot/dts/kirkwood-ds112.dts
index bf4143c..b84af3d 100644
--- a/arch/arm/boot/dts/kirkwood-ds112.dts
+++ b/arch/arm/boot/dts/kirkwood-ds112.dts
@@ -14,7 +14,7 @@
#include "kirkwood-synology.dtsi"
/ {
- model = "Synology DS111";
+ model = "Synology DS112";
compatible = "synology,ds111", "marvell,kirkwood";
memory {
diff --git a/arch/arm/boot/dts/orion5x-linkstation-lswtgl.dts b/arch/arm/boot/dts/orion5x-linkstation-lswtgl.dts
index 4207882..aae8a7a 100644
--- a/arch/arm/boot/dts/orion5x-linkstation-lswtgl.dts
+++ b/arch/arm/boot/dts/orion5x-linkstation-lswtgl.dts
@@ -228,6 +228,37 @@
};
};
+&devbus_bootcs {
+ status = "okay";
+ devbus,keep-config;
+
+ flash@0 {
+ compatible = "jedec-flash";
+ reg = <0 0x40000>;
+ bank-width = <1>;
+
+ partitions {
+ compatible = "fixed-partitions";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ header@0 {
+ reg = <0 0x30000>;
+ read-only;
+ };
+
+ uboot@30000 {
+ reg = <0x30000 0xF000>;
+ read-only;
+ };
+
+ uboot_env@3F000 {
+ reg = <0x3F000 0x1000>;
+ };
+ };
+ };
+};
+
&mdio {
status = "okay";
diff --git a/arch/arm/boot/dts/sama5d2-pinfunc.h b/arch/arm/boot/dts/sama5d2-pinfunc.h
index 1afe246..b0c912fe 100644
--- a/arch/arm/boot/dts/sama5d2-pinfunc.h
+++ b/arch/arm/boot/dts/sama5d2-pinfunc.h
@@ -90,7 +90,7 @@
#define PIN_PA14__I2SC1_MCK PINMUX_PIN(PIN_PA14, 4, 2)
#define PIN_PA14__FLEXCOM3_IO2 PINMUX_PIN(PIN_PA14, 5, 1)
#define PIN_PA14__D9 PINMUX_PIN(PIN_PA14, 6, 2)
-#define PIN_PA15 14
+#define PIN_PA15 15
#define PIN_PA15__GPIO PINMUX_PIN(PIN_PA15, 0, 0)
#define PIN_PA15__SPI0_MOSI PINMUX_PIN(PIN_PA15, 1, 1)
#define PIN_PA15__TF1 PINMUX_PIN(PIN_PA15, 2, 1)
diff --git a/arch/arm/boot/dts/tps65217.dtsi b/arch/arm/boot/dts/tps65217.dtsi
new file mode 100644
index 0000000..a632724
--- /dev/null
+++ b/arch/arm/boot/dts/tps65217.dtsi
@@ -0,0 +1,56 @@
+/*
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/*
+ * Integrated Power Management Chip
+ * http://www.ti.com/lit/ds/symlink/tps65217.pdf
+ */
+
+&tps {
+ compatible = "ti,tps65217";
+
+ regulators {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ dcdc1_reg: regulator@0 {
+ reg = <0>;
+ regulator-compatible = "dcdc1";
+ };
+
+ dcdc2_reg: regulator@1 {
+ reg = <1>;
+ regulator-compatible = "dcdc2";
+ };
+
+ dcdc3_reg: regulator@2 {
+ reg = <2>;
+ regulator-compatible = "dcdc3";
+ };
+
+ ldo1_reg: regulator@3 {
+ reg = <3>;
+ regulator-compatible = "ldo1";
+ };
+
+ ldo2_reg: regulator@4 {
+ reg = <4>;
+ regulator-compatible = "ldo2";
+ };
+
+ ldo3_reg: regulator@5 {
+ reg = <5>;
+ regulator-compatible = "ldo3";
+ };
+
+ ldo4_reg: regulator@6 {
+ reg = <6>;
+ regulator-compatible = "ldo4";
+ };
+ };
+};
diff --git a/arch/arm/common/icst.c b/arch/arm/common/icst.c
index 2dc6da70..d7ed252 100644
--- a/arch/arm/common/icst.c
+++ b/arch/arm/common/icst.c
@@ -16,7 +16,7 @@
*/
#include <linux/module.h>
#include <linux/kernel.h>
-
+#include <asm/div64.h>
#include <asm/hardware/icst.h>
/*
@@ -29,7 +29,11 @@
unsigned long icst_hz(const struct icst_params *p, struct icst_vco vco)
{
- return p->ref * 2 * (vco.v + 8) / ((vco.r + 2) * p->s2div[vco.s]);
+ u64 dividend = p->ref * 2 * (u64)(vco.v + 8);
+ u32 divisor = (vco.r + 2) * p->s2div[vco.s];
+
+ do_div(dividend, divisor);
+ return (unsigned long)dividend;
}
EXPORT_SYMBOL(icst_hz);
@@ -58,6 +62,7 @@
if (f > p->vco_min && f <= p->vco_max)
break;
+ i++;
} while (i < 8);
if (i >= 8)
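[Editor's sketch] The icst_hz() fix widens the dividend to 64 bits because ref * 2 * (v + 8) can overflow 32 bits, and uses do_div() since many 32-bit ARM configurations have no libgcc 64-by-64 divide. do_div() divides a u64 by a u32 in place and hands back the remainder; a hosted-C sketch of the same calculation, with invented parameter values:

    #include <stdint.h>
    #include <stdio.h>

    /* hosted stand-in for the kernel's do_div(): divides in place,
     * returns the remainder */
    static uint32_t my_do_div(uint64_t *dividend, uint32_t divisor)
    {
            uint32_t rem = *dividend % divisor;

            *dividend /= divisor;
            return rem;
    }

    static unsigned long icst_hz_like(uint32_t ref, uint32_t v, uint32_t r,
                                      uint32_t s2div)
    {
            /* 24 MHz * 2 * 90 already exceeds 32 bits */
            uint64_t dividend = (uint64_t)ref * 2 * (v + 8);
            uint32_t divisor = (r + 2) * s2div;

            my_do_div(&dividend, divisor);
            return (unsigned long)dividend;
    }

    int main(void)
    {
            /* 24 MHz reference, invented VCO settings -> 720 MHz */
            printf("%lu Hz\n", icst_hz_like(24000000, 82, 1, 2));
            return 0;
    }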
diff --git a/arch/arm/configs/omap2plus_defconfig b/arch/arm/configs/omap2plus_defconfig
index a715174..d18d6b4 100644
--- a/arch/arm/configs/omap2plus_defconfig
+++ b/arch/arm/configs/omap2plus_defconfig
@@ -292,24 +292,23 @@
CONFIG_FIRMWARE_EDID=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y
-CONFIG_OMAP2_DSS=m
-CONFIG_OMAP5_DSS_HDMI=y
-CONFIG_OMAP2_DSS_SDI=y
-CONFIG_OMAP2_DSS_DSI=y
+CONFIG_FB_OMAP5_DSS_HDMI=y
+CONFIG_FB_OMAP2_DSS_SDI=y
+CONFIG_FB_OMAP2_DSS_DSI=y
CONFIG_FB_OMAP2=m
-CONFIG_DISPLAY_ENCODER_TFP410=m
-CONFIG_DISPLAY_ENCODER_TPD12S015=m
-CONFIG_DISPLAY_CONNECTOR_DVI=m
-CONFIG_DISPLAY_CONNECTOR_HDMI=m
-CONFIG_DISPLAY_CONNECTOR_ANALOG_TV=m
-CONFIG_DISPLAY_PANEL_DPI=m
-CONFIG_DISPLAY_PANEL_DSI_CM=m
-CONFIG_DISPLAY_PANEL_SONY_ACX565AKM=m
-CONFIG_DISPLAY_PANEL_LGPHILIPS_LB035Q02=m
-CONFIG_DISPLAY_PANEL_SHARP_LS037V7DW01=m
-CONFIG_DISPLAY_PANEL_TPO_TD028TTEC1=m
-CONFIG_DISPLAY_PANEL_TPO_TD043MTEA1=m
-CONFIG_DISPLAY_PANEL_NEC_NL8048HL11=m
+CONFIG_FB_OMAP2_ENCODER_TFP410=m
+CONFIG_FB_OMAP2_ENCODER_TPD12S015=m
+CONFIG_FB_OMAP2_CONNECTOR_DVI=m
+CONFIG_FB_OMAP2_CONNECTOR_HDMI=m
+CONFIG_FB_OMAP2_CONNECTOR_ANALOG_TV=m
+CONFIG_FB_OMAP2_PANEL_DPI=m
+CONFIG_FB_OMAP2_PANEL_DSI_CM=m
+CONFIG_FB_OMAP2_PANEL_SONY_ACX565AKM=m
+CONFIG_FB_OMAP2_PANEL_LGPHILIPS_LB035Q02=m
+CONFIG_FB_OMAP2_PANEL_SHARP_LS037V7DW01=m
+CONFIG_FB_OMAP2_PANEL_TPO_TD028TTEC1=m
+CONFIG_FB_OMAP2_PANEL_TPO_TD043MTEA1=m
+CONFIG_FB_OMAP2_PANEL_NEC_NL8048HL11=m
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_LCD_PLATFORM=y
diff --git a/arch/arm/crypto/aes-ce-glue.c b/arch/arm/crypto/aes-ce-glue.c
index b445a5d..89a3a3e 100644
--- a/arch/arm/crypto/aes-ce-glue.c
+++ b/arch/arm/crypto/aes-ce-glue.c
@@ -364,7 +364,7 @@
.cra_blkcipher = {
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
- .ivsize = AES_BLOCK_SIZE,
+ .ivsize = 0,
.setkey = ce_aes_setkey,
.encrypt = ecb_encrypt,
.decrypt = ecb_decrypt,
@@ -441,7 +441,7 @@
.cra_ablkcipher = {
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
- .ivsize = AES_BLOCK_SIZE,
+ .ivsize = 0,
.setkey = ablk_set_key,
.encrypt = ablk_encrypt,
.decrypt = ablk_decrypt,
diff --git a/arch/arm/include/asm/arch_gicv3.h b/arch/arm/include/asm/arch_gicv3.h
index 7da5503..e08d151 100644
--- a/arch/arm/include/asm/arch_gicv3.h
+++ b/arch/arm/include/asm/arch_gicv3.h
@@ -117,6 +117,7 @@
u32 irqstat;
asm volatile("mrc " __stringify(ICC_IAR1) : "=r" (irqstat));
+ dsb(sy);
return irqstat;
}
diff --git a/arch/arm/include/asm/xen/page-coherent.h b/arch/arm/include/asm/xen/page-coherent.h
index 0375c8c..9408a99 100644
--- a/arch/arm/include/asm/xen/page-coherent.h
+++ b/arch/arm/include/asm/xen/page-coherent.h
@@ -35,14 +35,21 @@
dma_addr_t dev_addr, unsigned long offset, size_t size,
enum dma_data_direction dir, struct dma_attrs *attrs)
{
- bool local = XEN_PFN_DOWN(dev_addr) == page_to_xen_pfn(page);
+ unsigned long page_pfn = page_to_xen_pfn(page);
+ unsigned long dev_pfn = XEN_PFN_DOWN(dev_addr);
+ unsigned long compound_pages =
+ (1<<compound_order(page)) * XEN_PFN_PER_PAGE;
+ bool local = (page_pfn <= dev_pfn) &&
+ (dev_pfn - page_pfn < compound_pages);
+
/*
- * Dom0 is mapped 1:1, while the Linux page can be spanned accross
- * multiple Xen page, it's not possible to have a mix of local and
- * foreign Xen page. So if the first xen_pfn == mfn the page is local
- * otherwise it's a foreign page grant-mapped in dom0. If the page is
- * local we can safely call the native dma_ops function, otherwise we
- * call the xen specific function.
+ * Dom0 is mapped 1:1, while the Linux page can span across
+ * multiple Xen pages, it's not possible for it to contain a
+ * mix of local and foreign Xen pages. So if the first xen_pfn
+ * == mfn the page is local otherwise it's a foreign page
+ * grant-mapped in dom0. If the page is local we can safely
+ * call the native dma_ops function, otherwise we call the xen
+ * specific function.
*/
if (local)
__generic_dma_ops(hwdev)->map_page(hwdev, page, offset, size, dir, attrs);
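[Editor's sketch] The Xen fix replaces a single-pfn equality test with a range check, because a compound Linux page covers several Xen pfns and is local only if the device pfn falls anywhere inside that run. A small sketch of the containment test — the helper is a stand-in for page_to_xen_pfn() and friends:

    #include <stdbool.h>
    #include <stdio.h>

    /* is dev_pfn inside [page_pfn, page_pfn + n_pfns)? */
    static bool page_is_local(unsigned long page_pfn, unsigned long n_pfns,
                              unsigned long dev_pfn)
    {
            return page_pfn <= dev_pfn && dev_pfn - page_pfn < n_pfns;
    }

    int main(void)
    {
            /* order-2 page covering 4 Xen pfns starting at pfn 100 */
            printf("%d\n", page_is_local(100, 4, 103));     /* 1: local */
            printf("%d\n", page_is_local(100, 4, 104));     /* 0: foreign */
            return 0;
    }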
diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 7f33b20..0f6600f 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -206,7 +206,8 @@
run->mmio.is_write = is_write;
run->mmio.phys_addr = fault_ipa;
run->mmio.len = len;
- memcpy(run->mmio.data, data_buf, len);
+ if (is_write)
+ memcpy(run->mmio.data, data_buf, len);
if (!ret) {
/* We handled the access successfully in the kernel. */
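[Editor's sketch] The mmio fix copies the buffer into run->mmio.data only for writes; for reads the buffer is still uninitialized at this point and copying it would leak kernel stack contents to userspace. A condensed model of the corrected flow — the struct layout here is invented for illustration, not the real kvm_run:

    #include <stdbool.h>
    #include <string.h>

    struct mmio_exit {
            unsigned long phys_addr;
            unsigned char data[8];
            unsigned int len;
            bool is_write;
    };

    static void fill_mmio_exit(struct mmio_exit *run, unsigned long ipa,
                               const void *buf, unsigned int len, bool is_write)
    {
            run->phys_addr = ipa;
            run->len = len;
            run->is_write = is_write;

            /* only writes carry a payload out; reads are filled in later */
            if (is_write)
                    memcpy(run->data, buf, len);
    }

    int main(void)
    {
            struct mmio_exit run;
            unsigned char buf[4] = { 1, 2, 3, 4 };

            fill_mmio_exit(&run, 0x9000000, buf, sizeof(buf), true);
            return run.data[0] != 1;
    }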
diff --git a/arch/arm/mach-omap2/board-generic.c b/arch/arm/mach-omap2/board-generic.c
index 8098272..bab814d 100644
--- a/arch/arm/mach-omap2/board-generic.c
+++ b/arch/arm/mach-omap2/board-generic.c
@@ -18,6 +18,7 @@
#include <asm/setup.h>
#include <asm/mach/arch.h>
+#include <asm/system_info.h>
#include "common.h"
@@ -77,12 +78,31 @@
NULL,
};
+/* Set system_rev from atags */
+static void __init rx51_set_system_rev(const struct tag *tags)
+{
+ const struct tag *tag;
+
+ if (tags->hdr.tag != ATAG_CORE)
+ return;
+
+ for_each_tag(tag, tags) {
+ if (tag->hdr.tag == ATAG_REVISION) {
+ system_rev = tag->u.revision.rev;
+ break;
+ }
+ }
+}
+
/* Legacy userspace on Nokia N900 needs ATAGS exported in /proc/atags,
* save them while the data is still not overwritten
*/
static void __init rx51_reserve(void)
{
- save_atags((const struct tag *)(PAGE_OFFSET + 0x100));
+ const struct tag *tags = (const struct tag *)(PAGE_OFFSET + 0x100);
+
+ save_atags(tags);
+ rx51_set_system_rev(tags);
omap_reserve();
}
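[Editor's sketch] rx51_set_system_rev() walks the ATAG list looking for ATAG_REVISION before the data is overwritten. The list is a sequence of (size-in-words, tag) headers, which is easy to model in plain C — the struct layout below is a simplified stand-in for the kernel's struct tag and for_each_tag():

    #include <stdint.h>
    #include <stdio.h>

    #define ATAG_CORE     0x54410001
    #define ATAG_REVISION 0x54410007
    #define ATAG_NONE     0x00000000

    struct tag {
            struct {
                    uint32_t size;  /* in 32-bit words, header included */
                    uint32_t tag;
            } hdr;
            uint32_t data[2];       /* simplified payload */
    };

    static uint32_t find_revision(const struct tag *t)
    {
            if (t->hdr.tag != ATAG_CORE)    /* not a valid ATAG list */
                    return 0;

            for (; t->hdr.tag != ATAG_NONE;
                 t = (const struct tag *)((const uint32_t *)t + t->hdr.size))
                    if (t->hdr.tag == ATAG_REVISION)
                            return t->data[0];
            return 0;
    }

    int main(void)
    {
            struct tag tags[3] = {
                    { { 4, ATAG_CORE },     { 0, 0 } },
                    { { 4, ATAG_REVISION }, { 0x2101, 0 } },
                    { { 0, ATAG_NONE },     { 0, 0 } },
            };

            printf("system_rev = 0x%x\n", find_revision(tags));
            return 0;
    }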
diff --git a/arch/arm/mach-omap2/gpmc-onenand.c b/arch/arm/mach-omap2/gpmc-onenand.c
index 7b76ce0..8633c70 100644
--- a/arch/arm/mach-omap2/gpmc-onenand.c
+++ b/arch/arm/mach-omap2/gpmc-onenand.c
@@ -101,10 +101,8 @@
static void set_onenand_cfg(void __iomem *onenand_base)
{
- u32 reg;
+ u32 reg = ONENAND_SYS_CFG1_RDY | ONENAND_SYS_CFG1_INT;
- reg = readw(onenand_base + ONENAND_REG_SYS_CFG1);
- reg &= ~((0x7 << ONENAND_SYS_CFG1_BRL_SHIFT) | (0x7 << 9));
reg |= (latency << ONENAND_SYS_CFG1_BRL_SHIFT) |
ONENAND_SYS_CFG1_BL_16;
if (onenand_flags & ONENAND_FLAG_SYNCREAD)
@@ -123,6 +121,7 @@
reg |= ONENAND_SYS_CFG1_VHF;
else
reg &= ~ONENAND_SYS_CFG1_VHF;
+
writew(reg, onenand_base + ONENAND_REG_SYS_CFG1);
}
@@ -289,6 +288,7 @@
}
}
+ onenand_async.sync_write = true;
omap2_onenand_calc_async_timings(&t);
ret = gpmc_cs_program_settings(gpmc_onenand_data->cs, &onenand_async);
diff --git a/arch/arm/mach-omap2/omap_device.c b/arch/arm/mach-omap2/omap_device.c
index 0437537..f7ff3b9 100644
--- a/arch/arm/mach-omap2/omap_device.c
+++ b/arch/arm/mach-omap2/omap_device.c
@@ -191,12 +191,22 @@
{
struct platform_device *pdev = to_platform_device(dev);
struct omap_device *od;
+ int err;
switch (event) {
case BUS_NOTIFY_DEL_DEVICE:
if (pdev->archdata.od)
omap_device_delete(pdev->archdata.od);
break;
+ case BUS_NOTIFY_UNBOUND_DRIVER:
+ od = to_omap_device(pdev);
+ if (od && (od->_state == OMAP_DEVICE_STATE_ENABLED)) {
+ dev_info(dev, "enabled after unload, idling\n");
+ err = omap_device_idle(pdev);
+ if (err)
+ dev_err(dev, "failed to idle\n");
+ }
+ break;
case BUS_NOTIFY_ADD_DEVICE:
if (pdev->dev.of_node)
omap_device_build_from_dt(pdev);
@@ -602,8 +612,10 @@
int ret;
ret = omap_device_enable(pdev);
- if (ret)
+ if (ret) {
+ dev_err(dev, "use pm_runtime_put_sync_suspend() in driver?\n");
return ret;
+ }
return pm_generic_runtime_resume(dev);
}
diff --git a/arch/arm/mach-shmobile/common.h b/arch/arm/mach-shmobile/common.h
index 9cb1121..b3a4ed528 100644
--- a/arch/arm/mach-shmobile/common.h
+++ b/arch/arm/mach-shmobile/common.h
@@ -4,7 +4,6 @@
extern void shmobile_init_delay(void);
extern void shmobile_boot_vector(void);
extern unsigned long shmobile_boot_fn;
-extern unsigned long shmobile_boot_arg;
extern unsigned long shmobile_boot_size;
extern void shmobile_smp_boot(void);
extern void shmobile_smp_sleep(void);
diff --git a/arch/arm/mach-shmobile/headsmp-scu.S b/arch/arm/mach-shmobile/headsmp-scu.S
index fa5248c..5e503d9 100644
--- a/arch/arm/mach-shmobile/headsmp-scu.S
+++ b/arch/arm/mach-shmobile/headsmp-scu.S
@@ -38,9 +38,3 @@
b secondary_startup
ENDPROC(shmobile_boot_scu)
-
- .text
- .align 2
- .globl shmobile_scu_base
-shmobile_scu_base:
- .space 4
diff --git a/arch/arm/mach-shmobile/headsmp.S b/arch/arm/mach-shmobile/headsmp.S
index 330c1fc..32e0bf6 100644
--- a/arch/arm/mach-shmobile/headsmp.S
+++ b/arch/arm/mach-shmobile/headsmp.S
@@ -24,7 +24,6 @@
.arm
.align 12
ENTRY(shmobile_boot_vector)
- ldr r0, 2f
ldr r1, 1f
bx r1
@@ -34,9 +33,6 @@
.globl shmobile_boot_fn
shmobile_boot_fn:
1: .space 4
- .globl shmobile_boot_arg
-shmobile_boot_arg:
-2: .space 4
.globl shmobile_boot_size
shmobile_boot_size:
.long . - shmobile_boot_vector
@@ -46,13 +42,15 @@
*/
ENTRY(shmobile_smp_boot)
- @ r0 = MPIDR_HWID_BITMASK
mrc p15, 0, r1, c0, c0, 5 @ r1 = MPIDR
- and r0, r1, r0 @ r0 = cpu_logical_map() value
+ and r0, r1, #0xffffff @ MPIDR_HWID_BITMASK
+ @ r0 = cpu_logical_map() value
mov r1, #0 @ r1 = CPU index
- adr r5, 1f @ array of per-cpu mpidr values
- adr r6, 2f @ array of per-cpu functions
- adr r7, 3f @ array of per-cpu arguments
+ adr r2, 1f
+ ldmia r2, {r5, r6, r7}
+ add r5, r5, r2 @ array of per-cpu mpidr values
+ add r6, r6, r2 @ array of per-cpu functions
+ add r7, r7, r2 @ array of per-cpu arguments
shmobile_smp_boot_find_mpidr:
ldr r8, [r5, r1, lsl #2]
@@ -80,12 +78,18 @@
b shmobile_smp_boot
ENDPROC(shmobile_smp_sleep)
+ .align 2
+1: .long shmobile_smp_mpidr - .
+ .long shmobile_smp_fn - 1b
+ .long shmobile_smp_arg - 1b
+
+ .bss
.globl shmobile_smp_mpidr
shmobile_smp_mpidr:
-1: .space NR_CPUS * 4
+ .space NR_CPUS * 4
.globl shmobile_smp_fn
shmobile_smp_fn:
-2: .space NR_CPUS * 4
+ .space NR_CPUS * 4
.globl shmobile_smp_arg
shmobile_smp_arg:
-3: .space NR_CPUS * 4
+ .space NR_CPUS * 4
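[Editor's sketch] The headsmp.S rework stores PC-relative deltas next to the code (".long shmobile_smp_mpidr - .") and reconstructs absolute addresses at run time as anchor + delta, so the per-cpu arrays can move to .bss without needing relocations. The same trick modelled in C — only a model of the assembly, with the anchor living on the stack instead of in .text:

    #include <stdint.h>
    #include <stdio.h>

    static long mpidr_table[4];
    static long fn_table[4];

    /* deltas from 'anchor'; in the assembly these are link-time constants */
    struct pcrel {
            intptr_t mpidr_off;
            intptr_t fn_off;
    };

    int main(void)
    {
            struct pcrel anchor = {
                    .mpidr_off = (intptr_t)mpidr_table - (intptr_t)&anchor,
                    .fn_off    = (intptr_t)fn_table    - (intptr_t)&anchor,
            };

            /* runtime reconstruction: base address + stored delta */
            long *mpidr = (long *)((intptr_t)&anchor + anchor.mpidr_off);
            long *fn    = (long *)((intptr_t)&anchor + anchor.fn_off);

            printf("%d %d\n", mpidr == mpidr_table, fn == fn_table);
            return 0;
    }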
diff --git a/arch/arm/mach-shmobile/platsmp-apmu.c b/arch/arm/mach-shmobile/platsmp-apmu.c
index 911884f..aba75c8 100644
--- a/arch/arm/mach-shmobile/platsmp-apmu.c
+++ b/arch/arm/mach-shmobile/platsmp-apmu.c
@@ -123,7 +123,6 @@
{
/* install boot code shared by all CPUs */
shmobile_boot_fn = virt_to_phys(shmobile_smp_boot);
- shmobile_boot_arg = MPIDR_HWID_BITMASK;
/* perform per-cpu setup */
apmu_parse_cfg(apmu_init_cpu, apmu_config, num);
diff --git a/arch/arm/mach-shmobile/platsmp-scu.c b/arch/arm/mach-shmobile/platsmp-scu.c
index 6466311..081a097 100644
--- a/arch/arm/mach-shmobile/platsmp-scu.c
+++ b/arch/arm/mach-shmobile/platsmp-scu.c
@@ -17,6 +17,9 @@
#include <asm/smp_scu.h>
#include "common.h"
+
+void __iomem *shmobile_scu_base;
+
static int shmobile_smp_scu_notifier_call(struct notifier_block *nfb,
unsigned long action, void *hcpu)
{
@@ -41,7 +44,6 @@
{
/* install boot code shared by all CPUs */
shmobile_boot_fn = virt_to_phys(shmobile_smp_boot);
- shmobile_boot_arg = MPIDR_HWID_BITMASK;
/* enable SCU and cache coherency on booting CPU */
scu_enable(shmobile_scu_base);
diff --git a/arch/arm/mach-shmobile/smp-r8a7779.c b/arch/arm/mach-shmobile/smp-r8a7779.c
index b854fe2..0b024a9 100644
--- a/arch/arm/mach-shmobile/smp-r8a7779.c
+++ b/arch/arm/mach-shmobile/smp-r8a7779.c
@@ -92,8 +92,6 @@
{
/* Map the reset vector (in headsmp-scu.S, headsmp.S) */
__raw_writel(__pa(shmobile_boot_vector), AVECR);
- shmobile_boot_fn = virt_to_phys(shmobile_boot_scu);
- shmobile_boot_arg = (unsigned long)shmobile_scu_base;
/* setup r8a7779 specific SCU bits */
shmobile_scu_base = IOMEM(R8A7779_SCU_BASE);
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 4b4058d..66353ca 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -173,7 +173,7 @@
{
unsigned long rnd;
- rnd = (unsigned long)get_random_int() & ((1 << mmap_rnd_bits) - 1);
+ rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);
return rnd << PAGE_SHIFT;
}
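[Editor's sketch] Both mmap randomization fixes (here and in arm64/mm/mmap.c below) switch to get_random_long() and, just as importantly, widen the mask: with more than 31 randomization bits, (1 << bits) - 1 is evaluated in 32-bit int and overflows, while (1UL << bits) - 1 keeps the full width. A two-line demonstration — the bit count is an arbitrary plausible value:

    #include <stdio.h>

    int main(void)
    {
            unsigned int bits = 33; /* plausible on 64-bit */

            /* 1UL keeps the shift in unsigned long; plain 1 would
             * overflow int for bits > 31 */
            unsigned long mask = (1UL << bits) - 1;

            printf("mask = %#lx\n", mask);  /* 0x1ffffffff */
            return 0;
    }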
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 307237c..b5e3f6d 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -88,7 +88,7 @@
Image.%: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
-zinstall install: vmlinux
+zinstall install:
$(Q)$(MAKE) $(build)=$(boot) $@
%.dtb: scripts
diff --git a/arch/arm64/boot/Makefile b/arch/arm64/boot/Makefile
index abcbba2..305c552 100644
--- a/arch/arm64/boot/Makefile
+++ b/arch/arm64/boot/Makefile
@@ -34,10 +34,10 @@
$(obj)/Image.lzo: $(obj)/Image FORCE
$(call if_changed,lzo)
-install: $(obj)/Image
+install:
$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
$(obj)/Image System.map "$(INSTALL_PATH)"
-zinstall: $(obj)/Image.gz
+zinstall:
$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
$(obj)/Image.gz System.map "$(INSTALL_PATH)"
diff --git a/arch/arm64/boot/install.sh b/arch/arm64/boot/install.sh
index 12ed78a..d91e1f0 100644
--- a/arch/arm64/boot/install.sh
+++ b/arch/arm64/boot/install.sh
@@ -20,6 +20,20 @@
# $4 - default install path (blank if root directory)
#
+verify () {
+ if [ ! -f "$1" ]; then
+ echo "" 1>&2
+ echo " *** Missing file: $1" 1>&2
+ echo ' *** You need to run "make" before "make install".' 1>&2
+ echo "" 1>&2
+ exit 1
+ fi
+}
+
+# Make sure the files actually exist
+verify "$2"
+verify "$3"
+
# User may have a custom install script
if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 05d9e16..7a3d22a 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -294,7 +294,7 @@
.cra_blkcipher = {
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
- .ivsize = AES_BLOCK_SIZE,
+ .ivsize = 0,
.setkey = aes_setkey,
.encrypt = ecb_encrypt,
.decrypt = ecb_decrypt,
@@ -371,7 +371,7 @@
.cra_ablkcipher = {
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
- .ivsize = AES_BLOCK_SIZE,
+ .ivsize = 0,
.setkey = ablk_set_key,
.encrypt = ablk_encrypt,
.decrypt = ablk_decrypt,
diff --git a/arch/arm64/include/asm/arch_gicv3.h b/arch/arm64/include/asm/arch_gicv3.h
index 2731d3b..8ec88e5 100644
--- a/arch/arm64/include/asm/arch_gicv3.h
+++ b/arch/arm64/include/asm/arch_gicv3.h
@@ -103,6 +103,7 @@
u64 irqstat;
asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));
+ dsb(sy);
return irqstat;
}
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 738a95f..d201d4b 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -107,8 +107,6 @@
#define TCR_EL2_MASK (TCR_EL2_TG0 | TCR_EL2_SH0 | \
TCR_EL2_ORGN0 | TCR_EL2_IRGN0 | TCR_EL2_T0SZ)
-#define TCR_EL2_FLAGS (TCR_EL2_RES1 | TCR_EL2_PS_40B)
-
/* VTCR_EL2 Registers bits */
#define VTCR_EL2_RES1 (1 << 31)
#define VTCR_EL2_PS_MASK (7 << 16)
@@ -182,6 +180,7 @@
#define CPTR_EL2_TCPAC (1 << 31)
#define CPTR_EL2_TTA (1 << 20)
#define CPTR_EL2_TFP (1 << CPTR_EL2_TFP_SHIFT)
+#define CPTR_EL2_DEFAULT 0x000033ff
/* Hyp Debug Configuration Register bits */
#define MDCR_EL2_TDRA (1 << 11)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 3066328..779a587 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -127,10 +127,14 @@
static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
{
- u32 mode = *vcpu_cpsr(vcpu) & PSR_MODE_MASK;
+ u32 mode;
- if (vcpu_mode_is_32bit(vcpu))
+ if (vcpu_mode_is_32bit(vcpu)) {
+ mode = *vcpu_cpsr(vcpu) & COMPAT_PSR_MODE_MASK;
return mode > COMPAT_PSR_MODE_USR;
+ }
+
+ mode = *vcpu_cpsr(vcpu) & PSR_MODE_MASK;
return mode != PSR_MODE_EL0t;
}
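[Editor's sketch] The vcpu_mode_priv() fix picks the mode mask to match the guest's execution state: an AArch32 guest's mode field must be masked with COMPAT_PSR_MODE_MASK before comparing against AArch32 mode values, otherwise extra PSR bits leak into the comparison. A compact model of the dual-mask check — the constants below are believed to match the kernel's encodings but treat them as illustrative:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PSR_MODE_MASK        0xf
    #define PSR_MODE_EL0t        0x0
    #define COMPAT_PSR_MODE_MASK 0x1f
    #define COMPAT_PSR_MODE_USR  0x10

    static bool mode_is_priv(uint64_t cpsr, bool is_32bit)
    {
            if (is_32bit)   /* AArch32: USR is the lowest mode value */
                    return (cpsr & COMPAT_PSR_MODE_MASK) > COMPAT_PSR_MODE_USR;

            /* AArch64: anything but EL0t is privileged */
            return (cpsr & PSR_MODE_MASK) != PSR_MODE_EL0t;
    }

    int main(void)
    {
            printf("%d\n", mode_is_priv(0x10, true));       /* AArch32 USR: 0 */
            printf("%d\n", mode_is_priv(0x05, false));      /* AArch64 EL1h: 1 */
            return 0;
    }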
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 8aee3ae..c536c9e 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -226,11 +226,28 @@
return retval;
}
+static void send_user_sigtrap(int si_code)
+{
+ struct pt_regs *regs = current_pt_regs();
+ siginfo_t info = {
+ .si_signo = SIGTRAP,
+ .si_errno = 0,
+ .si_code = si_code,
+ .si_addr = (void __user *)instruction_pointer(regs),
+ };
+
+ if (WARN_ON(!user_mode(regs)))
+ return;
+
+ if (interrupts_enabled(regs))
+ local_irq_enable();
+
+ force_sig_info(SIGTRAP, &info, current);
+}
+
static int single_step_handler(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
{
- siginfo_t info;
-
/*
* If we are stepping a pending breakpoint, call the hw_breakpoint
* handler first.
@@ -239,11 +256,7 @@
return 0;
if (user_mode(regs)) {
- info.si_signo = SIGTRAP;
- info.si_errno = 0;
- info.si_code = TRAP_HWBKPT;
- info.si_addr = (void __user *)instruction_pointer(regs);
- force_sig_info(SIGTRAP, &info, current);
+ send_user_sigtrap(TRAP_HWBKPT);
/*
* ptrace will disable single step unless explicitly
@@ -307,17 +320,8 @@
static int brk_handler(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
{
- siginfo_t info;
-
if (user_mode(regs)) {
- info = (siginfo_t) {
- .si_signo = SIGTRAP,
- .si_errno = 0,
- .si_code = TRAP_BRKPT,
- .si_addr = (void __user *)instruction_pointer(regs),
- };
-
- force_sig_info(SIGTRAP, &info, current);
+ send_user_sigtrap(TRAP_BRKPT);
} else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) {
pr_warning("Unexpected kernel BRK exception at EL1\n");
return -EFAULT;
@@ -328,7 +332,6 @@
int aarch32_break_handler(struct pt_regs *regs)
{
- siginfo_t info;
u32 arm_instr;
u16 thumb_instr;
bool bp = false;
@@ -359,14 +362,7 @@
if (!bp)
return -EFAULT;
- info = (siginfo_t) {
- .si_signo = SIGTRAP,
- .si_errno = 0,
- .si_code = TRAP_BRKPT,
- .si_addr = pc,
- };
-
- force_sig_info(SIGTRAP, &info, current);
+ send_user_sigtrap(TRAP_BRKPT);
return 0;
}
diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h
index 999633b..352f7ab 100644
--- a/arch/arm64/kernel/image.h
+++ b/arch/arm64/kernel/image.h
@@ -89,6 +89,7 @@
__efistub_memmove = KALLSYMS_HIDE(__pi_memmove);
__efistub_memset = KALLSYMS_HIDE(__pi_memset);
__efistub_strlen = KALLSYMS_HIDE(__pi_strlen);
+__efistub_strnlen = KALLSYMS_HIDE(__pi_strnlen);
__efistub_strcmp = KALLSYMS_HIDE(__pi_strcmp);
__efistub_strncmp = KALLSYMS_HIDE(__pi_strncmp);
__efistub___flush_dcache_area = KALLSYMS_HIDE(__pi___flush_dcache_area);
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 4fad978..d9751a4 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -44,14 +44,13 @@
unsigned long irq_stack_ptr;
/*
- * Use raw_smp_processor_id() to avoid false-positives from
- * CONFIG_DEBUG_PREEMPT. get_wchan() calls unwind_frame() on sleeping
- * task stacks, we can be pre-empted in this case, so
- * {raw_,}smp_processor_id() may give us the wrong value. Sleeping
- * tasks can't ever be on an interrupt stack, so regardless of cpu,
- * the checks will always fail.
+ * Switching between stacks is valid when tracing current and in
+ * non-preemptible context.
*/
- irq_stack_ptr = IRQ_STACK_PTR(raw_smp_processor_id());
+ if (tsk == current && !preemptible())
+ irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
+ else
+ irq_stack_ptr = 0;
low = frame->sp;
/* irq stacks are not THREAD_SIZE aligned */
@@ -64,8 +63,8 @@
return -EINVAL;
frame->sp = fp + 0x10;
- frame->fp = *(unsigned long *)(fp);
- frame->pc = *(unsigned long *)(fp + 8);
+ frame->fp = READ_ONCE_NOCHECK(*(unsigned long *)(fp));
+ frame->pc = READ_ONCE_NOCHECK(*(unsigned long *)(fp + 8));
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
if (tsk && tsk->ret_stack &&
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index cbedd72..c539208 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -146,9 +146,18 @@
static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
{
struct stackframe frame;
- unsigned long irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
+ unsigned long irq_stack_ptr;
int skip;
+ /*
+ * Switching between stacks is valid when tracing current and in
+ * non-preemptible context.
+ */
+ if (tsk == current && !preemptible())
+ irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
+ else
+ irq_stack_ptr = 0;
+
pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
if (!tsk)
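[Editor's sketch] Both unwind paths now compute the IRQ stack pointer only when walking the current task in a non-preemptible context: a sleeping task can never be on an IRQ stack, and smp_processor_id() would be meaningless if the walker could migrate mid-walk. The frame loads also go through READ_ONCE_NOCHECK() so instrumentation does not complain about touching another task's stack. A stripped-down model of the bounded frame step — the types and the read macro are stand-ins:

    #include <stdio.h>

    struct stackframe {
            unsigned long fp, sp, pc;
    };

    /* stand-in for READ_ONCE_NOCHECK(): a single, untracked load */
    #define READ_RAW(x) (*(volatile unsigned long *)&(x))

    static int unwind_frame(struct stackframe *frame,
                            unsigned long low, unsigned long high)
    {
            unsigned long fp = frame->fp;

            /* frame record must sit inside the known stack bounds */
            if (fp < low || fp > high - 0x18 || fp & 0xf)
                    return -1;

            frame->sp = fp + 0x10;
            frame->fp = READ_RAW(*(unsigned long *)fp);
            frame->pc = READ_RAW(*(unsigned long *)(fp + 8));
            return 0;
    }

    int main(void)
    {
            /* fake frame record: next fp = 0 terminates the walk */
            unsigned long stack[4] __attribute__((aligned(16))) =
                    { 0, 0xdeadbeef, 0, 0 };
            struct stackframe f = { .fp = (unsigned long)stack };

            if (!unwind_frame(&f, (unsigned long)stack,
                              (unsigned long)(stack + 4)))
                    printf("pc = %#lx\n", f.pc);
            return 0;
    }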
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 3e568dc..d073b5a 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -64,7 +64,7 @@
mrs x4, tcr_el1
ldr x5, =TCR_EL2_MASK
and x4, x4, x5
- ldr x5, =TCR_EL2_FLAGS
+ mov x5, #TCR_EL2_RES1
orr x4, x4, x5
#ifndef CONFIG_ARM64_VA_BITS_48
@@ -85,14 +85,16 @@
ldr_l x5, idmap_t0sz
bfi x4, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
#endif
+ /*
+ * Read the PARange bits from ID_AA64MMFR0_EL1 and set the PS bits in
+ * TCR_EL2 and VTCR_EL2.
+ */
+ mrs x5, ID_AA64MMFR0_EL1
+ bfi x4, x5, #16, #3
+
msr tcr_el2, x4
ldr x4, =VTCR_EL2_FLAGS
- /*
- * Read the PARange bits from ID_AA64MMFR0_EL1 and set the PS bits in
- * VTCR_EL2.
- */
- mrs x5, ID_AA64MMFR0_EL1
bfi x4, x5, #16, #3
/*
* Read the VMIDBits bits from ID_AA64MMFR1_EL1 and set the VS bit in
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index ca8f5a5..f0e7bdf 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -36,7 +36,11 @@
write_sysreg(val, hcr_el2);
/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
write_sysreg(1 << 15, hstr_el2);
- write_sysreg(CPTR_EL2_TTA | CPTR_EL2_TFP, cptr_el2);
+
+ val = CPTR_EL2_DEFAULT;
+ val |= CPTR_EL2_TTA | CPTR_EL2_TFP;
+ write_sysreg(val, cptr_el2);
+
write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
}
@@ -45,7 +49,7 @@
write_sysreg(HCR_RW, hcr_el2);
write_sysreg(0, hstr_el2);
write_sysreg(read_sysreg(mdcr_el2) & MDCR_EL2_HPMN_MASK, mdcr_el2);
- write_sysreg(0, cptr_el2);
+ write_sysreg(CPTR_EL2_DEFAULT, cptr_el2);
}
static void __hyp_text __activate_vm(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index 9142e08..5dd2a26 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -149,16 +149,6 @@
switch (nr_pri_bits) {
case 7:
- write_gicreg(cpu_if->vgic_ap1r[3], ICH_AP1R3_EL2);
- write_gicreg(cpu_if->vgic_ap1r[2], ICH_AP1R2_EL2);
- case 6:
- write_gicreg(cpu_if->vgic_ap1r[1], ICH_AP1R1_EL2);
- default:
- write_gicreg(cpu_if->vgic_ap1r[0], ICH_AP1R0_EL2);
- }
-
- switch (nr_pri_bits) {
- case 7:
write_gicreg(cpu_if->vgic_ap0r[3], ICH_AP0R3_EL2);
write_gicreg(cpu_if->vgic_ap0r[2], ICH_AP0R2_EL2);
case 6:
@@ -167,6 +157,16 @@
write_gicreg(cpu_if->vgic_ap0r[0], ICH_AP0R0_EL2);
}
+ switch (nr_pri_bits) {
+ case 7:
+ write_gicreg(cpu_if->vgic_ap1r[3], ICH_AP1R3_EL2);
+ write_gicreg(cpu_if->vgic_ap1r[2], ICH_AP1R2_EL2);
+ case 6:
+ write_gicreg(cpu_if->vgic_ap1r[1], ICH_AP1R1_EL2);
+ default:
+ write_gicreg(cpu_if->vgic_ap1r[0], ICH_AP1R0_EL2);
+ }
+
switch (max_lr_idx) {
case 15:
write_gicreg(cpu_if->vgic_lr[VGIC_V3_LR_INDEX(15)], ICH_LR15_EL2);
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 648112e..4d1ac81 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -27,7 +27,11 @@
#define PSTATE_FAULT_BITS_64 (PSR_MODE_EL1h | PSR_A_BIT | PSR_F_BIT | \
PSR_I_BIT | PSR_D_BIT)
-#define EL1_EXCEPT_SYNC_OFFSET 0x200
+
+#define CURRENT_EL_SP_EL0_VECTOR 0x0
+#define CURRENT_EL_SP_ELx_VECTOR 0x200
+#define LOWER_EL_AArch64_VECTOR 0x400
+#define LOWER_EL_AArch32_VECTOR 0x600
static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
{
@@ -97,6 +101,34 @@
*fsr = 0x14;
}
+enum exception_type {
+ except_type_sync = 0,
+ except_type_irq = 0x80,
+ except_type_fiq = 0x100,
+ except_type_serror = 0x180,
+};
+
+static u64 get_except_vector(struct kvm_vcpu *vcpu, enum exception_type type)
+{
+ u64 exc_offset;
+
+ switch (*vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT)) {
+ case PSR_MODE_EL1t:
+ exc_offset = CURRENT_EL_SP_EL0_VECTOR;
+ break;
+ case PSR_MODE_EL1h:
+ exc_offset = CURRENT_EL_SP_ELx_VECTOR;
+ break;
+ case PSR_MODE_EL0t:
+ exc_offset = LOWER_EL_AArch64_VECTOR;
+ break;
+ default:
+ exc_offset = LOWER_EL_AArch32_VECTOR;
+ }
+
+ return vcpu_sys_reg(vcpu, VBAR_EL1) + exc_offset + type;
+}
+
static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr)
{
unsigned long cpsr = *vcpu_cpsr(vcpu);
@@ -108,8 +140,8 @@
*vcpu_spsr(vcpu) = cpsr;
*vcpu_elr_el1(vcpu) = *vcpu_pc(vcpu);
+ *vcpu_pc(vcpu) = get_except_vector(vcpu, except_type_sync);
*vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64;
- *vcpu_pc(vcpu) = vcpu_sys_reg(vcpu, VBAR_EL1) + EL1_EXCEPT_SYNC_OFFSET;
vcpu_sys_reg(vcpu, FAR_EL1) = addr;
@@ -143,8 +175,8 @@
*vcpu_spsr(vcpu) = cpsr;
*vcpu_elr_el1(vcpu) = *vcpu_pc(vcpu);
+ *vcpu_pc(vcpu) = get_except_vector(vcpu, except_type_sync);
*vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64;
- *vcpu_pc(vcpu) = vcpu_sys_reg(vcpu, VBAR_EL1) + EL1_EXCEPT_SYNC_OFFSET;
/*
* Build an unknown exception, depending on the instruction
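[Editor's sketch] get_except_vector() composes the guest vector address from three parts: VBAR_EL1 as the base, a 0x200-sized bank chosen by where the guest was executing, and the exception type offset within that bank. A tiny sketch of the address math — the offsets mirror the diff above, the VBAR value is invented:

    #include <stdint.h>
    #include <stdio.h>

    enum exception_type {
            except_type_sync   = 0,
            except_type_irq    = 0x80,
            except_type_fiq    = 0x100,
            except_type_serror = 0x180,
    };

    enum source { SP_EL0, SP_ELx, LOWER_A64, LOWER_A32 };

    static uint64_t except_vector(uint64_t vbar, enum source src,
                                  enum exception_type type)
    {
            static const uint64_t bank[] = { 0x0, 0x200, 0x400, 0x600 };

            return vbar + bank[src] + type;
    }

    int main(void)
    {
            /* EL1h guest taking a synchronous exception: base + 0x200 */
            printf("%#llx\n", (unsigned long long)
                   except_vector(0xffff000008000000ULL, SP_ELx,
                                 except_type_sync));
            return 0;
    }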
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index eec3598b..2e90371 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1007,10 +1007,9 @@
if (likely(r->access(vcpu, params, r))) {
/* Skip instruction, since it was emulated */
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ /* Handled */
+ return 0;
}
-
- /* Handled */
- return 0;
}
/* Not handled */
@@ -1043,7 +1042,7 @@
}
/**
- * kvm_handle_cp_64 -- handles a mrrc/mcrr trap on a guest CP15 access
+ * kvm_handle_cp_64 -- handles a mrrc/mcrr trap on a guest CP14/CP15 access
* @vcpu: The VCPU pointer
* @run: The kvm_run struct
*/
@@ -1095,7 +1094,7 @@
}
/**
- * kvm_handle_cp15_32 -- handles a mrc/mcr trap on a guest CP15 access
+ * kvm_handle_cp_32 -- handles a mrc/mcr trap on a guest CP14/CP15 access
* @vcpu: The VCPU pointer
* @run: The kvm_run struct
*/
diff --git a/arch/arm64/lib/strnlen.S b/arch/arm64/lib/strnlen.S
index 2ca6657..eae38da 100644
--- a/arch/arm64/lib/strnlen.S
+++ b/arch/arm64/lib/strnlen.S
@@ -168,4 +168,4 @@
.Lhit_limit:
mov len, limit
ret
-ENDPROC(strnlen)
+ENDPIPROC(strnlen)
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 331c4ca..a6e757c 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -933,6 +933,10 @@
ret = register_iommu_dma_ops_notifier(&platform_bus_type);
if (!ret)
ret = register_iommu_dma_ops_notifier(&amba_bustype);
+
+ /* handle devices queued before this arch_initcall */
+ if (!ret)
+ __iommu_attach_notifier(NULL, BUS_NOTIFY_ADD_DEVICE, NULL);
return ret;
}
arch_initcall(__iommu_dma_init);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 92ddac1..abe2a95 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -371,6 +371,13 @@
return 0;
}
+static int do_alignment_fault(unsigned long addr, unsigned int esr,
+ struct pt_regs *regs)
+{
+ do_bad_area(addr, esr, regs);
+ return 0;
+}
+
/*
* This abort handler always returns "fault".
*/
@@ -418,7 +425,7 @@
{ do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" },
{ do_bad, SIGBUS, 0, "synchronous parity error (translation table walk)" },
{ do_bad, SIGBUS, 0, "unknown 32" },
- { do_bad, SIGBUS, BUS_ADRALN, "alignment fault" },
+ { do_alignment_fault, SIGBUS, BUS_ADRALN, "alignment fault" },
{ do_bad, SIGBUS, 0, "unknown 34" },
{ do_bad, SIGBUS, 0, "unknown 35" },
{ do_bad, SIGBUS, 0, "unknown 36" },
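[Editor's sketch] do_alignment_fault() slots into the fault dispatch table: each fault code indexes a { handler, signal, code, name } entry, so delivering SIGBUS/BUS_ADRALN to userspace for alignment faults is just a matter of pointing that slot at a real handler instead of do_bad. A miniature version of the table-driven dispatch — the entries and codes below are invented:

    #include <stdio.h>

    struct fault_info {
            int (*fn)(unsigned long addr);
            int sig;
            const char *name;
    };

    static int do_bad(unsigned long addr)
    {
            return 1;       /* unhandled: caller raises the signal */
    }

    static int do_alignment(unsigned long addr)
    {
            printf("alignment fault at %#lx -> deliver to user\n", addr);
            return 0;       /* handled */
    }

    static const struct fault_info fault_info[] = {
            [0] = { do_bad,       7 /* SIGBUS */, "unknown" },
            [1] = { do_alignment, 7 /* SIGBUS */, "alignment fault" },
    };

    int main(void)
    {
            unsigned int esr_code = 1;      /* invented fault code */

            if (fault_info[esr_code].fn(0x1003))
                    printf("raise signal %d (%s)\n",
                           fault_info[esr_code].sig,
                           fault_info[esr_code].name);
            return 0;
    }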
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 4c893b5..232f787 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -53,10 +53,10 @@
#ifdef CONFIG_COMPAT
if (test_thread_flag(TIF_32BIT))
- rnd = (unsigned long)get_random_int() & ((1 << mmap_rnd_compat_bits) - 1);
+ rnd = get_random_long() & ((1UL << mmap_rnd_compat_bits) - 1);
else
#endif
- rnd = (unsigned long)get_random_int() & ((1 << mmap_rnd_bits) - 1);
+ rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);
return rnd << PAGE_SHIFT;
}
diff --git a/arch/m68k/configs/amiga_defconfig b/arch/m68k/configs/amiga_defconfig
index fc96e81..d1fc479 100644
--- a/arch/m68k/configs/amiga_defconfig
+++ b/arch/m68k/configs/amiga_defconfig
@@ -108,6 +108,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -266,6 +268,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -366,6 +374,7 @@
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
CONFIG_HYDRA=y
CONFIG_APNE=y
CONFIG_ZORRO8390=y
diff --git a/arch/m68k/configs/apollo_defconfig b/arch/m68k/configs/apollo_defconfig
index 05c904f..9bfe8be 100644
--- a/arch/m68k/configs/apollo_defconfig
+++ b/arch/m68k/configs/apollo_defconfig
@@ -106,6 +106,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -264,6 +266,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -344,6 +352,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
diff --git a/arch/m68k/configs/atari_defconfig b/arch/m68k/configs/atari_defconfig
index d572b73..ebdcfae 100644
--- a/arch/m68k/configs/atari_defconfig
+++ b/arch/m68k/configs/atari_defconfig
@@ -106,6 +106,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -264,6 +266,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -353,6 +361,7 @@
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
CONFIG_NE2000=y
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
diff --git a/arch/m68k/configs/bvme6000_defconfig b/arch/m68k/configs/bvme6000_defconfig
index 11a30c6..8acc65e 100644
--- a/arch/m68k/configs/bvme6000_defconfig
+++ b/arch/m68k/configs/bvme6000_defconfig
@@ -104,6 +104,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -262,6 +264,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -343,6 +351,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
diff --git a/arch/m68k/configs/hp300_defconfig b/arch/m68k/configs/hp300_defconfig
index 6630a51..0c6a3d5 100644
--- a/arch/m68k/configs/hp300_defconfig
+++ b/arch/m68k/configs/hp300_defconfig
@@ -106,6 +106,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -264,6 +266,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -345,6 +353,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
diff --git a/arch/m68k/configs/mac_defconfig b/arch/m68k/configs/mac_defconfig
index 1d90b71..12a8a6c 100644
--- a/arch/m68k/configs/mac_defconfig
+++ b/arch/m68k/configs/mac_defconfig
@@ -105,6 +105,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -266,6 +268,12 @@
CONFIG_IPDDP=m
CONFIG_IPDDP_ENCAP=y
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -362,6 +370,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
CONFIG_MACSONIC=y
+# CONFIG_NET_VENDOR_NETRONOME is not set
CONFIG_MAC8390=y
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
diff --git a/arch/m68k/configs/multi_defconfig b/arch/m68k/configs/multi_defconfig
index 1fd21c1..64ff2dc 100644
--- a/arch/m68k/configs/multi_defconfig
+++ b/arch/m68k/configs/multi_defconfig
@@ -115,6 +115,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -276,6 +278,12 @@
CONFIG_IPDDP=m
CONFIG_IPDDP_ENCAP=y
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -404,6 +412,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
CONFIG_MACSONIC=y
+# CONFIG_NET_VENDOR_NETRONOME is not set
CONFIG_HYDRA=y
CONFIG_MAC8390=y
CONFIG_NE2000=y
diff --git a/arch/m68k/configs/mvme147_defconfig b/arch/m68k/configs/mvme147_defconfig
index 74e10f7..07fc6ab 100644
--- a/arch/m68k/configs/mvme147_defconfig
+++ b/arch/m68k/configs/mvme147_defconfig
@@ -103,6 +103,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -261,6 +263,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -343,6 +351,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
diff --git a/arch/m68k/configs/mvme16x_defconfig b/arch/m68k/configs/mvme16x_defconfig
index 7034e71..69903de 100644
--- a/arch/m68k/configs/mvme16x_defconfig
+++ b/arch/m68k/configs/mvme16x_defconfig
@@ -104,6 +104,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -262,6 +264,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -343,6 +351,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
diff --git a/arch/m68k/configs/q40_defconfig b/arch/m68k/configs/q40_defconfig
index f7deb5f..bd84016 100644
--- a/arch/m68k/configs/q40_defconfig
+++ b/arch/m68k/configs/q40_defconfig
@@ -104,6 +104,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -262,6 +264,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -352,6 +360,7 @@
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
CONFIG_NE2000=y
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
diff --git a/arch/m68k/configs/sun3_defconfig b/arch/m68k/configs/sun3_defconfig
index 0ce79eb..5f9fb3a 100644
--- a/arch/m68k/configs/sun3_defconfig
+++ b/arch/m68k/configs/sun3_defconfig
@@ -101,6 +101,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -259,6 +261,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -340,6 +348,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
diff --git a/arch/m68k/configs/sun3x_defconfig b/arch/m68k/configs/sun3x_defconfig
index 4cb787e..5d1c674 100644
--- a/arch/m68k/configs/sun3x_defconfig
+++ b/arch/m68k/configs/sun3x_defconfig
@@ -101,6 +101,8 @@
CONFIG_NFT_QUEUE=m
CONFIG_NFT_REJECT=m
CONFIG_NFT_COMPAT=m
+CONFIG_NFT_DUP_NETDEV=m
+CONFIG_NFT_FWD_NETDEV=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
@@ -259,6 +261,12 @@
CONFIG_BRIDGE=m
CONFIG_ATALK=m
CONFIG_6LOWPAN=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_HOP=m
+CONFIG_6LOWPAN_GHC_UDP=m
+CONFIG_6LOWPAN_GHC_ICMPV6=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_DEST=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG=m
+CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE=m
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_DAT=y
@@ -341,6 +349,7 @@
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_NETRONOME is not set
# CONFIG_NET_VENDOR_QUALCOMM is not set
# CONFIG_NET_VENDOR_RENESAS is not set
# CONFIG_NET_VENDOR_ROCKER is not set
diff --git a/arch/m68k/include/asm/unistd.h b/arch/m68k/include/asm/unistd.h
index f9d96bf..bafaff6 100644
--- a/arch/m68k/include/asm/unistd.h
+++ b/arch/m68k/include/asm/unistd.h
@@ -4,7 +4,7 @@
#include <uapi/asm/unistd.h>
-#define NR_syscalls 376
+#define NR_syscalls 377
#define __ARCH_WANT_OLD_READDIR
#define __ARCH_WANT_OLD_STAT
diff --git a/arch/m68k/include/uapi/asm/unistd.h b/arch/m68k/include/uapi/asm/unistd.h
index 36cf129..0ca7296 100644
--- a/arch/m68k/include/uapi/asm/unistd.h
+++ b/arch/m68k/include/uapi/asm/unistd.h
@@ -381,5 +381,6 @@
#define __NR_userfaultfd 373
#define __NR_membarrier 374
#define __NR_mlock2 375
+#define __NR_copy_file_range 376
#endif /* _UAPI_ASM_M68K_UNISTD_H_ */
diff --git a/arch/m68k/kernel/syscalltable.S b/arch/m68k/kernel/syscalltable.S
index 282cd90..8bb9426 100644
--- a/arch/m68k/kernel/syscalltable.S
+++ b/arch/m68k/kernel/syscalltable.S
@@ -396,3 +396,4 @@
.long sys_userfaultfd
.long sys_membarrier
.long sys_mlock2 /* 375 */
+ .long sys_copy_file_range
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 57a945e..74a3db9 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2085,7 +2085,7 @@
config PAGE_SIZE_64KB
bool "64kB"
- depends on !CPU_R3000 && !CPU_TX39XX
+ depends on !CPU_R3000 && !CPU_TX39XX && !CPU_R6000
help
Using 64kB page size will result in a higher performance kernel at
the price of higher memory consumption. This option is available on
diff --git a/arch/mips/boot/dts/brcm/bcm6328.dtsi b/arch/mips/boot/dts/brcm/bcm6328.dtsi
index 459b9b2..d61b161 100644
--- a/arch/mips/boot/dts/brcm/bcm6328.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm6328.dtsi
@@ -74,6 +74,7 @@
timer: timer@10000040 {
compatible = "syscon";
reg = <0x10000040 0x2c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/boot/dts/brcm/bcm7125.dtsi b/arch/mips/boot/dts/brcm/bcm7125.dtsi
index 4fc7ece..1a7efa8 100644
--- a/arch/mips/boot/dts/brcm/bcm7125.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7125.dtsi
@@ -98,6 +98,7 @@
sun_top_ctrl: syscon@404000 {
compatible = "brcm,bcm7125-sun-top-ctrl", "syscon";
reg = <0x404000 0x60c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/boot/dts/brcm/bcm7346.dtsi b/arch/mips/boot/dts/brcm/bcm7346.dtsi
index a3039bb..d4bf52c 100644
--- a/arch/mips/boot/dts/brcm/bcm7346.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7346.dtsi
@@ -118,6 +118,7 @@
sun_top_ctrl: syscon@404000 {
compatible = "brcm,bcm7346-sun-top-ctrl", "syscon";
reg = <0x404000 0x51c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/boot/dts/brcm/bcm7358.dtsi b/arch/mips/boot/dts/brcm/bcm7358.dtsi
index 4274ff4..8e25016 100644
--- a/arch/mips/boot/dts/brcm/bcm7358.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7358.dtsi
@@ -112,6 +112,7 @@
sun_top_ctrl: syscon@404000 {
compatible = "brcm,bcm7358-sun-top-ctrl", "syscon";
reg = <0x404000 0x51c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/boot/dts/brcm/bcm7360.dtsi b/arch/mips/boot/dts/brcm/bcm7360.dtsi
index 0dcc9163..7e5f760 100644
--- a/arch/mips/boot/dts/brcm/bcm7360.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7360.dtsi
@@ -112,6 +112,7 @@
sun_top_ctrl: syscon@404000 {
compatible = "brcm,bcm7360-sun-top-ctrl", "syscon";
reg = <0x404000 0x51c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/boot/dts/brcm/bcm7362.dtsi b/arch/mips/boot/dts/brcm/bcm7362.dtsi
index 2f3f9fc..c739ea7 100644
--- a/arch/mips/boot/dts/brcm/bcm7362.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7362.dtsi
@@ -118,6 +118,7 @@
sun_top_ctrl: syscon@404000 {
compatible = "brcm,bcm7362-sun-top-ctrl", "syscon";
reg = <0x404000 0x51c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/boot/dts/brcm/bcm7420.dtsi b/arch/mips/boot/dts/brcm/bcm7420.dtsi
index bee221b..5f55d0a 100644
--- a/arch/mips/boot/dts/brcm/bcm7420.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7420.dtsi
@@ -99,6 +99,7 @@
sun_top_ctrl: syscon@404000 {
compatible = "brcm,bcm7420-sun-top-ctrl", "syscon";
reg = <0x404000 0x60c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/boot/dts/brcm/bcm7425.dtsi b/arch/mips/boot/dts/brcm/bcm7425.dtsi
index 571f30f..e24d41a 100644
--- a/arch/mips/boot/dts/brcm/bcm7425.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7425.dtsi
@@ -100,6 +100,7 @@
sun_top_ctrl: syscon@404000 {
compatible = "brcm,bcm7425-sun-top-ctrl", "syscon";
reg = <0x404000 0x51c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/boot/dts/brcm/bcm7435.dtsi b/arch/mips/boot/dts/brcm/bcm7435.dtsi
index 614ee21..8b9432c 100644
--- a/arch/mips/boot/dts/brcm/bcm7435.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7435.dtsi
@@ -114,6 +114,7 @@
sun_top_ctrl: syscon@404000 {
compatible = "brcm,bcm7425-sun-top-ctrl", "syscon";
reg = <0x404000 0x51c>;
+ little-endian;
};
reboot {
diff --git a/arch/mips/include/asm/elf.h b/arch/mips/include/asm/elf.h
index cefb7a5..e090fc3 100644
--- a/arch/mips/include/asm/elf.h
+++ b/arch/mips/include/asm/elf.h
@@ -227,7 +227,7 @@
int __res = 1; \
struct elfhdr *__h = (hdr); \
\
- if (__h->e_machine != EM_MIPS) \
+ if (!mips_elf_check_machine(__h)) \
__res = 0; \
if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
__res = 0; \
@@ -258,7 +258,7 @@
int __res = 1; \
struct elfhdr *__h = (hdr); \
\
- if (__h->e_machine != EM_MIPS) \
+ if (!mips_elf_check_machine(__h)) \
__res = 0; \
if (__h->e_ident[EI_CLASS] != ELFCLASS64) \
__res = 0; \
@@ -285,6 +285,11 @@
#endif /* !defined(ELF_ARCH) */
+#define mips_elf_check_machine(x) ((x)->e_machine == EM_MIPS)
+
+#define vmcore_elf32_check_arch mips_elf_check_machine
+#define vmcore_elf64_check_arch mips_elf_check_machine
+
struct mips_abi;
extern struct mips_abi mips_abi;
diff --git a/arch/mips/include/asm/fpu.h b/arch/mips/include/asm/fpu.h
index 9cbf383..f06f97b 100644
--- a/arch/mips/include/asm/fpu.h
+++ b/arch/mips/include/asm/fpu.h
@@ -179,6 +179,10 @@
if (save)
_save_fp(tsk);
__disable_fpu();
+ } else {
+ /* FPU should not have been left enabled with no owner */
+ WARN(read_c0_status() & ST0_CU1,
+ "Orphaned FPU left enabled");
}
KSTK_STATUS(tsk) &= ~ST0_CU1;
clear_tsk_thread_flag(tsk, TIF_USEDFPU);
diff --git a/arch/mips/include/asm/octeon/octeon-feature.h b/arch/mips/include/asm/octeon/octeon-feature.h
index 8ebd3f57..3ed10a8 100644
--- a/arch/mips/include/asm/octeon/octeon-feature.h
+++ b/arch/mips/include/asm/octeon/octeon-feature.h
@@ -128,7 +128,8 @@
case OCTEON_FEATURE_PCIE:
return OCTEON_IS_MODEL(OCTEON_CN56XX)
|| OCTEON_IS_MODEL(OCTEON_CN52XX)
- || OCTEON_IS_MODEL(OCTEON_CN6XXX);
+ || OCTEON_IS_MODEL(OCTEON_CN6XXX)
+ || OCTEON_IS_MODEL(OCTEON_CN7XXX);
case OCTEON_FEATURE_SRIO:
return OCTEON_IS_MODEL(OCTEON_CN63XX)
diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
index 3f832c3..041153f 100644
--- a/arch/mips/include/asm/processor.h
+++ b/arch/mips/include/asm/processor.h
@@ -45,7 +45,7 @@
* User space process size: 2GB. This is hardcoded into a few places,
* so don't change it unless you know what you are doing.
*/
-#define TASK_SIZE 0x7fff8000UL
+#define TASK_SIZE 0x80000000UL
#endif
#define STACK_TOP_MAX TASK_SIZE
diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
index a71da57..eebf395 100644
--- a/arch/mips/include/asm/stackframe.h
+++ b/arch/mips/include/asm/stackframe.h
@@ -289,7 +289,7 @@
.set reorder
.set noat
mfc0 a0, CP0_STATUS
- li v1, 0xff00
+ li v1, ST0_CU1 | ST0_IM
ori a0, STATMASK
xori a0, STATMASK
mtc0 a0, CP0_STATUS
@@ -330,7 +330,7 @@
ori a0, STATMASK
xori a0, STATMASK
mtc0 a0, CP0_STATUS
- li v1, 0xff00
+ li v1, ST0_CU1 | ST0_FR | ST0_IM
and a0, v1
LONG_L v0, PT_STATUS(sp)
nor v1, $0, v1
diff --git a/arch/mips/include/asm/syscall.h b/arch/mips/include/asm/syscall.h
index 6499d93..47bc45a 100644
--- a/arch/mips/include/asm/syscall.h
+++ b/arch/mips/include/asm/syscall.h
@@ -101,10 +101,8 @@
/* O32 ABI syscall() - Either 64-bit with O32 or 32-bit */
if ((config_enabled(CONFIG_32BIT) ||
test_tsk_thread_flag(task, TIF_32BIT_REGS)) &&
- (regs->regs[2] == __NR_syscall)) {
+ (regs->regs[2] == __NR_syscall))
i++;
- n++;
- }
while (n--)
ret |= mips_get_syscall_arg(args++, task, regs, i++);
diff --git a/arch/mips/include/uapi/asm/unistd.h b/arch/mips/include/uapi/asm/unistd.h
index 90f03a7..3129795 100644
--- a/arch/mips/include/uapi/asm/unistd.h
+++ b/arch/mips/include/uapi/asm/unistd.h
@@ -380,16 +380,17 @@
#define __NR_userfaultfd (__NR_Linux + 357)
#define __NR_membarrier (__NR_Linux + 358)
#define __NR_mlock2 (__NR_Linux + 359)
+#define __NR_copy_file_range (__NR_Linux + 360)
/*
* Offset of the last Linux o32 flavoured syscall
*/
-#define __NR_Linux_syscalls 359
+#define __NR_Linux_syscalls 360
#endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */
#define __NR_O32_Linux 4000
-#define __NR_O32_Linux_syscalls 359
+#define __NR_O32_Linux_syscalls 360
#if _MIPS_SIM == _MIPS_SIM_ABI64
@@ -717,16 +718,17 @@
#define __NR_userfaultfd (__NR_Linux + 317)
#define __NR_membarrier (__NR_Linux + 318)
#define __NR_mlock2 (__NR_Linux + 319)
+#define __NR_copy_file_range (__NR_Linux + 320)
/*
* Offset of the last Linux 64-bit flavoured syscall
*/
-#define __NR_Linux_syscalls 319
+#define __NR_Linux_syscalls 320
#endif /* _MIPS_SIM == _MIPS_SIM_ABI64 */
#define __NR_64_Linux 5000
-#define __NR_64_Linux_syscalls 319
+#define __NR_64_Linux_syscalls 320
#if _MIPS_SIM == _MIPS_SIM_NABI32
@@ -1058,15 +1060,16 @@
#define __NR_userfaultfd (__NR_Linux + 321)
#define __NR_membarrier (__NR_Linux + 322)
#define __NR_mlock2 (__NR_Linux + 323)
+#define __NR_copy_file_range (__NR_Linux + 324)
/*
* Offset of the last N32 flavoured syscall
*/
-#define __NR_Linux_syscalls 323
+#define __NR_Linux_syscalls 324
#endif /* _MIPS_SIM == _MIPS_SIM_NABI32 */
#define __NR_N32_Linux 6000
-#define __NR_N32_Linux_syscalls 323
+#define __NR_N32_Linux_syscalls 324
#endif /* _UAPI_ASM_UNISTD_H */
diff --git a/arch/mips/kernel/binfmt_elfn32.c b/arch/mips/kernel/binfmt_elfn32.c
index 1188e00..1b992c6 100644
--- a/arch/mips/kernel/binfmt_elfn32.c
+++ b/arch/mips/kernel/binfmt_elfn32.c
@@ -35,7 +35,7 @@
int __res = 1; \
struct elfhdr *__h = (hdr); \
\
- if (__h->e_machine != EM_MIPS) \
+ if (!mips_elf_check_machine(__h)) \
__res = 0; \
if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
__res = 0; \
diff --git a/arch/mips/kernel/binfmt_elfo32.c b/arch/mips/kernel/binfmt_elfo32.c
index 9287678..abd3aff 100644
--- a/arch/mips/kernel/binfmt_elfo32.c
+++ b/arch/mips/kernel/binfmt_elfo32.c
@@ -47,7 +47,7 @@
int __res = 1; \
struct elfhdr *__h = (hdr); \
\
- if (__h->e_machine != EM_MIPS) \
+ if (!mips_elf_check_machine(__h)) \
__res = 0; \
if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
__res = 0; \
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index f2975d4..eddd5fd 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -65,12 +65,10 @@
status = regs->cp0_status & ~(ST0_CU0|ST0_CU1|ST0_FR|KU_MASK);
status |= KU_USER;
regs->cp0_status = status;
- clear_used_math();
- clear_fpu_owner();
- init_dsp();
- clear_thread_flag(TIF_USEDMSA);
+ lose_fpu(0);
clear_thread_flag(TIF_MSA_CTX_LIVE);
- disable_msa();
+ clear_used_math();
+ init_dsp();
regs->cp0_epc = pc;
regs->regs[29] = sp;
}
diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S
index 2d23c83..a5631744 100644
--- a/arch/mips/kernel/scall32-o32.S
+++ b/arch/mips/kernel/scall32-o32.S
@@ -595,3 +595,4 @@
PTR sys_userfaultfd
PTR sys_membarrier
PTR sys_mlock2
+ PTR sys_copy_file_range /* 4360 */
diff --git a/arch/mips/kernel/scall64-64.S b/arch/mips/kernel/scall64-64.S
index deac633..2b2dc14 100644
--- a/arch/mips/kernel/scall64-64.S
+++ b/arch/mips/kernel/scall64-64.S
@@ -433,4 +433,5 @@
PTR sys_userfaultfd
PTR sys_membarrier
PTR sys_mlock2
+ PTR sys_copy_file_range /* 5320 */
.size sys_call_table,.-sys_call_table
diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S
index 5a69eb4..2bf5c85 100644
--- a/arch/mips/kernel/scall64-n32.S
+++ b/arch/mips/kernel/scall64-n32.S
@@ -423,4 +423,5 @@
PTR sys_userfaultfd
PTR sys_membarrier
PTR sys_mlock2
+ PTR sys_copy_file_range
.size sysn32_call_table,.-sysn32_call_table
diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S
index e4b6d7c..c5b759e 100644
--- a/arch/mips/kernel/scall64-o32.S
+++ b/arch/mips/kernel/scall64-o32.S
@@ -578,4 +578,5 @@
PTR sys_userfaultfd
PTR sys_membarrier
PTR sys_mlock2
+ PTR sys_copy_file_range /* 4360 */
.size sys32_call_table,.-sys32_call_table
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 569a7d5..5fdaf8b 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -782,6 +782,7 @@
void __init setup_arch(char **cmdline_p)
{
cpu_probe();
+ mips_cm_probe();
prom_init();
setup_early_fdc_console();
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index bafcb7a..ae790c5 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -663,7 +663,7 @@
return -1;
}
-static int simulate_rdhwr_mm(struct pt_regs *regs, unsigned short opcode)
+static int simulate_rdhwr_mm(struct pt_regs *regs, unsigned int opcode)
{
if ((opcode & MM_POOL32A_FUNC) == MM_RDHWR) {
int rd = (opcode & MM_RS) >> 16;
@@ -1119,11 +1119,12 @@
if (get_isa16_mode(regs->cp0_epc)) {
unsigned short mmop[2] = { 0 };
- if (unlikely(get_user(mmop[0], epc) < 0))
+ if (unlikely(get_user(mmop[0], (u16 __user *)epc + 0) < 0))
status = SIGSEGV;
- if (unlikely(get_user(mmop[1], epc) < 0))
+ if (unlikely(get_user(mmop[1], (u16 __user *)epc + 1) < 0))
status = SIGSEGV;
- opcode = (mmop[0] << 16) | mmop[1];
+ opcode = mmop[0];
+ opcode = (opcode << 16) | mmop[1];
if (status < 0)
status = simulate_rdhwr_mm(regs, opcode);
@@ -1369,26 +1370,12 @@
if (unlikely(compute_return_epc(regs) < 0))
break;
- if (get_isa16_mode(regs->cp0_epc)) {
- unsigned short mmop[2] = { 0 };
-
- if (unlikely(get_user(mmop[0], epc) < 0))
- status = SIGSEGV;
- if (unlikely(get_user(mmop[1], epc) < 0))
- status = SIGSEGV;
- opcode = (mmop[0] << 16) | mmop[1];
-
- if (status < 0)
- status = simulate_rdhwr_mm(regs, opcode);
- } else {
+ if (!get_isa16_mode(regs->cp0_epc)) {
if (unlikely(get_user(opcode, epc) < 0))
status = SIGSEGV;
if (!cpu_has_llsc && status < 0)
status = simulate_llsc(regs, opcode);
-
- if (status < 0)
- status = simulate_rdhwr_normal(regs, opcode);
}
if (status < 0)
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 5c81fdd..3530376 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -146,7 +146,7 @@
{
unsigned long rnd;
- rnd = (unsigned long)get_random_int();
+ rnd = get_random_long();
rnd <<= PAGE_SHIFT;
if (TASK_IS_32BIT_ADDR)
rnd &= 0xfffffful;
@@ -174,7 +174,7 @@
static inline unsigned long brk_rnd(void)
{
- unsigned long rnd = get_random_int();
+ unsigned long rnd = get_random_long();
rnd = rnd << PAGE_SHIFT;
/* 8MB for 32bit, 256MB for 64bit */
diff --git a/arch/mips/mm/sc-mips.c b/arch/mips/mm/sc-mips.c
index 3bd0597..2496475 100644
--- a/arch/mips/mm/sc-mips.c
+++ b/arch/mips/mm/sc-mips.c
@@ -181,10 +181,6 @@
return 1;
}
-void __weak platform_early_l2_init(void)
-{
-}
-
static inline int __init mips_sc_probe(void)
{
struct cpuinfo_mips *c = &current_cpu_data;
@@ -194,12 +190,6 @@
/* Mark as not present until probe completed */
c->scache.flags |= MIPS_CACHE_NOT_PRESENT;
- /*
- * Do we need some platform specific probing before
- * we configure L2?
- */
- platform_early_l2_init();
-
if (mips_cm_revision() >= CM_REV_CM3)
return mips_sc_probe_cm3();
diff --git a/arch/mips/mti-malta/malta-init.c b/arch/mips/mti-malta/malta-init.c
index 571148c..dc2c521 100644
--- a/arch/mips/mti-malta/malta-init.c
+++ b/arch/mips/mti-malta/malta-init.c
@@ -293,7 +293,6 @@
console_config();
#endif
/* Early detection of CMP support */
- mips_cm_probe();
mips_cpc_probe();
if (!register_cps_smp_ops())
@@ -304,10 +303,3 @@
return;
register_up_smp_ops();
}
-
-void platform_early_l2_init(void)
-{
- /* L2 configuration lives in the CM3 */
- if (mips_cm_revision() >= CM_REV_CM3)
- mips_cm_probe();
-}
diff --git a/arch/mips/pci/pci-mt7620.c b/arch/mips/pci/pci-mt7620.c
index a009ee4..1ae932c2 100644
--- a/arch/mips/pci/pci-mt7620.c
+++ b/arch/mips/pci/pci-mt7620.c
@@ -297,12 +297,12 @@
return PTR_ERR(rstpcie0);
bridge_base = devm_ioremap_resource(&pdev->dev, bridge_res);
- if (!bridge_base)
- return -ENOMEM;
+ if (IS_ERR(bridge_base))
+ return PTR_ERR(bridge_base);
pcie_base = devm_ioremap_resource(&pdev->dev, pcie_res);
- if (!pcie_base)
- return -ENOMEM;
+ if (IS_ERR(pcie_base))
+ return PTR_ERR(pcie_base);
iomem_resource.start = 0;
iomem_resource.end = ~0;
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e4824fd..9faa18c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -557,7 +557,7 @@
config PPC_4K_PAGES
bool "4k page size"
- select HAVE_ARCH_SOFT_DIRTY if CHECKPOINT_RESTORE && PPC_BOOK3S
+ select HAVE_ARCH_SOFT_DIRTY if PPC_BOOK3S_64
config PPC_16K_PAGES
bool "16k page size"
@@ -566,7 +566,7 @@
config PPC_64K_PAGES
bool "64k page size"
depends on !PPC_FSL_BOOK3E && (44x || PPC_STD_MMU_64 || PPC_BOOK3E_64)
- select HAVE_ARCH_SOFT_DIRTY if CHECKPOINT_RESTORE && PPC_BOOK3S
+ select HAVE_ARCH_SOFT_DIRTY if PPC_BOOK3S_64
config PPC_256K_PAGES
bool "256k page size"
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 8d1c41d..ac07a30 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -281,6 +281,10 @@
extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp);
+#define __HAVE_ARCH_PMDP_HUGE_SPLIT_PREPARE
+extern void pmdp_huge_split_prepare(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp);
+
#define pmd_move_must_withdraw pmd_move_must_withdraw
struct spinlock;
static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
diff --git a/arch/powerpc/include/asm/eeh.h b/arch/powerpc/include/asm/eeh.h
index c5eb86f..867c39b 100644
--- a/arch/powerpc/include/asm/eeh.h
+++ b/arch/powerpc/include/asm/eeh.h
@@ -81,6 +81,7 @@
#define EEH_PE_KEEP (1 << 8) /* Keep PE on hotplug */
#define EEH_PE_CFG_RESTRICTED (1 << 9) /* Block config on error */
#define EEH_PE_REMOVED (1 << 10) /* Removed permanently */
+#define EEH_PE_PRI_BUS (1 << 11) /* Cached primary bus */
struct eeh_pe {
int type; /* PE type: PHB/Bus/Device */
diff --git a/arch/powerpc/include/asm/trace.h b/arch/powerpc/include/asm/trace.h
index 8e86b48..32e36b1 100644
--- a/arch/powerpc/include/asm/trace.h
+++ b/arch/powerpc/include/asm/trace.h
@@ -57,12 +57,14 @@
extern void hcall_tracepoint_regfunc(void);
extern void hcall_tracepoint_unregfunc(void);
-TRACE_EVENT_FN(hcall_entry,
+TRACE_EVENT_FN_COND(hcall_entry,
TP_PROTO(unsigned long opcode, unsigned long *args),
TP_ARGS(opcode, args),
+ TP_CONDITION(cpu_online(raw_smp_processor_id())),
+
TP_STRUCT__entry(
__field(unsigned long, opcode)
),
@@ -76,13 +78,15 @@
hcall_tracepoint_regfunc, hcall_tracepoint_unregfunc
);
-TRACE_EVENT_FN(hcall_exit,
+TRACE_EVENT_FN_COND(hcall_exit,
TP_PROTO(unsigned long opcode, unsigned long retval,
unsigned long *retbuf),
TP_ARGS(opcode, retval, retbuf),
+ TP_CONDITION(cpu_online(raw_smp_processor_id())),
+
TP_STRUCT__entry(
__field(unsigned long, opcode)
__field(unsigned long, retval)
diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
index 9387421..650cfb3 100644
--- a/arch/powerpc/kernel/eeh_driver.c
+++ b/arch/powerpc/kernel/eeh_driver.c
@@ -418,8 +418,7 @@
eeh_pcid_put(dev);
if (driver->err_handler &&
driver->err_handler->error_detected &&
- driver->err_handler->slot_reset &&
- driver->err_handler->resume)
+ driver->err_handler->slot_reset)
return NULL;
}
@@ -564,6 +563,7 @@
*/
eeh_pe_state_mark(pe, EEH_PE_KEEP);
if (bus) {
+ eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
pci_lock_rescan_remove();
pcibios_remove_pci_devices(bus);
pci_unlock_rescan_remove();
@@ -803,6 +803,7 @@
* their PCI config any more.
*/
if (frozen_bus) {
+ eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
pci_lock_rescan_remove();
@@ -886,6 +887,7 @@
continue;
/* Notify all devices to be down */
+ eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
bus = eeh_pe_bus_get(phb_pe);
eeh_pe_dev_traverse(pe,
eeh_report_failure, NULL);
diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
index ca9e537..98f8180 100644
--- a/arch/powerpc/kernel/eeh_pe.c
+++ b/arch/powerpc/kernel/eeh_pe.c
@@ -928,7 +928,7 @@
bus = pe->phb->bus;
} else if (pe->type & EEH_PE_BUS ||
pe->type & EEH_PE_DEVICE) {
- if (pe->bus) {
+ if (pe->state & EEH_PE_PRI_BUS) {
bus = pe->bus;
goto out;
}
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index ac64ffd..08b7a40 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -340,7 +340,7 @@
if (name[0] == '.') {
if (strcmp(name+1, "TOC.") == 0)
syms[i].st_shndx = SHN_ABS;
- memmove(name, name+1, strlen(name));
+ syms[i].st_name++;
}
}
}
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index dccc87e..3c5736e 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1768,9 +1768,9 @@
/* 8MB for 32bit, 1GB for 64bit */
if (is_32bit_task())
- rnd = (long)(get_random_int() % (1<<(23-PAGE_SHIFT)));
+ rnd = (get_random_long() % (1UL<<(23-PAGE_SHIFT)));
else
- rnd = (long)(get_random_int() % (1<<(30-PAGE_SHIFT)));
+ rnd = (get_random_long() % (1UL<<(30-PAGE_SHIFT)));
return rnd << PAGE_SHIFT;
}
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index 0762c1e..edb0991 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -111,7 +111,13 @@
*/
if (!(old_pte & _PAGE_COMBO)) {
flush_hash_page(vpn, rpte, MMU_PAGE_64K, ssize, flags);
- old_pte &= ~_PAGE_HASHPTE | _PAGE_F_GIX | _PAGE_F_SECOND;
+ /*
+ * Clear the old slot details from the old and new pte.
+ * On hash insert failure we use the old pte value, and we
+ * don't want stale slot information there if the insert fails.
+ */
+ old_pte &= ~(_PAGE_HASHPTE | _PAGE_F_GIX | _PAGE_F_SECOND);
+ new_pte &= ~(_PAGE_HASHPTE | _PAGE_F_GIX | _PAGE_F_SECOND);
goto htab_insert_hpte;
}
/*
diff --git a/arch/powerpc/mm/hugepage-hash64.c b/arch/powerpc/mm/hugepage-hash64.c
index 49b152b..eb2accd 100644
--- a/arch/powerpc/mm/hugepage-hash64.c
+++ b/arch/powerpc/mm/hugepage-hash64.c
@@ -78,9 +78,19 @@
* base page size. This is because demote_segment won't flush
* hash page table entries.
*/
- if ((old_pmd & _PAGE_HASHPTE) && !(old_pmd & _PAGE_COMBO))
+ if ((old_pmd & _PAGE_HASHPTE) && !(old_pmd & _PAGE_COMBO)) {
flush_hash_hugepage(vsid, ea, pmdp, MMU_PAGE_64K,
ssize, flags);
+ /*
+ * With THP, we also clear the slot information with
+ * respect to all the 64K hash ptes mapping the 16MB
+ * page. They are all invalid now. This makes sure we
+ * don't find the slot valid when we fault with the 4k
+ * base page size.
+ */
+ memset(hpte_slot_array, 0, PTE_FRAG_SIZE);
+ }
}
valid = hpte_valid(hpte_slot_array, index);
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index 0f0502e..4087705 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -59,9 +59,9 @@
/* 8MB for 32bit, 1GB for 64bit */
if (is_32bit_task())
- rnd = (unsigned long)get_random_int() % (1<<(23-PAGE_SHIFT));
+ rnd = get_random_long() % (1<<(23-PAGE_SHIFT));
else
- rnd = (unsigned long)get_random_int() % (1<<(30-PAGE_SHIFT));
+ rnd = get_random_long() % (1UL<<(30-PAGE_SHIFT));
return rnd << PAGE_SHIFT;
}
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 3124a20..cdf2123 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -646,6 +646,28 @@
return pgtable;
}
+void pmdp_huge_split_prepare(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp)
+{
+ VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG_ON(REGION_ID(address) != USER_REGION_ID);
+
+ /*
+ * We can't mark the pmd none here, because that would cause a race
+ * against exit_mmap. We need to keep the pmd marked TRANS HUGE while
+ * we split, but at the same time we want the rest of the ppc64 code
+ * not to insert a hash pte for this, because we will be modifying
+ * the deposited pgtable in the caller of this function. Hence
+ * clear _PAGE_USER so that the fault handling moves to a higher
+ * level function, which will serialize against the ptl.
+ * We need to flush existing hash pte entries here even though
+ * the translation is still valid, because we will withdraw the
+ * pgtable_t after this.
+ */
+ pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_USER, 0);
+}
+
+
/*
* set a new huge pmd. We should not be called for updating
* an existing pmd entry. That should go via pmd_hugepage_update.
@@ -663,10 +685,20 @@
return set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
}
+/*
+ * We use this to invalidate a pmdp entry before switching from a
+ * hugepte to a regular pmd entry.
+ */
void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp)
{
pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, 0);
+
+ /*
+ * This ensures that generic code that relies on IRQ disabling
+ * to prevent a parallel THP split works as expected.
+ */
+ kick_all_cpus_sync();
}
/*
diff --git a/arch/powerpc/platforms/powernv/eeh-powernv.c b/arch/powerpc/platforms/powernv/eeh-powernv.c
index 5f152b9..87f47e5 100644
--- a/arch/powerpc/platforms/powernv/eeh-powernv.c
+++ b/arch/powerpc/platforms/powernv/eeh-powernv.c
@@ -444,9 +444,12 @@
* PCI devices of the PE are expected to be removed prior
* to PE reset.
*/
- if (!edev->pe->bus)
+ if (!(edev->pe->state & EEH_PE_PRI_BUS)) {
edev->pe->bus = pci_find_bus(hose->global_number,
pdn->busno);
+ if (edev->pe->bus)
+ edev->pe->state |= EEH_PE_PRI_BUS;
+ }
/*
* Enable EEH explicitly so that we will do EEH check
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 573ae19..f90dc04 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -3180,6 +3180,7 @@
static const struct pci_controller_ops pnv_pci_ioda_controller_ops = {
.dma_dev_setup = pnv_pci_dma_dev_setup,
+ .dma_bus_setup = pnv_pci_dma_bus_setup,
#ifdef CONFIG_PCI_MSI
.setup_msi_irqs = pnv_setup_msi_irqs,
.teardown_msi_irqs = pnv_teardown_msi_irqs,
diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
index 2f55c86..b1ef84a 100644
--- a/arch/powerpc/platforms/powernv/pci.c
+++ b/arch/powerpc/platforms/powernv/pci.c
@@ -599,6 +599,9 @@
u64 rpn = __pa(uaddr) >> tbl->it_page_shift;
long i;
+ if (proto_tce & TCE_PCI_WRITE)
+ proto_tce |= TCE_PCI_READ;
+
for (i = 0; i < npages; i++) {
unsigned long newtce = proto_tce |
((rpn + i) << tbl->it_page_shift);
@@ -620,6 +623,9 @@
BUG_ON(*hpa & ~IOMMU_PAGE_MASK(tbl));
+ if (newtce & TCE_PCI_WRITE)
+ newtce |= TCE_PCI_READ;
+
oldtce = xchg(pnv_tce(tbl, idx), cpu_to_be64(newtce));
*hpa = be64_to_cpu(oldtce) & ~(TCE_PCI_READ | TCE_PCI_WRITE);
*direction = iommu_tce_direction(oldtce);
@@ -760,6 +766,26 @@
phb->dma_dev_setup(phb, pdev);
}
+void pnv_pci_dma_bus_setup(struct pci_bus *bus)
+{
+ struct pci_controller *hose = bus->sysdata;
+ struct pnv_phb *phb = hose->private_data;
+ struct pnv_ioda_pe *pe;
+
+ list_for_each_entry(pe, &phb->ioda.pe_list, list) {
+ if (!(pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL)))
+ continue;
+
+ if (!pe->pbus)
+ continue;
+
+ if (bus->number == ((pe->rid >> 8) & 0xFF)) {
+ pe->pbus = bus;
+ break;
+ }
+ }
+}
+
void pnv_pci_shutdown(void)
{
struct pci_controller *hose;
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 7f56313..00691a9 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -242,6 +242,7 @@
extern int pnv_eeh_phb_reset(struct pci_controller *hose, int option);
extern void pnv_pci_dma_dev_setup(struct pci_dev *pdev);
+extern void pnv_pci_dma_bus_setup(struct pci_bus *bus);
extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
diff --git a/arch/s390/include/asm/fpu/internal.h b/arch/s390/include/asm/fpu/internal.h
index ea91ddf..629c908 100644
--- a/arch/s390/include/asm/fpu/internal.h
+++ b/arch/s390/include/asm/fpu/internal.h
@@ -40,6 +40,7 @@
static inline void fpregs_store(_s390_fp_regs *fpregs, struct fpu *fpu)
{
fpregs->pad = 0;
+ fpregs->fpc = fpu->fpc;
if (MACHINE_HAS_VX)
convert_vx_to_fp((freg_t *)&fpregs->fprs, fpu->vxrs);
else
@@ -49,6 +50,7 @@
static inline void fpregs_load(_s390_fp_regs *fpregs, struct fpu *fpu)
{
+ fpu->fpc = fpregs->fpc;
if (MACHINE_HAS_VX)
convert_fp_to_vx(fpu->vxrs, (freg_t *)&fpregs->fprs);
else
diff --git a/arch/s390/include/asm/livepatch.h b/arch/s390/include/asm/livepatch.h
index 7aa7991..a52b6cc 100644
--- a/arch/s390/include/asm/livepatch.h
+++ b/arch/s390/include/asm/livepatch.h
@@ -37,7 +37,7 @@
regs->psw.addr = ip;
}
#else
-#error Live patching support is disabled; check CONFIG_LIVEPATCH
+#error Include linux/livepatch.h, not asm/livepatch.h
#endif
#endif
diff --git a/arch/s390/kernel/compat_signal.c b/arch/s390/kernel/compat_signal.c
index 66c9441..4af6037 100644
--- a/arch/s390/kernel/compat_signal.c
+++ b/arch/s390/kernel/compat_signal.c
@@ -271,7 +271,7 @@
/* Restore high gprs from signal stack */
if (__copy_from_user(&gprs_high, &sregs_ext->gprs_high,
- sizeof(&sregs_ext->gprs_high)))
+ sizeof(sregs_ext->gprs_high)))
return -EFAULT;
for (i = 0; i < NUM_GPRS; i++)
*(__u32 *)&regs->gprs[i] = gprs_high[i];
diff --git a/arch/s390/kernel/perf_event.c b/arch/s390/kernel/perf_event.c
index cfcba2d..0943b11 100644
--- a/arch/s390/kernel/perf_event.c
+++ b/arch/s390/kernel/perf_event.c
@@ -260,12 +260,13 @@
void perf_callchain_kernel(struct perf_callchain_entry *entry,
struct pt_regs *regs)
{
- unsigned long head;
+ unsigned long head, frame_size;
struct stack_frame *head_sf;
if (user_mode(regs))
return;
+ frame_size = STACK_FRAME_OVERHEAD + sizeof(struct pt_regs);
head = regs->gprs[15];
head_sf = (struct stack_frame *) head;
@@ -273,8 +274,9 @@
return;
head = head_sf->back_chain;
- head = __store_trace(entry, head, S390_lowcore.async_stack - ASYNC_SIZE,
- S390_lowcore.async_stack);
+ head = __store_trace(entry, head,
+ S390_lowcore.async_stack + frame_size - ASYNC_SIZE,
+ S390_lowcore.async_stack + frame_size);
__store_trace(entry, head, S390_lowcore.thread_info,
S390_lowcore.thread_info + THREAD_SIZE);
diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
index 5acba3c..8f64ebd 100644
--- a/arch/s390/kernel/stacktrace.c
+++ b/arch/s390/kernel/stacktrace.c
@@ -59,26 +59,32 @@
}
}
-void save_stack_trace(struct stack_trace *trace)
+static void __save_stack_trace(struct stack_trace *trace, unsigned long sp)
{
- register unsigned long sp asm ("15");
- unsigned long orig_sp, new_sp;
+ unsigned long new_sp, frame_size;
- orig_sp = sp;
- new_sp = save_context_stack(trace, orig_sp,
- S390_lowcore.panic_stack - PAGE_SIZE,
- S390_lowcore.panic_stack, 1);
- if (new_sp != orig_sp)
- return;
+ frame_size = STACK_FRAME_OVERHEAD + sizeof(struct pt_regs);
+ new_sp = save_context_stack(trace, sp,
+ S390_lowcore.panic_stack + frame_size - PAGE_SIZE,
+ S390_lowcore.panic_stack + frame_size, 1);
new_sp = save_context_stack(trace, new_sp,
- S390_lowcore.async_stack - ASYNC_SIZE,
- S390_lowcore.async_stack, 1);
- if (new_sp != orig_sp)
- return;
+ S390_lowcore.async_stack + frame_size - ASYNC_SIZE,
+ S390_lowcore.async_stack + frame_size, 1);
save_context_stack(trace, new_sp,
S390_lowcore.thread_info,
S390_lowcore.thread_info + THREAD_SIZE, 1);
}
+
+void save_stack_trace(struct stack_trace *trace)
+{
+ register unsigned long r15 asm ("15");
+ unsigned long sp;
+
+ sp = r15;
+ __save_stack_trace(trace, sp);
+ if (trace->nr_entries < trace->max_entries)
+ trace->entries[trace->nr_entries++] = ULONG_MAX;
+}
EXPORT_SYMBOL_GPL(save_stack_trace);
void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
@@ -86,6 +92,10 @@
unsigned long sp, low, high;
sp = tsk->thread.ksp;
+ if (tsk == current) {
+ /* Get current stack pointer. */
+ asm volatile("la %0,0(15)" : "=a" (sp));
+ }
low = (unsigned long) task_stack_page(tsk);
high = (unsigned long) task_pt_regs(tsk);
save_context_stack(trace, sp, low, high, 0);
@@ -93,3 +103,14 @@
trace->entries[trace->nr_entries++] = ULONG_MAX;
}
EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
+
+void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
+{
+ unsigned long sp;
+
+ sp = kernel_stack_pointer(regs);
+ __save_stack_trace(trace, sp);
+ if (trace->nr_entries < trace->max_entries)
+ trace->entries[trace->nr_entries++] = ULONG_MAX;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_regs);
diff --git a/arch/s390/kernel/trace.c b/arch/s390/kernel/trace.c
index 21a5df9..dde7654 100644
--- a/arch/s390/kernel/trace.c
+++ b/arch/s390/kernel/trace.c
@@ -18,6 +18,9 @@
unsigned long flags;
unsigned int *depth;
+ /* Avoid lockdep recursion. */
+ if (IS_ENABLED(CONFIG_LOCKDEP))
+ return;
local_irq_save(flags);
depth = this_cpu_ptr(&diagnose_trace_depth);
if (*depth == 0) {
diff --git a/arch/s390/mm/maccess.c b/arch/s390/mm/maccess.c
index fec59c0..792f9c6 100644
--- a/arch/s390/mm/maccess.c
+++ b/arch/s390/mm/maccess.c
@@ -93,15 +93,19 @@
*/
int memcpy_real(void *dest, void *src, size_t count)
{
+ int irqs_disabled, rc;
unsigned long flags;
- int rc;
if (!count)
return 0;
- local_irq_save(flags);
- __arch_local_irq_stnsm(0xfbUL);
+ flags = __arch_local_irq_stnsm(0xf8UL);
+ irqs_disabled = arch_irqs_disabled_flags(flags);
+ if (!irqs_disabled)
+ trace_hardirqs_off();
rc = __memcpy_real(dest, src, count);
- local_irq_restore(flags);
+ if (!irqs_disabled)
+ trace_hardirqs_on();
+ __arch_local_irq_ssm(flags);
return rc;
}
diff --git a/arch/s390/oprofile/backtrace.c b/arch/s390/oprofile/backtrace.c
index fe0bfe3..1884e17 100644
--- a/arch/s390/oprofile/backtrace.c
+++ b/arch/s390/oprofile/backtrace.c
@@ -54,12 +54,13 @@
void s390_backtrace(struct pt_regs * const regs, unsigned int depth)
{
- unsigned long head;
+ unsigned long head, frame_size;
struct stack_frame* head_sf;
if (user_mode(regs))
return;
+ frame_size = STACK_FRAME_OVERHEAD + sizeof(struct pt_regs);
head = regs->gprs[15];
head_sf = (struct stack_frame*)head;
@@ -68,8 +69,9 @@
head = head_sf->back_chain;
- head = __show_trace(&depth, head, S390_lowcore.async_stack - ASYNC_SIZE,
- S390_lowcore.async_stack);
+ head = __show_trace(&depth, head,
+ S390_lowcore.async_stack + frame_size - ASYNC_SIZE,
+ S390_lowcore.async_stack + frame_size);
__show_trace(&depth, head, S390_lowcore.thread_info,
S390_lowcore.thread_info + THREAD_SIZE);
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index c690c8e..b489e97 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -264,7 +264,7 @@
unsigned long rnd = 0UL;
if (current->flags & PF_RANDOMIZE) {
- unsigned long val = get_random_int();
+ unsigned long val = get_random_long();
if (test_thread_flag(TIF_32BIT))
rnd = (val % (1UL << (23UL-PAGE_SHIFT)));
else
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9af2e63..c46662f 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -475,6 +475,7 @@
depends on X86_64
depends on X86_EXTENDED_PLATFORM
depends on NUMA
+ depends on EFI
depends on X86_X2APIC
depends on PCI
---help---
@@ -777,8 +778,8 @@
HPET is the next generation timer replacing legacy 8254s.
The HPET provides a stable time base on SMP
systems, unlike the TSC, but it is more expensive to access,
- as it is off-chip. You can find the HPET spec at
- <http://www.intel.com/hardwaredesign/hpetspec_1.pdf>.
+ as it is off-chip. The interface used is documented
+ in the HPET spec, revision 1.
You can safely choose Y here. However, HPET will only be
activated if the platform and the BIOS support this feature.
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 77d8c51..bb3e376 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -294,6 +294,7 @@
pushl $__USER_DS /* pt_regs->ss */
pushl %ebp /* pt_regs->sp (stashed in bp) */
pushfl /* pt_regs->flags (except IF = 0) */
+ ASM_CLAC /* Clear AC after saving FLAGS */
orl $X86_EFLAGS_IF, (%esp) /* Fix IF */
pushl $__USER_CS /* pt_regs->cs */
pushl $0 /* pt_regs->ip = 0 (placeholder) */
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index ff1c6d6..3c990ee 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -261,6 +261,7 @@
* Interrupts are off on entry.
*/
PARAVIRT_ADJUST_EXCEPTION_FRAME
+ ASM_CLAC /* Do this early to minimize exposure */
SWAPGS
/*
diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
index 19c099a..e795f52 100644
--- a/arch/x86/include/asm/livepatch.h
+++ b/arch/x86/include/asm/livepatch.h
@@ -41,7 +41,7 @@
regs->ip = ip;
}
#else
-#error Live patching support is disabled; check CONFIG_LIVEPATCH
+#error Include linux/livepatch.h, not asm/livepatch.h
#endif
#endif /* _ASM_X86_LIVEPATCH_H */
diff --git a/arch/x86/include/asm/pci_x86.h b/arch/x86/include/asm/pci_x86.h
index 46873fb..d08eacd2 100644
--- a/arch/x86/include/asm/pci_x86.h
+++ b/arch/x86/include/asm/pci_x86.h
@@ -93,6 +93,8 @@
extern int (*pcibios_enable_irq)(struct pci_dev *dev);
extern void (*pcibios_disable_irq)(struct pci_dev *dev);
+extern bool mp_should_keep_irq(struct device *dev);
+
struct pci_raw_ops {
int (*read)(unsigned int domain, unsigned int bus, unsigned int devfn,
int reg, int len, u32 *val);
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 2d5a50c..20c11d1 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -766,7 +766,7 @@
* Return saved PC of a blocked thread.
* What is this good for? It will always be the scheduler or ret_from_fork.
*/
-#define thread_saved_pc(t) (*(unsigned long *)((t)->thread.sp - 8))
+#define thread_saved_pc(t) READ_ONCE_NOCHECK(*(unsigned long *)((t)->thread.sp - 8))
#define task_pt_regs(tsk) ((struct pt_regs *)(tsk)->thread.sp0 - 1)
extern unsigned long KSTK_ESP(struct task_struct *task);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index f5dcb52..3fe0eac 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -48,20 +48,28 @@
switch (n) {
case 1:
+ __uaccess_begin();
__put_user_size(*(u8 *)from, (u8 __user *)to,
1, ret, 1);
+ __uaccess_end();
return ret;
case 2:
+ __uaccess_begin();
__put_user_size(*(u16 *)from, (u16 __user *)to,
2, ret, 2);
+ __uaccess_end();
return ret;
case 4:
+ __uaccess_begin();
__put_user_size(*(u32 *)from, (u32 __user *)to,
4, ret, 4);
+ __uaccess_end();
return ret;
case 8:
+ __uaccess_begin();
__put_user_size(*(u64 *)from, (u64 __user *)to,
8, ret, 8);
+ __uaccess_end();
return ret;
}
}
@@ -103,13 +111,19 @@
switch (n) {
case 1:
+ __uaccess_begin();
__get_user_size(*(u8 *)to, from, 1, ret, 1);
+ __uaccess_end();
return ret;
case 2:
+ __uaccess_begin();
__get_user_size(*(u16 *)to, from, 2, ret, 2);
+ __uaccess_end();
return ret;
case 4:
+ __uaccess_begin();
__get_user_size(*(u32 *)to, from, 4, ret, 4);
+ __uaccess_end();
return ret;
}
}
@@ -148,13 +162,19 @@
switch (n) {
case 1:
+ __uaccess_begin();
__get_user_size(*(u8 *)to, from, 1, ret, 1);
+ __uaccess_end();
return ret;
case 2:
+ __uaccess_begin();
__get_user_size(*(u16 *)to, from, 2, ret, 2);
+ __uaccess_end();
return ret;
case 4:
+ __uaccess_begin();
__get_user_size(*(u32 *)to, from, 4, ret, 4);
+ __uaccess_end();
return ret;
}
}
@@ -170,13 +190,19 @@
switch (n) {
case 1:
+ __uaccess_begin();
__get_user_size(*(u8 *)to, from, 1, ret, 1);
+ __uaccess_end();
return ret;
case 2:
+ __uaccess_begin();
__get_user_size(*(u16 *)to, from, 2, ret, 2);
+ __uaccess_end();
return ret;
case 4:
+ __uaccess_begin();
__get_user_size(*(u32 *)to, from, 4, ret, 4);
+ __uaccess_end();
return ret;
}
}
diff --git a/arch/x86/include/asm/xen/pci.h b/arch/x86/include/asm/xen/pci.h
index 968d57d..f320ee3 100644
--- a/arch/x86/include/asm/xen/pci.h
+++ b/arch/x86/include/asm/xen/pci.h
@@ -57,7 +57,7 @@
{
if (xen_pci_frontend && xen_pci_frontend->enable_msi)
return xen_pci_frontend->enable_msi(dev, vectors);
- return -ENODEV;
+ return -ENOSYS;
}
static inline void xen_pci_frontend_disable_msi(struct pci_dev *dev)
{
@@ -69,7 +69,7 @@
{
if (xen_pci_frontend && xen_pci_frontend->enable_msix)
return xen_pci_frontend->enable_msix(dev, vectors, nvec);
- return -ENODEV;
+ return -ENOSYS;
}
static inline void xen_pci_frontend_disable_msix(struct pci_dev *dev)
{
diff --git a/arch/x86/kernel/cpu/perf_event_amd_uncore.c b/arch/x86/kernel/cpu/perf_event_amd_uncore.c
index 4974274..8836fc9 100644
--- a/arch/x86/kernel/cpu/perf_event_amd_uncore.c
+++ b/arch/x86/kernel/cpu/perf_event_amd_uncore.c
@@ -323,6 +323,8 @@
return 0;
fail:
+ if (amd_uncore_nb)
+ *per_cpu_ptr(amd_uncore_nb, cpu) = NULL;
kfree(uncore_nb);
return -ENOMEM;
}
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 1505587..b9b09fe 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -650,10 +650,10 @@
u16 sel;
la = seg_base(ctxt, addr.seg) + addr.ea;
- *linear = la;
*max_size = 0;
switch (mode) {
case X86EMUL_MODE_PROT64:
+ *linear = la;
if (is_noncanonical_address(la))
goto bad;
@@ -662,6 +662,7 @@
goto bad;
break;
default:
+ *linear = la = (u32)la;
usable = ctxt->ops->get_segment(ctxt, &sel, &desc, NULL,
addr.seg);
if (!usable)
@@ -689,7 +690,6 @@
if (size > *max_size)
goto bad;
}
- la &= (u32)-1;
break;
}
if (insn_aligned(ctxt, size) && ((la & (size - 1)) != 0))
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6c9fed9..2ce4f05 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -249,7 +249,7 @@
return ret;
kvm_vcpu_mark_page_dirty(vcpu, table_gfn);
- walker->ptes[level] = pte;
+ walker->ptes[level - 1] = pte;
}
return 0;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4244c2b..f4891f2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2752,6 +2752,7 @@
}
kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
+ vcpu->arch.switch_db_regs |= KVM_DEBUGREG_RELOAD;
}
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
index 982ce34..27f89c7 100644
--- a/arch/x86/lib/copy_user_64.S
+++ b/arch/x86/lib/copy_user_64.S
@@ -232,17 +232,31 @@
/*
* copy_user_nocache - Uncached memory copy with exception handling
- * This will force destination/source out of cache for more performance.
+ * This will force destination out of cache for more performance.
+ *
+ * Note: A cached memory copy is used when the destination or the size
+ * is not naturally aligned. That is:
+ * - 8-byte alignment is required when the size is 8 bytes or larger.
+ * - 4-byte alignment is required when the size is 4 bytes.
*/
ENTRY(__copy_user_nocache)
ASM_STAC
+
+ /* If size is less than 8 bytes, go to 4-byte copy */
cmpl $8,%edx
- jb 20f /* less then 8 bytes, go to byte copy loop */
+ jb .L_4b_nocache_copy_entry
+
+ /* If destination is not 8-byte aligned, "cache" copy to align it */
ALIGN_DESTINATION
+
+ /* Set 4x8-byte copy count and remainder */
movl %edx,%ecx
andl $63,%edx
shrl $6,%ecx
- jz 17f
+ jz .L_8b_nocache_copy_entry /* jump if count is 0 */
+
+ /* Perform 4x8-byte nocache loop-copy */
+.L_4x8b_nocache_copy_loop:
1: movq (%rsi),%r8
2: movq 1*8(%rsi),%r9
3: movq 2*8(%rsi),%r10
@@ -262,60 +276,106 @@
leaq 64(%rsi),%rsi
leaq 64(%rdi),%rdi
decl %ecx
- jnz 1b
-17: movl %edx,%ecx
+ jnz .L_4x8b_nocache_copy_loop
+
+ /* Set 8-byte copy count and remainder */
+.L_8b_nocache_copy_entry:
+ movl %edx,%ecx
andl $7,%edx
shrl $3,%ecx
- jz 20f
-18: movq (%rsi),%r8
-19: movnti %r8,(%rdi)
+ jz .L_4b_nocache_copy_entry /* jump if count is 0 */
+
+ /* Perform 8-byte nocache loop-copy */
+.L_8b_nocache_copy_loop:
+20: movq (%rsi),%r8
+21: movnti %r8,(%rdi)
leaq 8(%rsi),%rsi
leaq 8(%rdi),%rdi
decl %ecx
- jnz 18b
-20: andl %edx,%edx
- jz 23f
+ jnz .L_8b_nocache_copy_loop
+
+ /* If no bytes left, we're done */
+.L_4b_nocache_copy_entry:
+ andl %edx,%edx
+ jz .L_finish_copy
+
+ /* If destination is not 4-byte aligned, go to byte copy: */
+ movl %edi,%ecx
+ andl $3,%ecx
+ jnz .L_1b_cache_copy_entry
+
+ /* Set 4-byte copy count (1 or 0) and remainder */
movl %edx,%ecx
-21: movb (%rsi),%al
-22: movb %al,(%rdi)
+ andl $3,%edx
+ shrl $2,%ecx
+ jz .L_1b_cache_copy_entry /* jump if count is 0 */
+
+ /* Perform 4-byte nocache copy: */
+30: movl (%rsi),%r8d
+31: movnti %r8d,(%rdi)
+ leaq 4(%rsi),%rsi
+ leaq 4(%rdi),%rdi
+
+ /* If no bytes left, we're done: */
+ andl %edx,%edx
+ jz .L_finish_copy
+
+ /* Perform byte "cache" loop-copy for the remainder */
+.L_1b_cache_copy_entry:
+ movl %edx,%ecx
+.L_1b_cache_copy_loop:
+40: movb (%rsi),%al
+41: movb %al,(%rdi)
incq %rsi
incq %rdi
decl %ecx
- jnz 21b
-23: xorl %eax,%eax
+ jnz .L_1b_cache_copy_loop
+
+ /* Finished copying; fence the prior stores */
+.L_finish_copy:
+ xorl %eax,%eax
ASM_CLAC
sfence
ret
.section .fixup,"ax"
-30: shll $6,%ecx
+.L_fixup_4x8b_copy:
+ shll $6,%ecx
addl %ecx,%edx
- jmp 60f
-40: lea (%rdx,%rcx,8),%rdx
- jmp 60f
-50: movl %ecx,%edx
-60: sfence
+ jmp .L_fixup_handle_tail
+.L_fixup_8b_copy:
+ lea (%rdx,%rcx,8),%rdx
+ jmp .L_fixup_handle_tail
+.L_fixup_4b_copy:
+ lea (%rdx,%rcx,4),%rdx
+ jmp .L_fixup_handle_tail
+.L_fixup_1b_copy:
+ movl %ecx,%edx
+.L_fixup_handle_tail:
+ sfence
jmp copy_user_handle_tail
.previous
- _ASM_EXTABLE(1b,30b)
- _ASM_EXTABLE(2b,30b)
- _ASM_EXTABLE(3b,30b)
- _ASM_EXTABLE(4b,30b)
- _ASM_EXTABLE(5b,30b)
- _ASM_EXTABLE(6b,30b)
- _ASM_EXTABLE(7b,30b)
- _ASM_EXTABLE(8b,30b)
- _ASM_EXTABLE(9b,30b)
- _ASM_EXTABLE(10b,30b)
- _ASM_EXTABLE(11b,30b)
- _ASM_EXTABLE(12b,30b)
- _ASM_EXTABLE(13b,30b)
- _ASM_EXTABLE(14b,30b)
- _ASM_EXTABLE(15b,30b)
- _ASM_EXTABLE(16b,30b)
- _ASM_EXTABLE(18b,40b)
- _ASM_EXTABLE(19b,40b)
- _ASM_EXTABLE(21b,50b)
- _ASM_EXTABLE(22b,50b)
+ _ASM_EXTABLE(1b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(2b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(3b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(4b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(5b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(6b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(7b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(8b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(9b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(10b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(11b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(12b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(13b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(14b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(15b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(16b,.L_fixup_4x8b_copy)
+ _ASM_EXTABLE(20b,.L_fixup_8b_copy)
+ _ASM_EXTABLE(21b,.L_fixup_8b_copy)
+ _ASM_EXTABLE(30b,.L_fixup_4b_copy)
+ _ASM_EXTABLE(31b,.L_fixup_4b_copy)
+ _ASM_EXTABLE(40b,.L_fixup_1b_copy)
+ _ASM_EXTABLE(41b,.L_fixup_1b_copy)
ENDPROC(__copy_user_nocache)
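As a rough guide to the restructured routine above, its dispatch policy can be modelled in plain C. The sketch below is a simplified standalone illustration (ordinary userspace C, not the kernel code): nocache_store_8()/nocache_store_4() are hypothetical stand-ins for the movnti non-temporal stores, and the head alignment done by ALIGN_DESTINATION is folded into the loop conditions.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-ins for the movnti non-temporal stores. */
static void nocache_store_8(void *dst, uint64_t v) { memcpy(dst, &v, 8); }
static void nocache_store_4(void *dst, uint32_t v) { memcpy(dst, &v, 4); }

static void copy_nocache_sketch(char *dst, const char *src, size_t n)
{
    /* 8-byte nocache path: taken while at least 8 bytes remain and the
     * destination is 8-byte aligned (.L_4x8b/.L_8b_nocache_copy_loop). */
    while (n >= 8 && ((uintptr_t)dst & 7) == 0) {
        uint64_t v;
        memcpy(&v, src, 8);
        nocache_store_8(dst, v);
        dst += 8; src += 8; n -= 8;
    }

    /* 4-byte nocache path: at most one store, and only with a 4-byte
     * aligned destination (the new 30:/31: instructions). */
    if (n >= 4 && ((uintptr_t)dst & 3) == 0) {
        uint32_t v;
        memcpy(&v, src, 4);
        nocache_store_4(dst, v);
        dst += 4; src += 4; n -= 4;
    }

    /* Any remainder goes through a plain cached byte copy
     * (.L_1b_cache_copy_loop). */
    while (n--)
        *dst++ = *src++;
}

int main(void)
{
    char src[27] = "abcdefghijklmnopqrstuvwxyz";
    char dst[27] = { 0 };

    copy_nocache_sketch(dst, src, sizeof(src));
    return puts(dst) < 0;
}

This is also why the updated header comment says a cached copy is used whenever the natural-alignment requirements are not met.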
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index eef44d9..e830c71 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -287,6 +287,9 @@
if (!pmd_k)
return -1;
+ if (pmd_huge(*pmd_k))
+ return 0;
+
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_present(*pte_k))
return -1;
@@ -360,8 +363,6 @@
* 64-bit:
*
* Handle a fault on the vmalloc area
- *
- * This assumes no large pages in there.
*/
static noinline int vmalloc_fault(unsigned long address)
{
@@ -403,17 +404,23 @@
if (pud_none(*pud_ref))
return -1;
- if (pud_none(*pud) || pud_page_vaddr(*pud) != pud_page_vaddr(*pud_ref))
+ if (pud_none(*pud) || pud_pfn(*pud) != pud_pfn(*pud_ref))
BUG();
+ if (pud_huge(*pud))
+ return 0;
+
pmd = pmd_offset(pud, address);
pmd_ref = pmd_offset(pud_ref, address);
if (pmd_none(*pmd_ref))
return -1;
- if (pmd_none(*pmd) || pmd_page(*pmd) != pmd_page(*pmd_ref))
+ if (pmd_none(*pmd) || pmd_pfn(*pmd) != pmd_pfn(*pmd_ref))
BUG();
+ if (pmd_huge(*pmd))
+ return 0;
+
pte_ref = pte_offset_kernel(pmd_ref, address);
if (!pte_present(*pte_ref))
return -1;
diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 6d5eb59..d8a798d 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -102,7 +102,6 @@
return 0;
}
- page = pte_page(pte);
if (pte_devmap(pte)) {
pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
if (unlikely(!pgmap)) {
@@ -115,6 +114,7 @@
return 0;
}
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+ page = pte_page(pte);
get_page(page);
put_dev_pagemap(pgmap);
SetPageReferenced(page);
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index 96bd1e2..72bb52f 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -71,12 +71,12 @@
if (mmap_is_ia32())
#ifdef CONFIG_COMPAT
- rnd = (unsigned long)get_random_int() & ((1 << mmap_rnd_compat_bits) - 1);
+ rnd = get_random_long() & ((1UL << mmap_rnd_compat_bits) - 1);
#else
- rnd = (unsigned long)get_random_int() & ((1 << mmap_rnd_bits) - 1);
+ rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);
#endif
else
- rnd = (unsigned long)get_random_int() & ((1 << mmap_rnd_bits) - 1);
+ rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);
return rnd << PAGE_SHIFT;
}
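The 1 -> 1UL widening here matters as much as the get_random_long() switch: mmap_rnd_bits can be configured as high as 32 on 64-bit kernels, and shifting a 32-bit int by 32 is undefined in C, so the old mask could come out wrong. A minimal standalone illustration, assuming an LP64 target where unsigned long is 64 bits:

#include <stdio.h>

int main(void)
{
    unsigned int bits = 32;          /* e.g. a maximal mmap_rnd_bits */

    /* The old expression, (1 << bits) - 1, shifts a 32-bit int by 32,
     * which is undefined behaviour; widening the constant first yields
     * the intended mask with the low 32 bits set. */
    unsigned long mask = (1UL << bits) - 1;

    printf("mask = %#lx\n", mask);   /* prints mask = 0xffffffff */
    return 0;
}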
diff --git a/arch/x86/mm/mpx.c b/arch/x86/mm/mpx.c
index b2fd67d..ef05755 100644
--- a/arch/x86/mm/mpx.c
+++ b/arch/x86/mm/mpx.c
@@ -123,7 +123,7 @@
break;
}
- if (regno > nr_registers) {
+ if (regno >= nr_registers) {
WARN_ONCE(1, "decoded an instruction with an invalid register");
return -EINVAL;
}
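The off-by-one above is the classic boundary case: a table of nr_registers entries has valid indices 0 through nr_registers - 1, so the boundary value itself must be rejected. A standalone sketch with made-up numbers:

#include <stdio.h>

int main(void)
{
    const int nr_registers = 16;   /* valid indices: 0..15 */
    int regno = 16;                /* a bogus decoded register number */

    if (regno > nr_registers)      /* old check: 16 > 16 is false... */
        puts("old check rejects");
    else
        puts("old check lets regno == nr_registers through");

    if (regno >= nr_registers)     /* fixed check: 16 >= 16 is true */
        puts("fixed check rejects it");

    return 0;
}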
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index c3b3f65..d04f809 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -469,7 +469,7 @@
{
int i, nid;
nodemask_t numa_kernel_nodes = NODE_MASK_NONE;
- unsigned long start, end;
+ phys_addr_t start, end;
struct memblock_region *r;
/*
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 2440814..9cf96d8 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -419,24 +419,30 @@
phys_addr_t slow_virt_to_phys(void *__virt_addr)
{
unsigned long virt_addr = (unsigned long)__virt_addr;
- unsigned long phys_addr, offset;
+ phys_addr_t phys_addr;
+ unsigned long offset;
enum pg_level level;
pte_t *pte;
pte = lookup_address(virt_addr, &level);
BUG_ON(!pte);
+ /*
+ * pXX_pfn() returns unsigned long, which must be cast to phys_addr_t
+ * before being left-shifted by PAGE_SHIFT bits -- this trick makes
+ * the 32-bit PAE kernel work correctly.
+ */
switch (level) {
case PG_LEVEL_1G:
- phys_addr = pud_pfn(*(pud_t *)pte) << PAGE_SHIFT;
+ phys_addr = (phys_addr_t)pud_pfn(*(pud_t *)pte) << PAGE_SHIFT;
offset = virt_addr & ~PUD_PAGE_MASK;
break;
case PG_LEVEL_2M:
- phys_addr = pmd_pfn(*(pmd_t *)pte) << PAGE_SHIFT;
+ phys_addr = (phys_addr_t)pmd_pfn(*(pmd_t *)pte) << PAGE_SHIFT;
offset = virt_addr & ~PMD_PAGE_MASK;
break;
default:
- phys_addr = pte_pfn(*pte) << PAGE_SHIFT;
+ phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
offset = virt_addr & ~PAGE_MASK;
}
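The casts are easiest to justify with concrete numbers: on a 32-bit PAE kernel, page frame numbers can refer to physical memory at or above 4GB, but the pfn is held in a 32-bit unsigned long, so shifting it left by PAGE_SHIFT wraps before the 64-bit phys_addr_t assignment ever sees it. A standalone illustration with a made-up pfn:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t pfn = 0x100000;                     /* page frame at physical 4GB */
    unsigned int shift = 12;                     /* PAGE_SHIFT */

    uint32_t wrapped = pfn << shift;             /* 32-bit shift wraps to 0 */
    uint64_t widened = (uint64_t)pfn << shift;   /* cast first, as above */

    printf("wrapped: %#x\n", (unsigned int)wrapped);          /* prints 0 */
    printf("widened: %#llx\n", (unsigned long long)widened);  /* 0x100000000 */
    return 0;
}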
diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index 2879efc..d34b511 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -711,28 +711,22 @@
return 0;
}
-int pcibios_alloc_irq(struct pci_dev *dev)
-{
- /*
- * If the PCI device was already claimed by core code and has
- * MSI enabled, probing of the pcibios IRQ will overwrite
- * dev->irq. So bail out if MSI is already enabled.
- */
- if (pci_dev_msi_enabled(dev))
- return -EBUSY;
-
- return pcibios_enable_irq(dev);
-}
-
-void pcibios_free_irq(struct pci_dev *dev)
-{
- if (pcibios_disable_irq)
- pcibios_disable_irq(dev);
-}
-
int pcibios_enable_device(struct pci_dev *dev, int mask)
{
- return pci_enable_resources(dev, mask);
+ int err;
+
+ if ((err = pci_enable_resources(dev, mask)) < 0)
+ return err;
+
+ if (!pci_dev_msi_enabled(dev))
+ return pcibios_enable_irq(dev);
+ return 0;
+}
+
+void pcibios_disable_device (struct pci_dev *dev)
+{
+ if (!pci_dev_msi_enabled(dev) && pcibios_disable_irq)
+ pcibios_disable_irq(dev);
}
int pci_ext_cfg_avail(void)
diff --git a/arch/x86/pci/intel_mid_pci.c b/arch/x86/pci/intel_mid_pci.c
index 0d24e7c..8b93e63 100644
--- a/arch/x86/pci/intel_mid_pci.c
+++ b/arch/x86/pci/intel_mid_pci.c
@@ -215,7 +215,7 @@
int polarity;
int ret;
- if (pci_has_managed_irq(dev))
+ if (dev->irq_managed && dev->irq > 0)
return 0;
switch (intel_mid_identify_cpu()) {
@@ -256,13 +256,10 @@
static void intel_mid_pci_irq_disable(struct pci_dev *dev)
{
- if (pci_has_managed_irq(dev)) {
+ if (!mp_should_keep_irq(&dev->dev) && dev->irq_managed &&
+ dev->irq > 0) {
mp_unmap_irq(dev->irq);
dev->irq_managed = 0;
- /*
- * Don't reset dev->irq here, otherwise
- * intel_mid_pci_irq_enable() will fail on next call.
- */
}
}
diff --git a/arch/x86/pci/irq.c b/arch/x86/pci/irq.c
index 32e7034..9bd1154 100644
--- a/arch/x86/pci/irq.c
+++ b/arch/x86/pci/irq.c
@@ -1202,7 +1202,7 @@
struct pci_dev *temp_dev;
int irq;
- if (pci_has_managed_irq(dev))
+ if (dev->irq_managed && dev->irq > 0)
return 0;
irq = IO_APIC_get_PCI_irq_vector(dev->bus->number,
@@ -1230,7 +1230,8 @@
}
dev = temp_dev;
if (irq >= 0) {
- pci_set_managed_irq(dev, irq);
+ dev->irq_managed = 1;
+ dev->irq = irq;
dev_info(&dev->dev, "PCI->APIC IRQ transform: "
"INT %c -> IRQ %d\n", 'A' + pin - 1, irq);
return 0;
@@ -1256,10 +1257,24 @@
return 0;
}
+bool mp_should_keep_irq(struct device *dev)
+{
+ if (dev->power.is_prepared)
+ return true;
+#ifdef CONFIG_PM
+ if (dev->power.runtime_status == RPM_SUSPENDING)
+ return true;
+#endif
+
+ return false;
+}
+
static void pirq_disable_irq(struct pci_dev *dev)
{
- if (io_apic_assign_pci_irqs && pci_has_managed_irq(dev)) {
+ if (io_apic_assign_pci_irqs && !mp_should_keep_irq(&dev->dev) &&
+ dev->irq_managed && dev->irq) {
mp_unmap_irq(dev->irq);
- pci_reset_managed_irq(dev);
+ dev->irq = 0;
+ dev->irq_managed = 0;
}
}
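mp_should_keep_irq() exists so the disable path leaves the IOAPIC pin programmed while the device is heading into, or already in, suspend: resume expects to find it intact. A minimal model of the guard, with simplified stand-in types:

#include <stdbool.h>
#include <stdio.h>

enum rpm_status { RPM_ACTIVE, RPM_SUSPENDING, RPM_SUSPENDED };

struct dev_pm {
	bool is_prepared;		/* system suspend has started */
	enum rpm_status runtime_status;
};

static bool should_keep_irq(const struct dev_pm *pm)
{
	return pm->is_prepared || pm->runtime_status == RPM_SUSPENDING;
}

int main(void)
{
	struct dev_pm pm = { .is_prepared = false,
			     .runtime_status = RPM_SUSPENDING };

	printf("keep irq: %s\n", should_keep_irq(&pm) ? "yes" : "no");
	return 0;
}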
diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index ff31ab4..beac4df 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -196,7 +196,10 @@
return 0;
error:
- dev_err(&dev->dev, "Xen PCI frontend has not registered MSI/MSI-X support!\n");
+ if (ret == -ENOSYS)
+ dev_err(&dev->dev, "Xen PCI frontend has not registered MSI/MSI-X support!\n");
+ else if (ret)
+ dev_err(&dev->dev, "Xen PCI frontend error: %d!\n", ret);
free:
kfree(v);
return ret;
diff --git a/arch/x86/platform/intel-quark/imr.c b/arch/x86/platform/intel-quark/imr.c
index c61b6c3..bfadcd0 100644
--- a/arch/x86/platform/intel-quark/imr.c
+++ b/arch/x86/platform/intel-quark/imr.c
@@ -592,14 +592,14 @@
end = (unsigned long)__end_rodata - 1;
/*
- * Setup a locked IMR around the physical extent of the kernel
+ * Setup an unlocked IMR around the physical extent of the kernel
* from the beginning of the .text section to the end of the
* .rodata section as one physically contiguous block.
*
* We don't round up @size since it is already PAGE_SIZE aligned.
* See vmlinux.lds.S for details.
*/
- ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, true);
+ ret = imr_add_range(base, size, IMR_CPU, IMR_CPU, false);
if (ret < 0) {
pr_err("unable to setup IMR for kernel: %zu KiB (%lx - %lx)\n",
size / 1024, start, end);
diff --git a/block/Kconfig b/block/Kconfig
index 161491d..0363cd7 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -88,6 +88,19 @@
T10/SCSI Data Integrity Field or the T13/ATA External Path
Protection. If in doubt, say N.
+config BLK_DEV_DAX
+ bool "Block device DAX support"
+ depends on FS_DAX
+ depends on BROKEN
+ help
+ When DAX support is available (CONFIG_FS_DAX) raw block
+ devices can also support direct userspace access to the
+ storage capacity via MMAP(2) similar to a file on a
+ DAX-enabled filesystem. However, the DAX I/O-path disables
+ some standard I/O-statistics, and the MMAP(2) path has some
+ operational differences due to bypassing the page
+ cache. If in doubt, say N.
+
config BLK_DEV_THROTTLING
bool "Block layer bio throttling support"
depends on BLK_CGROUP=y
diff --git a/block/bio.c b/block/bio.c
index dbabd48..cf75915 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -874,7 +874,7 @@
bio->bi_private = &ret;
bio->bi_end_io = submit_bio_wait_endio;
submit_bio(rw, bio);
- wait_for_completion(&ret.event);
+ wait_for_completion_io(&ret.event);
return ret.error;
}
@@ -1090,9 +1090,12 @@
if (!bio_flagged(bio, BIO_NULL_MAPPED)) {
/*
* if we're in a workqueue, the request is orphaned, so
- * don't copy into a random user address space, just free.
+ * don't copy into a random user address space, just free
+ * and return -EINTR so user space doesn't expect any data.
*/
- if (current->mm && bio_data_dir(bio) == READ)
+ if (!current->mm)
+ ret = -EINTR;
+ else if (bio_data_dir(bio) == READ)
ret = bio_copy_to_iter(bio, bmd->iter);
if (bmd->is_our_pages)
bio_free_pages(bio);
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 5a37188..66e6f1a 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -788,6 +788,7 @@
{
struct gendisk *disk;
struct blkcg_gq *blkg;
+ struct module *owner;
unsigned int major, minor;
int key_len, part, ret;
char *body;
@@ -804,7 +805,9 @@
if (!disk)
return -ENODEV;
if (part) {
+ owner = disk->fops->owner;
put_disk(disk);
+ module_put(owner);
return -ENODEV;
}
@@ -820,7 +823,9 @@
ret = PTR_ERR(blkg);
rcu_read_unlock();
spin_unlock_irq(disk->queue->queue_lock);
+ owner = disk->fops->owner;
put_disk(disk);
+ module_put(owner);
/*
* If queue was bypassing, we should retry. Do so after a
* short msleep(). It isn't strictly necessary but queue
@@ -851,9 +856,13 @@
void blkg_conf_finish(struct blkg_conf_ctx *ctx)
__releases(ctx->disk->queue->queue_lock) __releases(rcu)
{
+ struct module *owner;
+
spin_unlock_irq(ctx->disk->queue->queue_lock);
rcu_read_unlock();
+ owner = ctx->disk->fops->owner;
put_disk(ctx->disk);
+ module_put(owner);
}
EXPORT_SYMBOL_GPL(blkg_conf_finish);
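The ordering in these hunks is deliberate: disk->fops->owner has to be read before put_disk(), because dropping the last reference can free the gendisk the pointer is read from. A userspace model of the same save-then-release pattern; all types here are stand-ins:

#include <stdio.h>
#include <stdlib.h>

struct module { int refs; };
struct disk   { struct module *owner; };

static void module_put(struct module *m) { m->refs--; }
static void put_disk(struct disk *d)     { free(d); }	/* may free 'd' */

int main(void)
{
	struct module mod = { .refs = 1 };
	struct disk *disk = malloc(sizeof(*disk));
	struct module *owner;

	if (!disk)
		return 1;
	disk->owner = &mod;

	owner = disk->owner;	/* read while 'disk' is still valid */
	put_disk(disk);		/* 'disk' must not be touched after this */
	module_put(owner);	/* safe: uses the saved pointer */

	printf("module refs now %d\n", mod.refs);
	return 0;
}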
diff --git a/block/blk-core.c b/block/blk-core.c
index ab51685..b83d297 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -2455,14 +2455,16 @@
rq = NULL;
break;
- } else if (ret == BLKPREP_KILL) {
+ } else if (ret == BLKPREP_KILL || ret == BLKPREP_INVALID) {
+ int err = (ret == BLKPREP_INVALID) ? -EREMOTEIO : -EIO;
+
rq->cmd_flags |= REQ_QUIET;
/*
* Mark this request as started so we don't trigger
* any debug logic in the end I/O path.
*/
blk_start_request(rq);
- __blk_end_request_all(rq, -EIO);
+ __blk_end_request_all(rq, err);
} else {
printk(KERN_ERR "%s: bad return=%d\n", __func__, ret);
break;
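The new BLKPREP_INVALID return distinguishes a malformed or unsupported command (-EREMOTEIO) from a plain fatal I/O error (-EIO). A compact sketch of the mapping; the enum ordering below mirrors the kernel's definitions, but the helper name is invented:

#include <errno.h>
#include <stdio.h>

enum { BLKPREP_OK, BLKPREP_KILL, BLKPREP_DEFER, BLKPREP_INVALID };

static int blkprep_to_errno(int ret)
{
	switch (ret) {
	case BLKPREP_INVALID:
		return -EREMOTEIO;	/* malformed or unsupported command */
	case BLKPREP_KILL:
		return -EIO;		/* plain fatal I/O error */
	default:
		return 0;
	}
}

int main(void)
{
	printf("KILL    -> %d\n", blkprep_to_errno(BLKPREP_KILL));
	printf("INVALID -> %d\n", blkprep_to_errno(BLKPREP_INVALID));
	return 0;
}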
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4c0622f..56c0a72 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -599,8 +599,10 @@
* If a request wasn't started before the queue was
* marked dying, kill it here or it'll go unnoticed.
*/
- if (unlikely(blk_queue_dying(rq->q)))
- blk_mq_complete_request(rq, -EIO);
+ if (unlikely(blk_queue_dying(rq->q))) {
+ rq->errors = -EIO;
+ blk_mq_end_request(rq, rq->errors);
+ }
return;
}
diff --git a/block/blk-settings.c b/block/blk-settings.c
index dd49735..c7bb666 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -91,8 +91,8 @@
lim->seg_boundary_mask = BLK_SEG_BOUNDARY_MASK;
lim->virt_boundary_mask = 0;
lim->max_segment_size = BLK_MAX_SEGMENT_SIZE;
- lim->max_sectors = lim->max_dev_sectors = lim->max_hw_sectors =
- BLK_SAFE_MAX_SECTORS;
+ lim->max_sectors = lim->max_hw_sectors = BLK_SAFE_MAX_SECTORS;
+ lim->max_dev_sectors = 0;
lim->chunk_sectors = 0;
lim->max_write_same_sectors = 0;
lim->max_discard_sectors = 0;
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index e140cc4..dd937630 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -147,10 +147,9 @@
static ssize_t queue_discard_max_hw_show(struct request_queue *q, char *page)
{
- unsigned long long val;
- val = q->limits.max_hw_discard_sectors << 9;
- return sprintf(page, "%llu\n", val);
+ return sprintf(page, "%llu\n",
+ (unsigned long long)q->limits.max_hw_discard_sectors << 9);
}
static ssize_t queue_discard_max_show(struct request_queue *q, char *page)
diff --git a/block/deadline-iosched.c b/block/deadline-iosched.c
index a753df2..d0dd788 100644
--- a/block/deadline-iosched.c
+++ b/block/deadline-iosched.c
@@ -39,7 +39,6 @@
*/
struct request *next_rq[2];
unsigned int batching; /* number of sequential requests made */
- sector_t last_sector; /* head position */
unsigned int starved; /* times reads have starved writes */
/*
@@ -210,8 +209,6 @@
dd->next_rq[WRITE] = NULL;
dd->next_rq[data_dir] = deadline_latter_request(rq);
- dd->last_sector = rq_end_sector(rq);
-
/*
* take it off the sort and fifo list, move
* to dispatch queue
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 38c1aa8..28556fc 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -65,18 +65,10 @@
struct skcipher_async_rsgl first_sgl;
struct list_head list;
struct scatterlist *tsg;
- char iv[];
+ atomic_t *inflight;
+ struct skcipher_request req;
};
-#define GET_SREQ(areq, ctx) (struct skcipher_async_req *)((char *)areq + \
- crypto_skcipher_reqsize(crypto_skcipher_reqtfm(&ctx->req)))
-
-#define GET_REQ_SIZE(ctx) \
- crypto_skcipher_reqsize(crypto_skcipher_reqtfm(&ctx->req))
-
-#define GET_IV_SIZE(ctx) \
- crypto_skcipher_ivsize(crypto_skcipher_reqtfm(&ctx->req))
-
#define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \
sizeof(struct scatterlist) - 1)
@@ -102,15 +94,12 @@
static void skcipher_async_cb(struct crypto_async_request *req, int err)
{
- struct sock *sk = req->data;
- struct alg_sock *ask = alg_sk(sk);
- struct skcipher_ctx *ctx = ask->private;
- struct skcipher_async_req *sreq = GET_SREQ(req, ctx);
+ struct skcipher_async_req *sreq = req->data;
struct kiocb *iocb = sreq->iocb;
- atomic_dec(&ctx->inflight);
+ atomic_dec(sreq->inflight);
skcipher_free_async_sgls(sreq);
- kfree(req);
+ kzfree(sreq);
iocb->ki_complete(iocb, err, err);
}
@@ -306,8 +295,11 @@
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
+ struct sock *psk = ask->parent;
+ struct alg_sock *pask = alg_sk(psk);
struct skcipher_ctx *ctx = ask->private;
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(&ctx->req);
+ struct skcipher_tfm *skc = pask->private;
+ struct crypto_skcipher *tfm = skc->skcipher;
unsigned ivsize = crypto_skcipher_ivsize(tfm);
struct skcipher_sg_list *sgl;
struct af_alg_control con = {};
@@ -509,37 +501,43 @@
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
+ struct sock *psk = ask->parent;
+ struct alg_sock *pask = alg_sk(psk);
struct skcipher_ctx *ctx = ask->private;
+ struct skcipher_tfm *skc = pask->private;
+ struct crypto_skcipher *tfm = skc->skcipher;
struct skcipher_sg_list *sgl;
struct scatterlist *sg;
struct skcipher_async_req *sreq;
struct skcipher_request *req;
struct skcipher_async_rsgl *last_rsgl = NULL;
- unsigned int txbufs = 0, len = 0, tx_nents = skcipher_all_sg_nents(ctx);
- unsigned int reqlen = sizeof(struct skcipher_async_req) +
- GET_REQ_SIZE(ctx) + GET_IV_SIZE(ctx);
+ unsigned int txbufs = 0, len = 0, tx_nents;
+ unsigned int reqsize = crypto_skcipher_reqsize(tfm);
+ unsigned int ivsize = crypto_skcipher_ivsize(tfm);
int err = -ENOMEM;
bool mark = false;
+ char *iv;
+
+ sreq = kzalloc(sizeof(*sreq) + reqsize + ivsize, GFP_KERNEL);
+ if (unlikely(!sreq))
+ goto out;
+
+ req = &sreq->req;
+ iv = (char *)(req + 1) + reqsize;
+ sreq->iocb = msg->msg_iocb;
+ INIT_LIST_HEAD(&sreq->list);
+ sreq->inflight = &ctx->inflight;
lock_sock(sk);
- req = kmalloc(reqlen, GFP_KERNEL);
- if (unlikely(!req))
- goto unlock;
-
- sreq = GET_SREQ(req, ctx);
- sreq->iocb = msg->msg_iocb;
- memset(&sreq->first_sgl, '\0', sizeof(struct skcipher_async_rsgl));
- INIT_LIST_HEAD(&sreq->list);
+ tx_nents = skcipher_all_sg_nents(ctx);
sreq->tsg = kcalloc(tx_nents, sizeof(*sg), GFP_KERNEL);
- if (unlikely(!sreq->tsg)) {
- kfree(req);
+ if (unlikely(!sreq->tsg))
goto unlock;
- }
sg_init_table(sreq->tsg, tx_nents);
- memcpy(sreq->iv, ctx->iv, GET_IV_SIZE(ctx));
- skcipher_request_set_tfm(req, crypto_skcipher_reqtfm(&ctx->req));
- skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
- skcipher_async_cb, sk);
+ memcpy(iv, ctx->iv, ivsize);
+ skcipher_request_set_tfm(req, tfm);
+ skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
+ skcipher_async_cb, sreq);
while (iov_iter_count(&msg->msg_iter)) {
struct skcipher_async_rsgl *rsgl;
@@ -615,20 +613,22 @@
sg_mark_end(sreq->tsg + txbufs - 1);
skcipher_request_set_crypt(req, sreq->tsg, sreq->first_sgl.sgl.sg,
- len, sreq->iv);
+ len, iv);
err = ctx->enc ? crypto_skcipher_encrypt(req) :
crypto_skcipher_decrypt(req);
if (err == -EINPROGRESS) {
atomic_inc(&ctx->inflight);
err = -EIOCBQUEUED;
+ sreq = NULL;
goto unlock;
}
free:
skcipher_free_async_sgls(sreq);
- kfree(req);
unlock:
skcipher_wmem_wakeup(sk);
release_sock(sk);
+ kzfree(sreq);
+out:
return err;
}
@@ -637,9 +637,12 @@
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
+ struct sock *psk = ask->parent;
+ struct alg_sock *pask = alg_sk(psk);
struct skcipher_ctx *ctx = ask->private;
- unsigned bs = crypto_skcipher_blocksize(crypto_skcipher_reqtfm(
- &ctx->req));
+ struct skcipher_tfm *skc = pask->private;
+ struct crypto_skcipher *tfm = skc->skcipher;
+ unsigned bs = crypto_skcipher_blocksize(tfm);
struct skcipher_sg_list *sgl;
struct scatterlist *sg;
int err = -EAGAIN;
@@ -947,7 +950,8 @@
ask->private = ctx;
skcipher_request_set_tfm(&ctx->req, skcipher);
- skcipher_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ skcipher_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_SLEEP |
+ CRYPTO_TFM_REQ_MAY_BACKLOG,
af_alg_complete, &ctx->completion);
sk->sk_destruct = skcipher_sock_destruct;
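The reworked async path above packs everything into a single allocation: the skcipher_async_req, the tfm-specific request context (reqsize bytes), and the IV, so the completion callback can reach all of it from one pointer and kzfree() wipes it in one go. A userspace sketch of that layout, with stand-in sizes and types:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct req  { int dummy; };			/* public header; ctx bytes follow */
struct sreq { int inflight; struct req req; };	/* reqsize + ivsize bytes follow */

int main(void)
{
	size_t reqsize = 64;	/* stand-in for crypto_skcipher_reqsize(tfm) */
	size_t ivsize  = 16;	/* stand-in for crypto_skcipher_ivsize(tfm) */
	struct sreq *sreq = calloc(1, sizeof(*sreq) + reqsize + ivsize);
	char *iv;

	if (!sreq)
		return 1;
	iv = (char *)(&sreq->req + 1) + reqsize;	/* IV sits after the request ctx */
	memset(iv, 0xab, ivsize);

	printf("iv offset = %zu\n", (size_t)(iv - (char *)sreq));
	free(sreq);
	return 0;
}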
diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c
index 237f379..43fe85f2 100644
--- a/crypto/crypto_user.c
+++ b/crypto/crypto_user.c
@@ -499,6 +499,7 @@
if (link->dump == NULL)
return -EINVAL;
+ down_read(&crypto_alg_sem);
list_for_each_entry(alg, &crypto_alg_list, cra_list)
dump_alloc += CRYPTO_REPORT_MAXSIZE;
@@ -508,8 +509,11 @@
.done = link->done,
.min_dump_alloc = dump_alloc,
};
- return netlink_dump_start(crypto_nlsk, skb, nlh, &c);
+ err = netlink_dump_start(crypto_nlsk, skb, nlh, &c);
}
+ up_read(&crypto_alg_sem);
+
+ return err;
}
err = nlmsg_parse(nlh, crypto_msg_min[type], attrs, CRYPTOCFGA_MAX,
diff --git a/drivers/acpi/nfit.c b/drivers/acpi/nfit.c
index ad6d8c6..fb53db1 100644
--- a/drivers/acpi/nfit.c
+++ b/drivers/acpi/nfit.c
@@ -469,37 +469,16 @@
nfit_mem->bdw = NULL;
}
-static int nfit_mem_add(struct acpi_nfit_desc *acpi_desc,
+static void nfit_mem_init_bdw(struct acpi_nfit_desc *acpi_desc,
struct nfit_mem *nfit_mem, struct acpi_nfit_system_address *spa)
{
u16 dcr = __to_nfit_memdev(nfit_mem)->region_index;
struct nfit_memdev *nfit_memdev;
struct nfit_flush *nfit_flush;
- struct nfit_dcr *nfit_dcr;
struct nfit_bdw *nfit_bdw;
struct nfit_idt *nfit_idt;
u16 idt_idx, range_index;
- list_for_each_entry(nfit_dcr, &acpi_desc->dcrs, list) {
- if (nfit_dcr->dcr->region_index != dcr)
- continue;
- nfit_mem->dcr = nfit_dcr->dcr;
- break;
- }
-
- if (!nfit_mem->dcr) {
- dev_dbg(acpi_desc->dev, "SPA %d missing:%s%s\n",
- spa->range_index, __to_nfit_memdev(nfit_mem)
- ? "" : " MEMDEV", nfit_mem->dcr ? "" : " DCR");
- return -ENODEV;
- }
-
- /*
- * We've found enough to create an nvdimm, optionally
- * find an associated BDW
- */
- list_add(&nfit_mem->list, &acpi_desc->dimms);
-
list_for_each_entry(nfit_bdw, &acpi_desc->bdws, list) {
if (nfit_bdw->bdw->region_index != dcr)
continue;
@@ -508,12 +487,12 @@
}
if (!nfit_mem->bdw)
- return 0;
+ return;
nfit_mem_find_spa_bdw(acpi_desc, nfit_mem);
if (!nfit_mem->spa_bdw)
- return 0;
+ return;
range_index = nfit_mem->spa_bdw->range_index;
list_for_each_entry(nfit_memdev, &acpi_desc->memdevs, list) {
@@ -538,8 +517,6 @@
}
break;
}
-
- return 0;
}
static int nfit_mem_dcr_init(struct acpi_nfit_desc *acpi_desc,
@@ -548,7 +525,6 @@
struct nfit_mem *nfit_mem, *found;
struct nfit_memdev *nfit_memdev;
int type = nfit_spa_type(spa);
- u16 dcr;
switch (type) {
case NFIT_SPA_DCR:
@@ -559,14 +535,18 @@
}
list_for_each_entry(nfit_memdev, &acpi_desc->memdevs, list) {
- int rc;
+ struct nfit_dcr *nfit_dcr;
+ u32 device_handle;
+ u16 dcr;
if (nfit_memdev->memdev->range_index != spa->range_index)
continue;
found = NULL;
dcr = nfit_memdev->memdev->region_index;
+ device_handle = nfit_memdev->memdev->device_handle;
list_for_each_entry(nfit_mem, &acpi_desc->dimms, list)
- if (__to_nfit_memdev(nfit_mem)->region_index == dcr) {
+ if (__to_nfit_memdev(nfit_mem)->device_handle
+ == device_handle) {
found = nfit_mem;
break;
}
@@ -579,6 +559,31 @@
if (!nfit_mem)
return -ENOMEM;
INIT_LIST_HEAD(&nfit_mem->list);
+ list_add(&nfit_mem->list, &acpi_desc->dimms);
+ }
+
+ list_for_each_entry(nfit_dcr, &acpi_desc->dcrs, list) {
+ if (nfit_dcr->dcr->region_index != dcr)
+ continue;
+ /*
+ * Record the control region for the dimm. For
+ * the ACPI 6.1 case, where there are separate
+ * control regions for the pmem vs blk
+ * interfaces, be sure to record the extended
+ * blk details.
+ */
+ if (!nfit_mem->dcr)
+ nfit_mem->dcr = nfit_dcr->dcr;
+ else if (nfit_mem->dcr->windows == 0
+ && nfit_dcr->dcr->windows)
+ nfit_mem->dcr = nfit_dcr->dcr;
+ break;
+ }
+
+ if (dcr && !nfit_mem->dcr) {
+ dev_err(acpi_desc->dev, "SPA %d missing DCR %d\n",
+ spa->range_index, dcr);
+ return -ENODEV;
}
if (type == NFIT_SPA_DCR) {
@@ -595,6 +600,7 @@
nfit_mem->idt_dcr = nfit_idt->idt;
break;
}
+ nfit_mem_init_bdw(acpi_desc, nfit_mem, spa);
} else {
/*
* A single dimm may belong to multiple SPA-PM
@@ -603,13 +609,6 @@
*/
nfit_mem->memdev_pmem = nfit_memdev->memdev;
}
-
- if (found)
- continue;
-
- rc = nfit_mem_add(acpi_desc, nfit_mem, spa);
- if (rc)
- return rc;
}
return 0;
@@ -1504,9 +1503,7 @@
case 1:
/* ARS unsupported, but we should never get here */
return 0;
- case 2:
- return -EINVAL;
- case 3:
+ case 6:
/* ARS is in progress */
msleep(1000);
break;
@@ -1517,13 +1514,13 @@
}
static int ars_get_status(struct nvdimm_bus_descriptor *nd_desc,
- struct nd_cmd_ars_status *cmd)
+ struct nd_cmd_ars_status *cmd, u32 size)
{
int rc;
while (1) {
rc = nd_desc->ndctl(nd_desc, NULL, ND_CMD_ARS_STATUS, cmd,
- sizeof(*cmd));
+ size);
if (rc || cmd->status & 0xffff)
return -ENXIO;
@@ -1538,6 +1535,8 @@
case 2:
/* No ARS performed for the current boot */
return 0;
+ case 3:
+ /* TODO: error list overflow support */
default:
return -ENXIO;
}
@@ -1581,6 +1580,7 @@
struct nd_cmd_ars_start *ars_start = NULL;
struct nd_cmd_ars_cap *ars_cap = NULL;
u64 start, len, cur, remaining;
+ u32 ars_status_size;
int rc;
ars_cap = kzalloc(sizeof(*ars_cap), GFP_KERNEL);
@@ -1610,14 +1610,14 @@
* Check if a full-range ARS has been run. If so, use those results
* without having to start a new ARS.
*/
- ars_status = kzalloc(ars_cap->max_ars_out + sizeof(*ars_status),
- GFP_KERNEL);
+ ars_status_size = ars_cap->max_ars_out;
+ ars_status = kzalloc(ars_status_size, GFP_KERNEL);
if (!ars_status) {
rc = -ENOMEM;
goto out;
}
- rc = ars_get_status(nd_desc, ars_status);
+ rc = ars_get_status(nd_desc, ars_status, ars_status_size);
if (rc)
goto out;
@@ -1647,7 +1647,7 @@
if (rc)
goto out;
- rc = ars_get_status(nd_desc, ars_status);
+ rc = ars_get_status(nd_desc, ars_status, ars_status_size);
if (rc)
goto out;
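ars_get_status() now takes the caller's buffer size because the status payload is variable-length; sizeof(*cmd) covers only the fixed header and would make the firmware truncate its record list. An illustrative sketch of the pattern, with stand-in names:

#include <stdio.h>
#include <stdlib.h>

struct ars_status {
	unsigned int status;
	unsigned int out_length;
	/* variable-length error records follow */
};

static int ars_cmd(struct ars_status *buf, unsigned int size)
{
	/* a real call would hand 'size' down so firmware knows how much
	 * room the record list has */
	printf("passing %u bytes (fixed header alone is %zu)\n",
	       size, sizeof(*buf));
	return 0;
}

int main(void)
{
	unsigned int max_ars_out = 1024;	/* from the ARS capability query */
	struct ars_status *buf = calloc(1, max_ars_out);

	if (!buf)
		return 1;
	ars_cmd(buf, max_ars_out);		/* not sizeof(*buf) */
	free(buf);
	return 0;
}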
diff --git a/drivers/acpi/pci_irq.c b/drivers/acpi/pci_irq.c
index d30184c..c8e169e 100644
--- a/drivers/acpi/pci_irq.c
+++ b/drivers/acpi/pci_irq.c
@@ -406,7 +406,7 @@
return 0;
}
- if (pci_has_managed_irq(dev))
+ if (dev->irq_managed && dev->irq > 0)
return 0;
entry = acpi_pci_irq_lookup(dev, pin);
@@ -451,7 +451,8 @@
kfree(entry);
return rc;
}
- pci_set_managed_irq(dev, rc);
+ dev->irq = rc;
+ dev->irq_managed = 1;
if (link)
snprintf(link_desc, sizeof(link_desc), " -> Link[%s]", link);
@@ -474,9 +475,17 @@
u8 pin;
pin = dev->pin;
- if (!pin || !pci_has_managed_irq(dev))
+ if (!pin || !dev->irq_managed || dev->irq <= 0)
return;
+ /* Keep IOAPIC pin configuration when suspending */
+ if (dev->dev.power.is_prepared)
+ return;
+#ifdef CONFIG_PM
+ if (dev->dev.power.runtime_status == RPM_SUSPENDING)
+ return;
+#endif
+
entry = acpi_pci_irq_lookup(dev, pin);
if (!entry)
return;
@@ -496,6 +505,6 @@
dev_dbg(&dev->dev, "PCI INT %c disabled\n", pin_name(pin));
if (gsi >= 0) {
acpi_unregister_gsi(gsi);
- pci_reset_managed_irq(dev);
+ dev->irq_managed = 0;
}
}
diff --git a/drivers/acpi/pci_link.c b/drivers/acpi/pci_link.c
index fa28635..ededa90 100644
--- a/drivers/acpi/pci_link.c
+++ b/drivers/acpi/pci_link.c
@@ -4,7 +4,6 @@
* Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
* Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
* Copyright (C) 2002 Dominik Brodowski <devel@brodo.de>
- * Copyright (c) 2015, The Linux Foundation. All rights reserved.
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
@@ -438,6 +437,7 @@
* enabled system.
*/
+#define ACPI_MAX_IRQS 256
#define ACPI_MAX_ISA_IRQ 16
#define PIRQ_PENALTY_PCI_AVAILABLE (0)
@@ -447,7 +447,7 @@
#define PIRQ_PENALTY_ISA_USED (16*16*16*16*16)
#define PIRQ_PENALTY_ISA_ALWAYS (16*16*16*16*16*16)
-static int acpi_irq_isa_penalty[ACPI_MAX_ISA_IRQ] = {
+static int acpi_irq_penalty[ACPI_MAX_IRQS] = {
PIRQ_PENALTY_ISA_ALWAYS, /* IRQ0 timer */
PIRQ_PENALTY_ISA_ALWAYS, /* IRQ1 keyboard */
PIRQ_PENALTY_ISA_ALWAYS, /* IRQ2 cascade */
@@ -464,68 +464,9 @@
PIRQ_PENALTY_ISA_USED, /* IRQ13 fpe, sometimes */
PIRQ_PENALTY_ISA_USED, /* IRQ14 ide0 */
PIRQ_PENALTY_ISA_USED, /* IRQ15 ide1 */
+ /* >IRQ15 */
};
-struct irq_penalty_info {
- int irq;
- int penalty;
- struct list_head node;
-};
-
-static LIST_HEAD(acpi_irq_penalty_list);
-
-static int acpi_irq_get_penalty(int irq)
-{
- struct irq_penalty_info *irq_info;
-
- if (irq < ACPI_MAX_ISA_IRQ)
- return acpi_irq_isa_penalty[irq];
-
- list_for_each_entry(irq_info, &acpi_irq_penalty_list, node) {
- if (irq_info->irq == irq)
- return irq_info->penalty;
- }
-
- return 0;
-}
-
-static int acpi_irq_set_penalty(int irq, int new_penalty)
-{
- struct irq_penalty_info *irq_info;
-
- /* see if this is a ISA IRQ */
- if (irq < ACPI_MAX_ISA_IRQ) {
- acpi_irq_isa_penalty[irq] = new_penalty;
- return 0;
- }
-
- /* next, try to locate from the dynamic list */
- list_for_each_entry(irq_info, &acpi_irq_penalty_list, node) {
- if (irq_info->irq == irq) {
- irq_info->penalty = new_penalty;
- return 0;
- }
- }
-
- /* nope, let's allocate a slot for this IRQ */
- irq_info = kzalloc(sizeof(*irq_info), GFP_KERNEL);
- if (!irq_info)
- return -ENOMEM;
-
- irq_info->irq = irq;
- irq_info->penalty = new_penalty;
- list_add_tail(&irq_info->node, &acpi_irq_penalty_list);
-
- return 0;
-}
-
-static void acpi_irq_add_penalty(int irq, int penalty)
-{
- int curpen = acpi_irq_get_penalty(irq);
-
- acpi_irq_set_penalty(irq, curpen + penalty);
-}
-
int __init acpi_irq_penalty_init(void)
{
struct acpi_pci_link *link;
@@ -546,16 +487,15 @@
link->irq.possible_count;
for (i = 0; i < link->irq.possible_count; i++) {
- if (link->irq.possible[i] < ACPI_MAX_ISA_IRQ) {
- int irqpos = link->irq.possible[i];
-
- acpi_irq_add_penalty(irqpos, penalty);
- }
+ if (link->irq.possible[i] < ACPI_MAX_ISA_IRQ)
+ acpi_irq_penalty[link->irq.possible[i]] += penalty;
}
} else if (link->irq.active) {
- acpi_irq_add_penalty(link->irq.active,
- PIRQ_PENALTY_PCI_POSSIBLE);
+ acpi_irq_penalty[link->irq.active] +=
+ PIRQ_PENALTY_PCI_POSSIBLE;
}
}
@@ -607,12 +547,12 @@
* the use of IRQs 9, 10, 11, and >15.
*/
for (i = (link->irq.possible_count - 1); i >= 0; i--) {
- if (acpi_irq_get_penalty(irq) >
- acpi_irq_get_penalty(link->irq.possible[i]))
+ if (acpi_irq_penalty[irq] >
+ acpi_irq_penalty[link->irq.possible[i]])
irq = link->irq.possible[i];
}
}
- if (acpi_irq_get_penalty(irq) >= PIRQ_PENALTY_ISA_ALWAYS) {
+ if (acpi_irq_penalty[irq] >= PIRQ_PENALTY_ISA_ALWAYS) {
printk(KERN_ERR PREFIX "No IRQ available for %s [%s]. "
"Try pci=noacpi or acpi=off\n",
acpi_device_name(link->device),
@@ -628,8 +568,7 @@
acpi_device_bid(link->device));
return -ENODEV;
} else {
- acpi_irq_add_penalty(link->irq.active, PIRQ_PENALTY_PCI_USING);
-
+ acpi_irq_penalty[link->irq.active] += PIRQ_PENALTY_PCI_USING;
printk(KERN_WARNING PREFIX "%s [%s] enabled at IRQ %d\n",
acpi_device_name(link->device),
acpi_device_bid(link->device), link->irq.active);
@@ -839,7 +778,7 @@
}
/*
- * modify penalty from cmdline
+ * modify acpi_irq_penalty[] from cmdline
*/
static int __init acpi_irq_penalty_update(char *str, int used)
{
@@ -857,10 +796,13 @@
if (irq < 0)
continue;
+ if (irq >= ARRAY_SIZE(acpi_irq_penalty))
+ continue;
+
if (used)
- acpi_irq_add_penalty(irq, PIRQ_PENALTY_ISA_USED);
+ acpi_irq_penalty[irq] += PIRQ_PENALTY_ISA_USED;
else
- acpi_irq_set_penalty(irq, PIRQ_PENALTY_PCI_AVAILABLE);
+ acpi_irq_penalty[irq] = PIRQ_PENALTY_PCI_AVAILABLE;
if (retval != 2) /* no next number */
break;
@@ -877,15 +819,18 @@
*/
void acpi_penalize_isa_irq(int irq, int active)
{
- if (irq >= 0)
- acpi_irq_add_penalty(irq, active ?
- PIRQ_PENALTY_ISA_USED : PIRQ_PENALTY_PCI_USING);
+ if (irq >= 0 && irq < ARRAY_SIZE(acpi_irq_penalty)) {
+ if (active)
+ acpi_irq_penalty[irq] += PIRQ_PENALTY_ISA_USED;
+ else
+ acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING;
+ }
}
bool acpi_isa_irq_available(int irq)
{
- return irq >= 0 &&
- (acpi_irq_get_penalty(irq) < PIRQ_PENALTY_ISA_ALWAYS);
+ return irq >= 0 && (irq >= ARRAY_SIZE(acpi_irq_penalty) ||
+ acpi_irq_penalty[irq] < PIRQ_PENALTY_ISA_ALWAYS);
}
/*
@@ -895,18 +840,13 @@
*/
void acpi_penalize_sci_irq(int irq, int trigger, int polarity)
{
- int penalty;
-
- if (irq < 0)
- return;
-
- if (trigger != ACPI_MADT_TRIGGER_LEVEL ||
- polarity != ACPI_MADT_POLARITY_ACTIVE_LOW)
- penalty = PIRQ_PENALTY_ISA_ALWAYS;
- else
- penalty = PIRQ_PENALTY_PCI_USING;
-
- acpi_irq_add_penalty(irq, penalty);
+ if (irq >= 0 && irq < ARRAY_SIZE(acpi_irq_penalty)) {
+ if (trigger != ACPI_MADT_TRIGGER_LEVEL ||
+ polarity != ACPI_MADT_POLARITY_ACTIVE_LOW)
+ acpi_irq_penalty[irq] += PIRQ_PENALTY_ISA_ALWAYS;
+ else
+ acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING;
+ }
}
/*
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index f080a8b..16288e7 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -1321,6 +1321,7 @@
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
+ binder_size_t off_min;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
@@ -1522,18 +1523,24 @@
goto err_bad_offset;
}
off_end = (void *)offp + tr->offsets_size;
+ off_min = 0;
for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
if (*offp > t->buffer->data_size - sizeof(*fp) ||
+ *offp < off_min ||
t->buffer->data_size < sizeof(*fp) ||
!IS_ALIGNED(*offp, sizeof(u32))) {
- binder_user_error("%d:%d got transaction with invalid offset, %lld\n",
- proc->pid, thread->pid, (u64)*offp);
+ binder_user_error("%d:%d got transaction with invalid offset, %lld (min %lld, max %lld)\n",
+ proc->pid, thread->pid, (u64)*offp,
+ (u64)off_min,
+ (u64)(t->buffer->data_size -
+ sizeof(*fp)));
return_error = BR_FAILED_REPLY;
goto err_bad_offset;
}
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
+ off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
@@ -2074,7 +2081,7 @@
if (get_user(cookie, (binder_uintptr_t __user *)ptr))
return -EFAULT;
- ptr += sizeof(void *);
+ ptr += sizeof(cookie);
list_for_each_entry(w, &proc->delivered_death, entry) {
struct binder_ref_death *tmp_death = container_of(w, struct binder_ref_death, work);
@@ -3598,13 +3605,24 @@
static int binder_proc_show(struct seq_file *m, void *unused)
{
+ struct binder_proc *itr;
struct binder_proc *proc = m->private;
int do_lock = !binder_debug_no_lock;
+ bool valid_proc = false;
if (do_lock)
binder_lock(__func__);
- seq_puts(m, "binder proc state:\n");
- print_binder_proc(m, proc, 1);
+
+ hlist_for_each_entry(itr, &binder_procs, proc_node) {
+ if (itr == proc) {
+ valid_proc = true;
+ break;
+ }
+ }
+ if (valid_proc) {
+ seq_puts(m, "binder proc state:\n");
+ print_binder_proc(m, proc, 1);
+ }
if (do_lock)
binder_unlock(__func__);
return 0;
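The off_min bookkeeping added to the transaction validation enforces that offsets are monotonically increasing and non-overlapping, so one flat_binder_object can no longer alias another inside the buffer. A userspace sketch of that check, with a stand-in object size:

#include <stdio.h>
#include <stdint.h>

#define OBJ_SIZE 24u	/* stand-in for sizeof(struct flat_binder_object) */

static int validate(const uint64_t *offs, size_t n, uint64_t buf_size)
{
	uint64_t off_min = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (offs[i] < off_min || offs[i] > buf_size - OBJ_SIZE)
			return -1;		/* overlapping or out of range */
		off_min = offs[i] + OBJ_SIZE;	/* next object must start here or later */
	}
	return 0;
}

int main(void)
{
	uint64_t ok[]  = { 0, 24, 48 };
	uint64_t bad[] = { 0, 8 };	/* second object overlaps the first */

	printf("ok:  %d\n", validate(ok, 3, 96));
	printf("bad: %d\n", validate(bad, 2, 96));
	return 0;
}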
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index 594fcab..546a369 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -264,6 +264,26 @@
{ PCI_VDEVICE(INTEL, 0x3b2b), board_ahci }, /* PCH RAID */
{ PCI_VDEVICE(INTEL, 0x3b2c), board_ahci }, /* PCH RAID */
{ PCI_VDEVICE(INTEL, 0x3b2f), board_ahci }, /* PCH AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b0), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b1), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b2), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b3), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b4), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b5), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b6), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b7), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19bE), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19bF), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19c0), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19c1), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19c2), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19c3), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19c4), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19c5), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19c6), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19c7), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19cE), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19cF), board_ahci }, /* DNV AHCI */
{ PCI_VDEVICE(INTEL, 0x1c02), board_ahci }, /* CPT AHCI */
{ PCI_VDEVICE(INTEL, 0x1c03), board_ahci }, /* CPT AHCI */
{ PCI_VDEVICE(INTEL, 0x1c04), board_ahci }, /* CPT RAID */
diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
index a4faa43..a44c75d 100644
--- a/drivers/ata/ahci.h
+++ b/drivers/ata/ahci.h
@@ -250,6 +250,7 @@
AHCI_HFLAG_MULTI_MSI = 0,
AHCI_HFLAG_MULTI_MSIX = 0,
#endif
+ AHCI_HFLAG_WAKE_BEFORE_STOP = (1 << 22), /* wake before DMA stop */
/* ap->flags bits */
diff --git a/drivers/ata/ahci_brcmstb.c b/drivers/ata/ahci_brcmstb.c
index b36cae2..e87bcec 100644
--- a/drivers/ata/ahci_brcmstb.c
+++ b/drivers/ata/ahci_brcmstb.c
@@ -317,6 +317,7 @@
if (IS_ERR(hpriv))
return PTR_ERR(hpriv);
hpriv->plat_data = priv;
+ hpriv->flags = AHCI_HFLAG_WAKE_BEFORE_STOP;
brcm_sata_alpm_init(hpriv);
diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
index d61740e..4029679 100644
--- a/drivers/ata/libahci.c
+++ b/drivers/ata/libahci.c
@@ -496,8 +496,8 @@
}
}
- /* fabricate port_map from cap.nr_ports */
- if (!port_map) {
+ /* fabricate port_map from cap.nr_ports for < AHCI 1.3 */
+ if (!port_map && vers < 0x10300) {
port_map = (1 << ahci_nr_ports(cap)) - 1;
dev_warn(dev, "forcing PORTS_IMPL to 0x%x\n", port_map);
@@ -593,8 +593,22 @@
int ahci_stop_engine(struct ata_port *ap)
{
void __iomem *port_mmio = ahci_port_base(ap);
+ struct ahci_host_priv *hpriv = ap->host->private_data;
u32 tmp;
+ /*
+ * On some controllers, stopping a port's DMA engine while the port
+ * is in ALPM state (partial or slumber) results in failures on
+ * subsequent DMA engine starts. For those controllers, put the
+ * port back in active state before stopping its DMA engine.
+ */
+ if ((hpriv->flags & AHCI_HFLAG_WAKE_BEFORE_STOP) &&
+ (ap->link.lpm_policy > ATA_LPM_MAX_POWER) &&
+ ahci_set_lpm(&ap->link, ATA_LPM_MAX_POWER, ATA_LPM_WAKE_ONLY)) {
+ dev_err(ap->host->dev, "Failed to wake up port before engine stop\n");
+ return -EIO;
+ }
+
tmp = readl(port_mmio + PORT_CMD);
/* check if the HBA is idle */
@@ -689,6 +703,9 @@
void __iomem *port_mmio = ahci_port_base(ap);
if (policy != ATA_LPM_MAX_POWER) {
+ /* wakeup flag only applies to the max power policy */
+ hints &= ~ATA_LPM_WAKE_ONLY;
+
/*
* Disable interrupts on Phy Ready. This keeps us from
* getting woken up due to spurious phy ready
@@ -704,7 +721,8 @@
u32 cmd = readl(port_mmio + PORT_CMD);
if (policy == ATA_LPM_MAX_POWER || !(hints & ATA_LPM_HIPM)) {
- cmd &= ~(PORT_CMD_ASP | PORT_CMD_ALPE);
+ if (!(hints & ATA_LPM_WAKE_ONLY))
+ cmd &= ~(PORT_CMD_ASP | PORT_CMD_ALPE);
cmd |= PORT_CMD_ICC_ACTIVE;
writel(cmd, port_mmio + PORT_CMD);
@@ -712,6 +730,9 @@
/* wait 10ms to be sure we've come out of LPM state */
ata_msleep(ap, 10);
+
+ if (hints & ATA_LPM_WAKE_ONLY)
+ return 0;
} else {
cmd |= PORT_CMD_ALPE;
if (policy == ATA_LPM_MIN_POWER)
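The AHCI hunks above gate a new pre-step behind AHCI_HFLAG_WAKE_BEFORE_STOP: affected ports are brought out of partial/slumber before their DMA engine is stopped. A minimal model of the control flow; every type here is a simplified stand-in:

#include <stdbool.h>
#include <stdio.h>

enum lpm_policy { LPM_MAX_POWER, LPM_MED_POWER, LPM_MIN_POWER };

struct port {
	enum lpm_policy policy;
	bool wake_before_stop;	/* models AHCI_HFLAG_WAKE_BEFORE_STOP */
};

static void wake_port(struct port *p)
{
	p->policy = LPM_MAX_POWER;	/* leave partial/slumber */
}

static int stop_engine(struct port *p)
{
	if (p->wake_before_stop && p->policy > LPM_MAX_POWER)
		wake_port(p);		/* wake before touching the DMA engine */
	printf("stopping engine with link at policy %d\n", p->policy);
	return 0;
}

int main(void)
{
	struct port p = { .policy = LPM_MIN_POWER, .wake_before_stop = true };

	return stop_engine(&p);
}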
diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index cbb7471..55e257c 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -4125,6 +4125,7 @@
{ "SAMSUNG CD-ROM SN-124", "N001", ATA_HORKAGE_NODMA },
{ "Seagate STT20000A", NULL, ATA_HORKAGE_NODMA },
{ " 2GB ATA Flash Disk", "ADMA428M", ATA_HORKAGE_NODMA },
+ { "VRFDFC22048UCHC-TE*", NULL, ATA_HORKAGE_NODMA },
/* Odd clown on sil3726/4726 PMPs */
{ "Config Disk", NULL, ATA_HORKAGE_DISABLE },
diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
index cdf6215..051b615 100644
--- a/drivers/ata/libata-sff.c
+++ b/drivers/ata/libata-sff.c
@@ -997,12 +997,9 @@
static void ata_hsm_qc_complete(struct ata_queued_cmd *qc, int in_wq)
{
struct ata_port *ap = qc->ap;
- unsigned long flags;
if (ap->ops->error_handler) {
if (in_wq) {
- spin_lock_irqsave(ap->lock, flags);
-
/* EH might have kicked in while host lock is
* released.
*/
@@ -1014,8 +1011,6 @@
} else
ata_port_freeze(ap);
}
-
- spin_unlock_irqrestore(ap->lock, flags);
} else {
if (likely(!(qc->err_mask & AC_ERR_HSM)))
ata_qc_complete(qc);
@@ -1024,10 +1019,8 @@
}
} else {
if (in_wq) {
- spin_lock_irqsave(ap->lock, flags);
ata_sff_irq_on(ap);
ata_qc_complete(qc);
- spin_unlock_irqrestore(ap->lock, flags);
} else
ata_qc_complete(qc);
}
@@ -1048,9 +1041,10 @@
{
struct ata_link *link = qc->dev->link;
struct ata_eh_info *ehi = &link->eh_info;
- unsigned long flags = 0;
int poll_next;
+ lockdep_assert_held(ap->lock);
+
WARN_ON_ONCE((qc->flags & ATA_QCFLAG_ACTIVE) == 0);
/* Make sure ata_sff_qc_issue() does not throw things
@@ -1112,14 +1106,6 @@
}
}
- /* Send the CDB (atapi) or the first data block (ata pio out).
- * During the state transition, interrupt handler shouldn't
- * be invoked before the data transfer is complete and
- * hsm_task_state is changed. Hence, the following locking.
- */
- if (in_wq)
- spin_lock_irqsave(ap->lock, flags);
-
if (qc->tf.protocol == ATA_PROT_PIO) {
/* PIO data out protocol.
* send first data block.
@@ -1135,9 +1121,6 @@
/* send CDB */
atapi_send_cdb(ap, qc);
- if (in_wq)
- spin_unlock_irqrestore(ap->lock, flags);
-
/* if polling, ata_sff_pio_task() handles the rest.
* otherwise, interrupt handler takes over from here.
*/
@@ -1296,7 +1279,8 @@
break;
default:
poll_next = 0;
- BUG();
+ WARN(true, "ata%d: SFF host state machine in invalid state %d",
+ ap->print_id, ap->hsm_task_state);
}
return poll_next;
@@ -1361,12 +1345,14 @@
u8 status;
int poll_next;
+ spin_lock_irq(ap->lock);
+
BUG_ON(ap->sff_pio_task_link == NULL);
/* qc can be NULL if timeout occurred */
qc = ata_qc_from_tag(ap, link->active_tag);
if (!qc) {
ap->sff_pio_task_link = NULL;
- return;
+ goto out_unlock;
}
fsm_start:
@@ -1381,11 +1367,14 @@
*/
status = ata_sff_busy_wait(ap, ATA_BUSY, 5);
if (status & ATA_BUSY) {
+ spin_unlock_irq(ap->lock);
ata_msleep(ap, 2);
+ spin_lock_irq(ap->lock);
+
status = ata_sff_busy_wait(ap, ATA_BUSY, 10);
if (status & ATA_BUSY) {
ata_sff_queue_pio_task(link, ATA_SHORT_PAUSE);
- return;
+ goto out_unlock;
}
}
@@ -1402,6 +1391,8 @@
*/
if (poll_next)
goto fsm_start;
+out_unlock:
+ spin_unlock_irq(ap->lock);
}
/**
diff --git a/drivers/base/component.c b/drivers/base/component.c
index 89f5cf68..04a1582 100644
--- a/drivers/base/component.c
+++ b/drivers/base/component.c
@@ -206,6 +206,8 @@
if (mc->release)
mc->release(master, mc->data);
}
+
+ kfree(match->compare);
}
static void devm_component_match_release(struct device *dev, void *res)
@@ -221,14 +223,14 @@
if (match->alloc == num)
return 0;
- new = devm_kmalloc_array(dev, num, sizeof(*new), GFP_KERNEL);
+ new = kmalloc_array(num, sizeof(*new), GFP_KERNEL);
if (!new)
return -ENOMEM;
if (match->compare) {
memcpy(new, match->compare, sizeof(*new) *
min(match->num, num));
- devm_kfree(dev, match->compare);
+ kfree(match->compare);
}
match->compare = new;
match->alloc = num;
@@ -283,6 +285,24 @@
}
EXPORT_SYMBOL(component_match_add_release);
+static void free_master(struct master *master)
+{
+ struct component_match *match = master->match;
+ int i;
+
+ list_del(&master->node);
+
+ if (match) {
+ for (i = 0; i < match->num; i++) {
+ struct component *c = match->compare[i].component;
+ if (c)
+ c->master = NULL;
+ }
+ }
+
+ kfree(master);
+}
+
int component_master_add_with_match(struct device *dev,
const struct component_master_ops *ops,
struct component_match *match)
@@ -309,11 +329,9 @@
ret = try_to_bring_up_master(master, NULL);
- if (ret < 0) {
- /* Delete off the list if we weren't successful */
- list_del(&master->node);
- kfree(master);
- }
+ if (ret < 0)
+ free_master(master);
+
mutex_unlock(&component_mutex);
return ret < 0 ? ret : 0;
@@ -324,25 +342,12 @@
const struct component_master_ops *ops)
{
struct master *master;
- int i;
mutex_lock(&component_mutex);
master = __master_find(dev, ops);
if (master) {
- struct component_match *match = master->match;
-
take_down_master(master);
-
- list_del(&master->node);
-
- if (match) {
- for (i = 0; i < match->num; i++) {
- struct component *c = match->compare[i].component;
- if (c)
- c->master = NULL;
- }
- }
- kfree(master);
+ free_master(master);
}
mutex_unlock(&component_mutex);
}
@@ -486,6 +491,8 @@
ret = try_to_bring_up_masters(component);
if (ret < 0) {
+ if (component->master)
+ remove_component(component->master, component);
list_del(&component->node);
kfree(component);
diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
index b9250e5..a7f4aa3 100644
--- a/drivers/base/firmware_class.c
+++ b/drivers/base/firmware_class.c
@@ -257,7 +257,7 @@
vunmap(buf->data);
for (i = 0; i < buf->nr_pages; i++)
__free_page(buf->pages[i]);
- kfree(buf->pages);
+ vfree(buf->pages);
} else
#endif
vfree(buf->data);
@@ -353,15 +353,15 @@
rc = fw_read_file_contents(file, buf);
fput(file);
if (rc)
- dev_warn(device, "firmware, attempted to load %s, but failed with error %d\n",
- path, rc);
+ dev_warn(device, "loading %s failed with error %d\n",
+ path, rc);
else
break;
}
__putname(path);
if (!rc) {
- dev_dbg(device, "firmware: direct-loading firmware %s\n",
+ dev_dbg(device, "direct-loading %s\n",
buf->fw_id);
mutex_lock(&fw_lock);
set_bit(FW_STATUS_DONE, &buf->status);
@@ -660,7 +660,7 @@
if (!test_bit(FW_STATUS_DONE, &fw_buf->status)) {
for (i = 0; i < fw_buf->nr_pages; i++)
__free_page(fw_buf->pages[i]);
- kfree(fw_buf->pages);
+ vfree(fw_buf->pages);
fw_buf->pages = NULL;
fw_buf->page_array_size = 0;
fw_buf->nr_pages = 0;
@@ -770,8 +770,7 @@
buf->page_array_size * 2);
struct page **new_pages;
- new_pages = kmalloc(new_array_size * sizeof(void *),
- GFP_KERNEL);
+ new_pages = vmalloc(new_array_size * sizeof(void *));
if (!new_pages) {
fw_load_abort(fw_priv);
return -ENOMEM;
@@ -780,7 +779,7 @@
buf->page_array_size * sizeof(void *));
memset(&new_pages[buf->page_array_size], 0, sizeof(void *) *
(new_array_size - buf->page_array_size));
- kfree(buf->pages);
+ vfree(buf->pages);
buf->pages = new_pages;
buf->page_array_size = new_array_size;
}
@@ -1051,7 +1050,7 @@
}
if (fw_get_builtin_firmware(firmware, name)) {
- dev_dbg(device, "firmware: using built-in firmware %s\n", name);
+ dev_dbg(device, "using built-in %s\n", name);
return 0; /* assigned */
}
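The kmalloc-to-vmalloc switch in firmware_class is about the size of the page-pointer array itself: it only needs to be virtually contiguous, and for large firmware images a physically contiguous allocation becomes unreliable. Rough arithmetic, assuming 4 KiB pages and 8-byte pointers:

#include <stdio.h>

int main(void)
{
	unsigned long fw_bytes = 256UL << 20;	/* a hypothetical 256 MiB image */
	unsigned long page_sz  = 4096;
	unsigned long nr_pages = fw_bytes / page_sz;
	unsigned long array_bytes = nr_pages * sizeof(void *);

	/* ~512 KiB of pointers on a 64-bit build: an order-7 physically
	 * contiguous kmalloc, exactly the kind of allocation that fails
	 * under fragmentation; vmalloc sidesteps that. */
	printf("page array needs %lu KiB\n", array_bytes / 1024);
	return 0;
}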
diff --git a/drivers/base/regmap/regmap-mmio.c b/drivers/base/regmap/regmap-mmio.c
index 8812bfb..eea5156 100644
--- a/drivers/base/regmap/regmap-mmio.c
+++ b/drivers/base/regmap/regmap-mmio.c
@@ -133,17 +133,17 @@
while (val_size) {
switch (ctx->val_bytes) {
case 1:
- __raw_writeb(*(u8 *)val, ctx->regs + offset);
+ writeb(*(u8 *)val, ctx->regs + offset);
break;
case 2:
- __raw_writew(*(u16 *)val, ctx->regs + offset);
+ writew(*(u16 *)val, ctx->regs + offset);
break;
case 4:
- __raw_writel(*(u32 *)val, ctx->regs + offset);
+ writel(*(u32 *)val, ctx->regs + offset);
break;
#ifdef CONFIG_64BIT
case 8:
- __raw_writeq(*(u64 *)val, ctx->regs + offset);
+ writeq(*(u64 *)val, ctx->regs + offset);
break;
#endif
default:
@@ -193,17 +193,17 @@
while (val_size) {
switch (ctx->val_bytes) {
case 1:
- *(u8 *)val = __raw_readb(ctx->regs + offset);
+ *(u8 *)val = readb(ctx->regs + offset);
break;
case 2:
- *(u16 *)val = __raw_readw(ctx->regs + offset);
+ *(u16 *)val = readw(ctx->regs + offset);
break;
case 4:
- *(u32 *)val = __raw_readl(ctx->regs + offset);
+ *(u32 *)val = readl(ctx->regs + offset);
break;
#ifdef CONFIG_64BIT
case 8:
- *(u64 *)val = __raw_readq(ctx->regs + offset);
+ *(u64 *)val = readq(ctx->regs + offset);
break;
#endif
default:
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 9e25120..84708a5 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -866,7 +866,7 @@
}
/* locks the driver */
-static int lock_fdc(int drive, bool interruptible)
+static int lock_fdc(int drive)
{
if (WARN(atomic_read(&usage_count) == 0,
"Trying to lock fdc while usage count=0\n"))
@@ -2173,7 +2173,7 @@
{
int ret;
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
set_floppy(drive);
@@ -2960,7 +2960,7 @@
{
int ret;
- if (lock_fdc(drive, interruptible))
+ if (lock_fdc(drive))
return -EINTR;
if (arg == FD_RESET_ALWAYS)
@@ -3243,7 +3243,7 @@
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
mutex_lock(&open_lock);
- if (lock_fdc(drive, true)) {
+ if (lock_fdc(drive)) {
mutex_unlock(&open_lock);
return -EINTR;
}
@@ -3263,7 +3263,7 @@
} else {
int oldStretch;
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
if (cmd != FDDEFPRM) {
/* notice a disk change immediately, else
@@ -3349,7 +3349,7 @@
if (type)
*g = &floppy_type[type];
else {
- if (lock_fdc(drive, false))
+ if (lock_fdc(drive))
return -EINTR;
if (poll_drive(false, 0) == -EINTR)
return -EINTR;
@@ -3433,7 +3433,7 @@
if (UDRS->fd_ref != 1)
/* somebody else has this drive open */
return -EBUSY;
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
/* do the actual eject. Fails on
@@ -3445,7 +3445,7 @@
process_fd_request();
return ret;
case FDCLRPRM:
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
current_type[drive] = NULL;
floppy_sizes[drive] = MAX_DISK_SIZE << 1;
@@ -3467,7 +3467,7 @@
UDP->flags &= ~FTD_MSG;
return 0;
case FDFMTBEG:
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
if (poll_drive(true, FD_RAW_NEED_DISK) == -EINTR)
return -EINTR;
@@ -3484,7 +3484,7 @@
return do_format(drive, &inparam.f);
case FDFMTEND:
case FDFLUSH:
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
return invalidate_drive(bdev);
case FDSETEMSGTRESH:
@@ -3507,7 +3507,7 @@
outparam = UDP;
break;
case FDPOLLDRVSTAT:
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
if (poll_drive(true, FD_RAW_NEED_DISK) == -EINTR)
return -EINTR;
@@ -3530,7 +3530,7 @@
case FDRAWCMD:
if (type)
return -EINVAL;
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
set_floppy(drive);
i = raw_cmd_ioctl(cmd, (void __user *)param);
@@ -3539,7 +3539,7 @@
process_fd_request();
return i;
case FDTWADDLE:
- if (lock_fdc(drive, true))
+ if (lock_fdc(drive))
return -EINTR;
twaddle();
process_fd_request();
@@ -3663,6 +3663,11 @@
opened_bdev[drive] = bdev;
+ if (!(mode & (FMODE_READ|FMODE_WRITE))) {
+ res = -EINVAL;
+ goto out;
+ }
+
res = -ENXIO;
if (!floppy_track_buffer) {
@@ -3706,21 +3711,20 @@
if (UFDCS->rawcmd == 1)
UFDCS->rawcmd = 2;
- if (!(mode & FMODE_NDELAY)) {
- if (mode & (FMODE_READ|FMODE_WRITE)) {
- UDRS->last_checked = 0;
- clear_bit(FD_OPEN_SHOULD_FAIL_BIT, &UDRS->flags);
- check_disk_change(bdev);
- if (test_bit(FD_DISK_CHANGED_BIT, &UDRS->flags))
- goto out;
- if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &UDRS->flags))
- goto out;
- }
- res = -EROFS;
- if ((mode & FMODE_WRITE) &&
- !test_bit(FD_DISK_WRITABLE_BIT, &UDRS->flags))
- goto out;
- }
+ UDRS->last_checked = 0;
+ clear_bit(FD_OPEN_SHOULD_FAIL_BIT, &UDRS->flags);
+ check_disk_change(bdev);
+ if (test_bit(FD_DISK_CHANGED_BIT, &UDRS->flags))
+ goto out;
+ if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &UDRS->flags))
+ goto out;
+
+ res = -EROFS;
+
+ if ((mode & FMODE_WRITE) &&
+ !test_bit(FD_DISK_WRITABLE_BIT, &UDRS->flags))
+ goto out;
+
mutex_unlock(&open_lock);
mutex_unlock(&floppy_mutex);
return 0;
@@ -3748,7 +3752,8 @@
return DISK_EVENT_MEDIA_CHANGE;
if (time_after(jiffies, UDRS->last_checked + UDP->checkfreq)) {
- lock_fdc(drive, false);
+ if (lock_fdc(drive))
+ return -EINTR;
poll_drive(false, 0);
process_fd_request();
}
@@ -3847,7 +3852,9 @@
"VFS: revalidate called on non-open device.\n"))
return -EFAULT;
- lock_fdc(drive, false);
+ res = lock_fdc(drive);
+ if (res)
+ return res;
cf = (test_bit(FD_DISK_CHANGED_BIT, &UDRS->flags) ||
test_bit(FD_VERIFY_BIT, &UDRS->flags));
if (!(cf || test_bit(drive, &fake_change) || drive_no_geom(drive))) {
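The floppy changes drop lock_fdc()'s unused 'interruptible' argument and, more importantly, stop ignoring its return value: the lock wait can be interrupted by a signal, and that error must propagate to the caller. A simplified userspace sketch of the corrected call pattern:

#include <errno.h>
#include <stdio.h>

static int lock_fdc(int drive)
{
	(void)drive;
	return -EINTR;		/* pretend a signal interrupted the wait */
}

static int revalidate(int drive)
{
	int res = lock_fdc(drive);

	if (res)		/* previously ignored; now propagated */
		return res;
	/* ... poll the drive, then release via process_fd_request() ... */
	return 0;
}

int main(void)
{
	printf("revalidate: %d\n", revalidate(0));
	return 0;
}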
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index 8ba1e97..64a7b59 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -478,7 +478,7 @@
id->ver_id = 0x1;
id->vmnt = 0;
id->cgrps = 1;
- id->cap = 0x3;
+ id->cap = 0x2;
id->dom = 0x1;
id->ppaf.blk_offset = 0;
@@ -707,9 +707,7 @@
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, nullb->q);
queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, nullb->q);
-
mutex_lock(&lock);
- list_add_tail(&nullb->list, &nullb_list);
nullb->index = nullb_indexes++;
mutex_unlock(&lock);
@@ -743,6 +741,10 @@
strncpy(disk->disk_name, nullb->disk_name, DISK_NAME_LEN);
add_disk(disk);
+
+ mutex_lock(&lock);
+ list_add_tail(&nullb->list, &nullb_list);
+ mutex_unlock(&lock);
done:
return 0;
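Moving the list_add_tail() after add_disk() follows the publish-after-init rule: the device goes onto the globally visible list only once it is fully set up, so concurrent walkers of nullb_list never see a half-constructed entry. A single-threaded illustration of the ordering, with invented names:

#include <stdio.h>
#include <string.h>

struct dev {
	char name[16];
	int ready;
	struct dev *next;
};

static struct dev *dev_list;	/* models nullb_list */

static void publish(struct dev *d)
{
	d->next = dev_list;	/* only now can other paths find it */
	dev_list = d;
}

int main(void)
{
	static struct dev d;

	strcpy(d.name, "nullb0");	/* finish all setup first... */
	d.ready = 1;
	publish(&d);			/* ...then make it visible */

	printf("%s ready=%d\n", dev_list->name, dev_list->ready);
	return 0;
}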
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 8a8dc91..83eb9e6 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1873,6 +1873,43 @@
return err;
}
+static int negotiate_mq(struct blkfront_info *info)
+{
+ unsigned int backend_max_queues = 0;
+ int err;
+ unsigned int i;
+
+ BUG_ON(info->nr_rings);
+
+ /* Check if backend supports multiple queues. */
+ err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+ "multi-queue-max-queues", "%u", &backend_max_queues);
+ if (err < 0)
+ backend_max_queues = 1;
+
+ info->nr_rings = min(backend_max_queues, xen_blkif_max_queues);
+ /* We need at least one ring. */
+ if (!info->nr_rings)
+ info->nr_rings = 1;
+
+ info->rinfo = kzalloc(sizeof(struct blkfront_ring_info) * info->nr_rings, GFP_KERNEL);
+ if (!info->rinfo) {
+ xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < info->nr_rings; i++) {
+ struct blkfront_ring_info *rinfo;
+
+ rinfo = &info->rinfo[i];
+ INIT_LIST_HEAD(&rinfo->indirect_pages);
+ INIT_LIST_HEAD(&rinfo->grants);
+ rinfo->dev_info = info;
+ INIT_WORK(&rinfo->work, blkif_restart_queue);
+ spin_lock_init(&rinfo->ring_lock);
+ }
+ return 0;
+}
/**
* Entry point to this code when a new device is created. Allocate the basic
* structures and the ring buffer for communication with the backend, and
@@ -1883,9 +1920,7 @@
const struct xenbus_device_id *id)
{
int err, vdevice;
- unsigned int r_index;
struct blkfront_info *info;
- unsigned int backend_max_queues = 0;
/* FIXME: Use dynamic device id if this is not set. */
err = xenbus_scanf(XBT_NIL, dev->nodename,
@@ -1936,33 +1971,10 @@
}
info->xbdev = dev;
- /* Check if backend supports multiple queues. */
- err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
- "multi-queue-max-queues", "%u", &backend_max_queues);
- if (err < 0)
- backend_max_queues = 1;
-
- info->nr_rings = min(backend_max_queues, xen_blkif_max_queues);
- /* We need at least one ring. */
- if (!info->nr_rings)
- info->nr_rings = 1;
-
- info->rinfo = kzalloc(sizeof(struct blkfront_ring_info) * info->nr_rings, GFP_KERNEL);
- if (!info->rinfo) {
- xenbus_dev_fatal(dev, -ENOMEM, "allocating ring_info structure");
+ err = negotiate_mq(info);
+ if (err) {
kfree(info);
- return -ENOMEM;
- }
-
- for (r_index = 0; r_index < info->nr_rings; r_index++) {
- struct blkfront_ring_info *rinfo;
-
- rinfo = &info->rinfo[r_index];
- INIT_LIST_HEAD(&rinfo->indirect_pages);
- INIT_LIST_HEAD(&rinfo->grants);
- rinfo->dev_info = info;
- INIT_WORK(&rinfo->work, blkif_restart_queue);
- spin_lock_init(&rinfo->ring_lock);
+ return err;
}
mutex_init(&info->mutex);
@@ -2123,12 +2135,16 @@
static int blkfront_resume(struct xenbus_device *dev)
{
struct blkfront_info *info = dev_get_drvdata(&dev->dev);
- int err;
+ int err = 0;
dev_dbg(&dev->dev, "blkfront_resume: %s\n", dev->nodename);
blkif_free(info, info->connected == BLKIF_STATE_CONNECTED);
+ err = negotiate_mq(info);
+ if (err)
+ return err;
+
err = talk_to_blkback(dev, info);
/*
diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
index 240b6cf..be54e53 100644
--- a/drivers/char/hpet.c
+++ b/drivers/char/hpet.c
@@ -42,7 +42,7 @@
/*
* The High Precision Event Timer driver.
* This driver is closely modelled after the rtc.c driver.
- * http://www.intel.com/hardwaredesign/hpetspec_1.pdf
+ * See HPET spec revision 1.
*/
#define HPET_USER_FREQ (64)
#define HPET_DRIFT (500)
diff --git a/drivers/char/nvram.c b/drivers/char/nvram.c
index 0129232..678fa97 100644
--- a/drivers/char/nvram.c
+++ b/drivers/char/nvram.c
@@ -496,12 +496,12 @@
#ifdef CONFIG_PROC_FS
-static char *floppy_types[] = {
+static const char * const floppy_types[] = {
"none", "5.25'' 360k", "5.25'' 1.2M", "3.5'' 720k", "3.5'' 1.44M",
"3.5'' 2.88M", "3.5'' 2.88M"
};
-static char *gfx_types[] = {
+static const char * const gfx_types[] = {
"EGA, VGA, ... (with BIOS)",
"CGA (40 cols)",
"CGA (80 cols)",
@@ -602,7 +602,7 @@
static struct {
unsigned char val;
- char *name;
+ const char *name;
} boot_prefs[] = {
{ 0x80, "TOS" },
{ 0x40, "ASV" },
@@ -611,7 +611,7 @@
{ 0x00, "unspecified" }
};
-static char *languages[] = {
+static const char * const languages[] = {
"English (US)",
"German",
"French",
@@ -623,7 +623,7 @@
"Swiss (German)"
};
-static char *dateformat[] = {
+static const char * const dateformat[] = {
"MM%cDD%cYY",
"DD%cMM%cYY",
"YY%cMM%cDD",
@@ -634,7 +634,7 @@
"7 (undefined)"
};
-static char *colors[] = {
+static const char * const colors[] = {
"2", "4", "16", "256", "65536", "??", "??", "??"
};
diff --git a/drivers/char/nwbutton.c b/drivers/char/nwbutton.c
index 76c490f..0e18442 100644
--- a/drivers/char/nwbutton.c
+++ b/drivers/char/nwbutton.c
@@ -129,10 +129,9 @@
static void button_sequence_finished (unsigned long parameters)
{
-#ifdef CONFIG_NWBUTTON_REBOOT /* Reboot using button is enabled */
- if (button_press_count == reboot_count)
+ if (IS_ENABLED(CONFIG_NWBUTTON_REBOOT) &&
+ button_press_count == reboot_count)
kill_cad_pid(SIGINT, 1); /* Ask init to reboot us */
-#endif /* CONFIG_NWBUTTON_REBOOT */
button_consume_callbacks (button_press_count);
bcount = sprintf (button_output_buffer, "%d\n", button_press_count);
button_press_count = 0; /* Reset the button press counter */
diff --git a/drivers/char/ppdev.c b/drivers/char/ppdev.c
index ae0b42b..d233688 100644
--- a/drivers/char/ppdev.c
+++ b/drivers/char/ppdev.c
@@ -69,12 +69,13 @@
#include <linux/ppdev.h>
#include <linux/mutex.h>
#include <linux/uaccess.h>
+#include <linux/compat.h>
#define PP_VERSION "ppdev: user-space parallel port driver"
#define CHRDEV "ppdev"
struct pp_struct {
- struct pardevice * pdev;
+ struct pardevice *pdev;
wait_queue_head_t irq_wait;
atomic_t irqc;
unsigned int flags;
@@ -98,18 +99,26 @@
#define ROUND_UP(x,y) (((x)+(y)-1)/(y))
static DEFINE_MUTEX(pp_do_mutex);
-static inline void pp_enable_irq (struct pp_struct *pp)
+
+/* define fixed-size ioctl cmds for y2038 migration */
+#define PPGETTIME32 _IOR(PP_IOCTL, 0x95, s32[2])
+#define PPSETTIME32 _IOW(PP_IOCTL, 0x96, s32[2])
+#define PPGETTIME64 _IOR(PP_IOCTL, 0x95, s64[2])
+#define PPSETTIME64 _IOW(PP_IOCTL, 0x96, s64[2])
+
+static inline void pp_enable_irq(struct pp_struct *pp)
{
struct parport *port = pp->pdev->port;
- port->ops->enable_irq (port);
+
+ port->ops->enable_irq(port);
}
-static ssize_t pp_read (struct file * file, char __user * buf, size_t count,
- loff_t * ppos)
+static ssize_t pp_read(struct file *file, char __user *buf, size_t count,
+ loff_t *ppos)
{
unsigned int minor = iminor(file_inode(file));
struct pp_struct *pp = file->private_data;
- char * kbuffer;
+ char *kbuffer;
ssize_t bytes_read = 0;
struct parport *pport;
int mode;
@@ -125,16 +134,15 @@
return 0;
kbuffer = kmalloc(min_t(size_t, count, PP_BUFFER_SIZE), GFP_KERNEL);
- if (!kbuffer) {
+ if (!kbuffer)
return -ENOMEM;
- }
pport = pp->pdev->port;
mode = pport->ieee1284.mode & ~(IEEE1284_DEVICEID | IEEE1284_ADDR);
- parport_set_timeout (pp->pdev,
- (file->f_flags & O_NONBLOCK) ?
- PARPORT_INACTIVITY_O_NONBLOCK :
- pp->default_inactivity);
+ parport_set_timeout(pp->pdev,
+ (file->f_flags & O_NONBLOCK) ?
+ PARPORT_INACTIVITY_O_NONBLOCK :
+ pp->default_inactivity);
while (bytes_read == 0) {
ssize_t need = min_t(unsigned long, count, PP_BUFFER_SIZE);
@@ -144,20 +152,17 @@
int flags = 0;
size_t (*fn)(struct parport *, void *, size_t, int);
- if (pp->flags & PP_W91284PIC) {
+ if (pp->flags & PP_W91284PIC)
flags |= PARPORT_W91284PIC;
- }
- if (pp->flags & PP_FASTREAD) {
+ if (pp->flags & PP_FASTREAD)
flags |= PARPORT_EPP_FAST;
- }
- if (pport->ieee1284.mode & IEEE1284_ADDR) {
+ if (pport->ieee1284.mode & IEEE1284_ADDR)
fn = pport->ops->epp_read_addr;
- } else {
+ else
fn = pport->ops->epp_read_data;
- }
bytes_read = (*fn)(pport, kbuffer, need, flags);
} else {
- bytes_read = parport_read (pport, kbuffer, need);
+ bytes_read = parport_read(pport, kbuffer, need);
}
if (bytes_read != 0)
@@ -168,7 +173,7 @@
break;
}
- if (signal_pending (current)) {
+ if (signal_pending(current)) {
bytes_read = -ERESTARTSYS;
break;
}
@@ -176,22 +181,22 @@
cond_resched();
}
- parport_set_timeout (pp->pdev, pp->default_inactivity);
+ parport_set_timeout(pp->pdev, pp->default_inactivity);
- if (bytes_read > 0 && copy_to_user (buf, kbuffer, bytes_read))
+ if (bytes_read > 0 && copy_to_user(buf, kbuffer, bytes_read))
bytes_read = -EFAULT;
- kfree (kbuffer);
- pp_enable_irq (pp);
+ kfree(kbuffer);
+ pp_enable_irq(pp);
return bytes_read;
}
-static ssize_t pp_write (struct file * file, const char __user * buf,
- size_t count, loff_t * ppos)
+static ssize_t pp_write(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
{
unsigned int minor = iminor(file_inode(file));
struct pp_struct *pp = file->private_data;
- char * kbuffer;
+ char *kbuffer;
ssize_t bytes_written = 0;
ssize_t wrote;
int mode;
@@ -204,21 +209,21 @@
}
kbuffer = kmalloc(min_t(size_t, count, PP_BUFFER_SIZE), GFP_KERNEL);
- if (!kbuffer) {
+ if (!kbuffer)
return -ENOMEM;
- }
+
pport = pp->pdev->port;
mode = pport->ieee1284.mode & ~(IEEE1284_DEVICEID | IEEE1284_ADDR);
- parport_set_timeout (pp->pdev,
- (file->f_flags & O_NONBLOCK) ?
- PARPORT_INACTIVITY_O_NONBLOCK :
- pp->default_inactivity);
+ parport_set_timeout(pp->pdev,
+ (file->f_flags & O_NONBLOCK) ?
+ PARPORT_INACTIVITY_O_NONBLOCK :
+ pp->default_inactivity);
while (bytes_written < count) {
ssize_t n = min_t(unsigned long, count - bytes_written, PP_BUFFER_SIZE);
- if (copy_from_user (kbuffer, buf + bytes_written, n)) {
+ if (copy_from_user(kbuffer, buf + bytes_written, n)) {
bytes_written = -EFAULT;
break;
}
@@ -226,20 +231,19 @@
if ((pp->flags & PP_FASTWRITE) && (mode == IEEE1284_MODE_EPP)) {
/* do a fast EPP write */
if (pport->ieee1284.mode & IEEE1284_ADDR) {
- wrote = pport->ops->epp_write_addr (pport,
+ wrote = pport->ops->epp_write_addr(pport,
kbuffer, n, PARPORT_EPP_FAST);
} else {
- wrote = pport->ops->epp_write_data (pport,
+ wrote = pport->ops->epp_write_data(pport,
kbuffer, n, PARPORT_EPP_FAST);
}
} else {
- wrote = parport_write (pp->pdev->port, kbuffer, n);
+ wrote = parport_write(pp->pdev->port, kbuffer, n);
}
if (wrote <= 0) {
- if (!bytes_written) {
+ if (!bytes_written)
bytes_written = wrote;
- }
break;
}
@@ -251,67 +255,69 @@
break;
}
- if (signal_pending (current))
+ if (signal_pending(current))
break;
cond_resched();
}
- parport_set_timeout (pp->pdev, pp->default_inactivity);
+ parport_set_timeout(pp->pdev, pp->default_inactivity);
- kfree (kbuffer);
- pp_enable_irq (pp);
+ kfree(kbuffer);
+ pp_enable_irq(pp);
return bytes_written;
}
-static void pp_irq (void *private)
+static void pp_irq(void *private)
{
struct pp_struct *pp = private;
if (pp->irqresponse) {
- parport_write_control (pp->pdev->port, pp->irqctl);
+ parport_write_control(pp->pdev->port, pp->irqctl);
pp->irqresponse = 0;
}
- atomic_inc (&pp->irqc);
- wake_up_interruptible (&pp->irq_wait);
+ atomic_inc(&pp->irqc);
+ wake_up_interruptible(&pp->irq_wait);
}
-static int register_device (int minor, struct pp_struct *pp)
+static int register_device(int minor, struct pp_struct *pp)
{
struct parport *port;
- struct pardevice * pdev = NULL;
+ struct pardevice *pdev = NULL;
char *name;
- int fl;
+ struct pardev_cb ppdev_cb;
name = kasprintf(GFP_KERNEL, CHRDEV "%x", minor);
if (name == NULL)
return -ENOMEM;
- port = parport_find_number (minor);
+ port = parport_find_number(minor);
if (!port) {
- printk (KERN_WARNING "%s: no associated port!\n", name);
- kfree (name);
+ printk(KERN_WARNING "%s: no associated port!\n", name);
+ kfree(name);
return -ENXIO;
}
- fl = (pp->flags & PP_EXCL) ? PARPORT_FLAG_EXCL : 0;
- pdev = parport_register_device (port, name, NULL,
- NULL, pp_irq, fl, pp);
- parport_put_port (port);
+ memset(&ppdev_cb, 0, sizeof(ppdev_cb));
+ ppdev_cb.irq_func = pp_irq;
+ ppdev_cb.flags = (pp->flags & PP_EXCL) ? PARPORT_FLAG_EXCL : 0;
+ ppdev_cb.private = pp;
+ pdev = parport_register_dev_model(port, name, &ppdev_cb, minor);
+ parport_put_port(port);
if (!pdev) {
- printk (KERN_WARNING "%s: failed to register device!\n", name);
- kfree (name);
+ printk(KERN_WARNING "%s: failed to register device!\n", name);
+ kfree(name);
return -ENXIO;
}
pp->pdev = pdev;
- pr_debug("%s: registered pardevice\n", name);
+ dev_dbg(&pdev->dev, "registered pardevice\n");
return 0;
}
-static enum ieee1284_phase init_phase (int mode)
+static enum ieee1284_phase init_phase(int mode)
{
switch (mode & ~(IEEE1284_DEVICEID
| IEEE1284_ADDR)) {
@@ -322,11 +328,27 @@
return IEEE1284_PH_FWD_IDLE;
}
+static int pp_set_timeout(struct pardevice *pdev, long tv_sec, int tv_usec)
+{
+ long to_jiffies;
+
+ if ((tv_sec < 0) || (tv_usec < 0))
+ return -EINVAL;
+
+ to_jiffies = usecs_to_jiffies(tv_usec);
+ to_jiffies += tv_sec * HZ;
+ if (to_jiffies <= 0)
+ return -EINVAL;
+
+ pdev->timeout = to_jiffies;
+ return 0;
+}
+
static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
unsigned int minor = iminor(file_inode(file));
struct pp_struct *pp = file->private_data;
- struct parport * port;
+ struct parport *port;
void __user *argp = (void __user *)arg;
/* First handle the cases that don't take arguments. */
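[Note on the hunk above: the new pp_set_timeout() helper replaces the open-coded
ROUND_UP() arithmetic that PPSETTIME used. Microseconds go through
usecs_to_jiffies() and whole seconds contribute tv_sec * HZ, with negative and
non-positive results rejected. A worked example, assuming HZ=250 (not implied
by the patch):

	/* sketch: 1 s + 500000 us -> jiffies, as pp_set_timeout() computes */
	long j = usecs_to_jiffies(500000) + 1 * HZ;	/* 125 + 250 = 375 */
]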
@@ -337,19 +359,19 @@
int ret;
if (pp->flags & PP_CLAIMED) {
- pr_debug(CHRDEV "%x: you've already got it!\n", minor);
+ dev_dbg(&pp->pdev->dev, "you've already got it!\n");
return -EINVAL;
}
/* Deferred device registration. */
if (!pp->pdev) {
- int err = register_device (minor, pp);
- if (err) {
+ int err = register_device(minor, pp);
+
+ if (err)
return err;
- }
}
- ret = parport_claim_or_block (pp->pdev);
+ ret = parport_claim_or_block(pp->pdev);
if (ret < 0)
return ret;
@@ -357,7 +379,7 @@
/* For interrupt-reporting to work, we need to be
* informed of each interrupt. */
- pp_enable_irq (pp);
+ pp_enable_irq(pp);
/* We may need to fix up the state machine. */
info = &pp->pdev->port->ieee1284;
@@ -365,15 +387,15 @@
pp->saved_state.phase = info->phase;
info->mode = pp->state.mode;
info->phase = pp->state.phase;
- pp->default_inactivity = parport_set_timeout (pp->pdev, 0);
- parport_set_timeout (pp->pdev, pp->default_inactivity);
+ pp->default_inactivity = parport_set_timeout(pp->pdev, 0);
+ parport_set_timeout(pp->pdev, pp->default_inactivity);
return 0;
}
case PPEXCL:
if (pp->pdev) {
- pr_debug(CHRDEV "%x: too late for PPEXCL; "
- "already registered\n", minor);
+ dev_dbg(&pp->pdev->dev,
+ "too late for PPEXCL; already registered\n");
if (pp->flags & PP_EXCL)
/* But it's not really an error. */
return 0;
@@ -388,11 +410,12 @@
case PPSETMODE:
{
int mode;
- if (copy_from_user (&mode, argp, sizeof (mode)))
+
+ if (copy_from_user(&mode, argp, sizeof(mode)))
return -EFAULT;
/* FIXME: validate mode */
pp->state.mode = mode;
- pp->state.phase = init_phase (mode);
+ pp->state.phase = init_phase(mode);
if (pp->flags & PP_CLAIMED) {
pp->pdev->port->ieee1284.mode = mode;
@@ -405,28 +428,27 @@
{
int mode;
- if (pp->flags & PP_CLAIMED) {
+ if (pp->flags & PP_CLAIMED)
mode = pp->pdev->port->ieee1284.mode;
- } else {
+ else
mode = pp->state.mode;
- }
- if (copy_to_user (argp, &mode, sizeof (mode))) {
+
+ if (copy_to_user(argp, &mode, sizeof(mode)))
return -EFAULT;
- }
return 0;
}
case PPSETPHASE:
{
int phase;
- if (copy_from_user (&phase, argp, sizeof (phase))) {
+
+ if (copy_from_user(&phase, argp, sizeof(phase)))
return -EFAULT;
- }
+
/* FIXME: validate phase */
pp->state.phase = phase;
- if (pp->flags & PP_CLAIMED) {
+ if (pp->flags & PP_CLAIMED)
pp->pdev->port->ieee1284.phase = phase;
- }
return 0;
}
@@ -434,38 +456,34 @@
{
int phase;
- if (pp->flags & PP_CLAIMED) {
+ if (pp->flags & PP_CLAIMED)
phase = pp->pdev->port->ieee1284.phase;
- } else {
+ else
phase = pp->state.phase;
- }
- if (copy_to_user (argp, &phase, sizeof (phase))) {
+ if (copy_to_user(argp, &phase, sizeof(phase)))
return -EFAULT;
- }
return 0;
}
case PPGETMODES:
{
unsigned int modes;
- port = parport_find_number (minor);
+ port = parport_find_number(minor);
if (!port)
return -ENODEV;
modes = port->modes;
parport_put_port(port);
- if (copy_to_user (argp, &modes, sizeof (modes))) {
+ if (copy_to_user(argp, &modes, sizeof(modes)))
return -EFAULT;
- }
return 0;
}
case PPSETFLAGS:
{
int uflags;
- if (copy_from_user (&uflags, argp, sizeof (uflags))) {
+ if (copy_from_user(&uflags, argp, sizeof(uflags)))
return -EFAULT;
- }
pp->flags &= ~PP_FLAGMASK;
pp->flags |= (uflags & PP_FLAGMASK);
return 0;
@@ -475,9 +493,8 @@
int uflags;
uflags = pp->flags & PP_FLAGMASK;
- if (copy_to_user (argp, &uflags, sizeof (uflags))) {
+ if (copy_to_user(argp, &uflags, sizeof(uflags)))
return -EFAULT;
- }
return 0;
}
} /* end switch() */
@@ -495,27 +512,28 @@
unsigned char reg;
unsigned char mask;
int mode;
+ s32 time32[2];
+ s64 time64[2];
+ struct timespec64 ts;
int ret;
- struct timeval par_timeout;
- long to_jiffies;
case PPRSTATUS:
- reg = parport_read_status (port);
- if (copy_to_user (argp, &reg, sizeof (reg)))
+ reg = parport_read_status(port);
+ if (copy_to_user(argp, &reg, sizeof(reg)))
return -EFAULT;
return 0;
case PPRDATA:
- reg = parport_read_data (port);
- if (copy_to_user (argp, &reg, sizeof (reg)))
+ reg = parport_read_data(port);
+ if (copy_to_user(argp, &reg, sizeof(reg)))
return -EFAULT;
return 0;
case PPRCONTROL:
- reg = parport_read_control (port);
- if (copy_to_user (argp, &reg, sizeof (reg)))
+ reg = parport_read_control(port);
+ if (copy_to_user(argp, &reg, sizeof(reg)))
return -EFAULT;
return 0;
case PPYIELD:
- parport_yield_blocking (pp->pdev);
+ parport_yield_blocking(pp->pdev);
return 0;
case PPRELEASE:
@@ -525,45 +543,45 @@
pp->state.phase = info->phase;
info->mode = pp->saved_state.mode;
info->phase = pp->saved_state.phase;
- parport_release (pp->pdev);
+ parport_release(pp->pdev);
pp->flags &= ~PP_CLAIMED;
return 0;
case PPWCONTROL:
- if (copy_from_user (&reg, argp, sizeof (reg)))
+ if (copy_from_user(&reg, argp, sizeof(reg)))
return -EFAULT;
- parport_write_control (port, reg);
+ parport_write_control(port, reg);
return 0;
case PPWDATA:
- if (copy_from_user (&reg, argp, sizeof (reg)))
+ if (copy_from_user(&reg, argp, sizeof(reg)))
return -EFAULT;
- parport_write_data (port, reg);
+ parport_write_data(port, reg);
return 0;
case PPFCONTROL:
- if (copy_from_user (&mask, argp,
- sizeof (mask)))
+ if (copy_from_user(&mask, argp,
+ sizeof(mask)))
return -EFAULT;
- if (copy_from_user (&reg, 1 + (unsigned char __user *) arg,
- sizeof (reg)))
+ if (copy_from_user(&reg, 1 + (unsigned char __user *) arg,
+ sizeof(reg)))
return -EFAULT;
- parport_frob_control (port, mask, reg);
+ parport_frob_control(port, mask, reg);
return 0;
case PPDATADIR:
- if (copy_from_user (&mode, argp, sizeof (mode)))
+ if (copy_from_user(&mode, argp, sizeof(mode)))
return -EFAULT;
if (mode)
- port->ops->data_reverse (port);
+ port->ops->data_reverse(port);
else
- port->ops->data_forward (port);
+ port->ops->data_forward(port);
return 0;
case PPNEGOT:
- if (copy_from_user (&mode, argp, sizeof (mode)))
+ if (copy_from_user(&mode, argp, sizeof(mode)))
return -EFAULT;
- switch ((ret = parport_negotiate (port, mode))) {
+ switch ((ret = parport_negotiate(port, mode))) {
case 0: break;
case -1: /* handshake failed, peripheral not IEEE 1284 */
ret = -EIO;
@@ -572,11 +590,11 @@
ret = -ENXIO;
break;
}
- pp_enable_irq (pp);
+ pp_enable_irq(pp);
return ret;
case PPWCTLONIRQ:
- if (copy_from_user (&reg, argp, sizeof (reg)))
+ if (copy_from_user(&reg, argp, sizeof(reg)))
return -EFAULT;
/* Remember what to set the control lines to, for next
@@ -586,39 +604,50 @@
return 0;
case PPCLRIRQ:
- ret = atomic_read (&pp->irqc);
- if (copy_to_user (argp, &ret, sizeof (ret)))
+ ret = atomic_read(&pp->irqc);
+ if (copy_to_user(argp, &ret, sizeof(ret)))
return -EFAULT;
- atomic_sub (ret, &pp->irqc);
+ atomic_sub(ret, &pp->irqc);
return 0;
- case PPSETTIME:
- if (copy_from_user (&par_timeout, argp, sizeof(struct timeval))) {
+ case PPSETTIME32:
+ if (copy_from_user(time32, argp, sizeof(time32)))
return -EFAULT;
- }
- /* Convert to jiffies, place in pp->pdev->timeout */
- if ((par_timeout.tv_sec < 0) || (par_timeout.tv_usec < 0)) {
+
+ return pp_set_timeout(pp->pdev, time32[0], time32[1]);
+
+ case PPSETTIME64:
+ if (copy_from_user(time64, argp, sizeof(time64)))
+ return -EFAULT;
+
+ return pp_set_timeout(pp->pdev, time64[0], time64[1]);
+
+ case PPGETTIME32:
+ jiffies_to_timespec64(pp->pdev->timeout, &ts);
+ time32[0] = ts.tv_sec;
+ time32[1] = ts.tv_nsec / NSEC_PER_USEC;
+ if ((time32[0] < 0) || (time32[1] < 0))
return -EINVAL;
- }
- to_jiffies = ROUND_UP(par_timeout.tv_usec, 1000000/HZ);
- to_jiffies += par_timeout.tv_sec * (long)HZ;
- if (to_jiffies <= 0) {
- return -EINVAL;
- }
- pp->pdev->timeout = to_jiffies;
+
+ if (copy_to_user(argp, time32, sizeof(time32)))
+ return -EFAULT;
+
return 0;
- case PPGETTIME:
- to_jiffies = pp->pdev->timeout;
- memset(&par_timeout, 0, sizeof(par_timeout));
- par_timeout.tv_sec = to_jiffies / HZ;
- par_timeout.tv_usec = (to_jiffies % (long)HZ) * (1000000/HZ);
- if (copy_to_user (argp, &par_timeout, sizeof(struct timeval)))
+ case PPGETTIME64:
+ jiffies_to_timespec64(pp->pdev->timeout, &ts);
+ time64[0] = ts.tv_sec;
+ time64[1] = ts.tv_nsec / NSEC_PER_USEC;
+ if ((time64[0] < 0) || (time64[1] < 0))
+ return -EINVAL;
+
+ if (copy_to_user(argp, time64, sizeof(time64)))
return -EFAULT;
+
return 0;
default:
- pr_debug(CHRDEV "%x: What? (cmd=0x%x)\n", minor, cmd);
+ dev_dbg(&pp->pdev->dev, "What? (cmd=0x%x)\n", cmd);
return -EINVAL;
}
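[Note on the timeout ioctls above: the old PPSETTIME/PPGETTIME pair passed a
struct timeval, which is not year-2038 safe and differs between 32- and 64-bit
ABIs. The replacements use plain two-element arrays (s32 or s64 seconds and
microseconds) converted through timespec64, so both word sizes share one code
path. A hypothetical userspace sketch of the 64-bit variant, assuming
PPSETTIME64 is exported via <linux/ppdev.h> once this lands:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/ppdev.h>

	/* Set a 1.5 s inactivity timeout on an already-open ppdev fd. */
	static int set_pp_timeout(int fd)
	{
		int64_t t[2] = { 1, 500000 };	/* { seconds, microseconds } */

		return ioctl(fd, PPSETTIME64, t);
	}
]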
@@ -629,13 +658,22 @@
static long pp_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
long ret;
+
mutex_lock(&pp_do_mutex);
ret = pp_do_ioctl(file, cmd, arg);
mutex_unlock(&pp_do_mutex);
return ret;
}
-static int pp_open (struct inode * inode, struct file * file)
+#ifdef CONFIG_COMPAT
+static long pp_compat_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ return pp_ioctl(file, cmd, (unsigned long)compat_ptr(arg));
+}
+#endif
+
+static int pp_open(struct inode *inode, struct file *file)
{
unsigned int minor = iminor(inode);
struct pp_struct *pp;
@@ -643,16 +681,16 @@
if (minor >= PARPORT_MAX)
return -ENXIO;
- pp = kmalloc (sizeof (struct pp_struct), GFP_KERNEL);
+ pp = kmalloc(sizeof(struct pp_struct), GFP_KERNEL);
if (!pp)
return -ENOMEM;
pp->state.mode = IEEE1284_MODE_COMPAT;
- pp->state.phase = init_phase (pp->state.mode);
+ pp->state.phase = init_phase(pp->state.mode);
pp->flags = 0;
pp->irqresponse = 0;
- atomic_set (&pp->irqc, 0);
- init_waitqueue_head (&pp->irq_wait);
+ atomic_set(&pp->irqc, 0);
+ init_waitqueue_head(&pp->irq_wait);
/* Defer the actual device registration until the first claim.
* That way, we know whether or not the driver wants to have
@@ -664,7 +702,7 @@
return 0;
}
-static int pp_release (struct inode * inode, struct file * file)
+static int pp_release(struct inode *inode, struct file *file)
{
unsigned int minor = iminor(inode);
struct pp_struct *pp = file->private_data;
@@ -673,10 +711,10 @@
compat_negot = 0;
if (!(pp->flags & PP_CLAIMED) && pp->pdev &&
(pp->state.mode != IEEE1284_MODE_COMPAT)) {
- struct ieee1284_info *info;
+ struct ieee1284_info *info;
/* parport released, but not in compatibility mode */
- parport_claim_or_block (pp->pdev);
+ parport_claim_or_block(pp->pdev);
pp->flags |= PP_CLAIMED;
info = &pp->pdev->port->ieee1284;
pp->saved_state.mode = info->mode;
@@ -689,9 +727,9 @@
compat_negot = 2;
}
if (compat_negot) {
- parport_negotiate (pp->pdev->port, IEEE1284_MODE_COMPAT);
- pr_debug(CHRDEV "%x: negotiated back to compatibility "
- "mode because user-space forgot\n", minor);
+ parport_negotiate(pp->pdev->port, IEEE1284_MODE_COMPAT);
+ dev_dbg(&pp->pdev->dev,
+ "negotiated back to compatibility mode because user-space forgot\n");
}
if (pp->flags & PP_CLAIMED) {
@@ -702,7 +740,7 @@
pp->state.phase = info->phase;
info->mode = pp->saved_state.mode;
info->phase = pp->saved_state.phase;
- parport_release (pp->pdev);
+ parport_release(pp->pdev);
if (compat_negot != 1) {
pr_debug(CHRDEV "%x: released pardevice "
"because user-space forgot\n", minor);
@@ -711,25 +749,26 @@
if (pp->pdev) {
const char *name = pp->pdev->name;
- parport_unregister_device (pp->pdev);
- kfree (name);
+
+ parport_unregister_device(pp->pdev);
+ kfree(name);
pp->pdev = NULL;
pr_debug(CHRDEV "%x: unregistered pardevice\n", minor);
}
- kfree (pp);
+ kfree(pp);
return 0;
}
/* No kernel lock held - fine */
-static unsigned int pp_poll (struct file * file, poll_table * wait)
+static unsigned int pp_poll(struct file *file, poll_table *wait)
{
struct pp_struct *pp = file->private_data;
unsigned int mask = 0;
- poll_wait (file, &pp->irq_wait, wait);
- if (atomic_read (&pp->irqc))
+ poll_wait(file, &pp->irq_wait, wait);
+ if (atomic_read(&pp->irqc))
mask |= POLLIN | POLLRDNORM;
return mask;
@@ -744,6 +783,9 @@
.write = pp_write,
.poll = pp_poll,
.unlocked_ioctl = pp_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = pp_compat_ioctl,
+#endif
.open = pp_open,
.release = pp_release,
};
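[Note on the compat path wired up above: with the timeout ioctls now using
fixed-width arrays, every ppdev ioctl argument has the same layout on 32- and
64-bit kernels, so pp_compat_ioctl() only needs to widen the user pointer.
This is the standard pattern, sketched generically (the foo_* names are
placeholders, not from the patch):

	#ifdef CONFIG_COMPAT
	static long foo_compat_ioctl(struct file *file, unsigned int cmd,
				     unsigned long arg)
	{
		/* 32-bit userspace hands us a 32-bit pointer; compat_ptr()
		 * widens it before the native handler dereferences it. */
		return foo_ioctl(file, cmd, (unsigned long)compat_ptr(arg));
	}
	#endif
]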
@@ -759,19 +801,32 @@
device_destroy(ppdev_class, MKDEV(PP_MAJOR, port->number));
}
+static int pp_probe(struct pardevice *par_dev)
+{
+ struct device_driver *drv = par_dev->dev.driver;
+ int len = strlen(drv->name);
+
+ if (strncmp(par_dev->name, drv->name, len))
+ return -ENODEV;
+
+ return 0;
+}
+
static struct parport_driver pp_driver = {
.name = CHRDEV,
- .attach = pp_attach,
+ .probe = pp_probe,
+ .match_port = pp_attach,
.detach = pp_detach,
+ .devmodel = true,
};
-static int __init ppdev_init (void)
+static int __init ppdev_init(void)
{
int err = 0;
- if (register_chrdev (PP_MAJOR, CHRDEV, &pp_fops)) {
- printk (KERN_WARNING CHRDEV ": unable to get major %d\n",
- PP_MAJOR);
+ if (register_chrdev(PP_MAJOR, CHRDEV, &pp_fops)) {
+ printk(KERN_WARNING CHRDEV ": unable to get major %d\n",
+ PP_MAJOR);
return -EIO;
}
ppdev_class = class_create(THIS_MODULE, CHRDEV);
@@ -781,11 +836,11 @@
}
err = parport_register_driver(&pp_driver);
if (err < 0) {
- printk (KERN_WARNING CHRDEV ": unable to register with parport\n");
+ printk(KERN_WARNING CHRDEV ": unable to register with parport\n");
goto out_class;
}
- printk (KERN_INFO PP_VERSION "\n");
+ printk(KERN_INFO PP_VERSION "\n");
goto out;
out_class:
@@ -796,12 +851,12 @@
return err;
}
-static void __exit ppdev_cleanup (void)
+static void __exit ppdev_cleanup(void)
{
/* Clean up all parport stuff */
parport_unregister_driver(&pp_driver);
class_destroy(ppdev_class);
- unregister_chrdev (PP_MAJOR, CHRDEV);
+ unregister_chrdev(PP_MAJOR, CHRDEV);
}
module_init(ppdev_init);
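[Note on the ppdev conversion as a whole: the other half is the parport device
model. register_device() now fills a struct pardev_cb for
parport_register_dev_model() instead of passing bare callback arguments, and
pp_driver grows .probe/.match_port and sets .devmodel so the parport core can
bind it like any other bus driver. A minimal registration sketch against the
new interface (names hypothetical):

	static struct pardevice *foo_register(struct parport *port, void *priv)
	{
		struct pardev_cb cb;

		memset(&cb, 0, sizeof(cb));	/* unused callbacks stay NULL */
		cb.irq_func = foo_irq;		/* hypothetical IRQ handler */
		cb.private = priv;

		/* the last argument is the device number on this port */
		return parport_register_dev_model(port, "foo", &cb, 0);
	}
]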
diff --git a/drivers/char/random.c b/drivers/char/random.c
index d0da5d8..b583e53 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1819,6 +1819,28 @@
EXPORT_SYMBOL(get_random_int);
/*
+ * Same as get_random_int(), but returns unsigned long.
+ */
+unsigned long get_random_long(void)
+{
+ __u32 *hash;
+ unsigned long ret;
+
+ if (arch_get_random_long(&ret))
+ return ret;
+
+ hash = get_cpu_var(get_random_int_hash);
+
+ hash[0] += current->pid + jiffies + random_get_entropy();
+ md5_transform(hash, random_int_secret);
+ ret = *(unsigned long *)hash;
+ put_cpu_var(get_random_int_hash);
+
+ return ret;
+}
+EXPORT_SYMBOL(get_random_long);
+
+/*
* randomize_range() returns a start address such that
*
* [...... <range> .....]
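[Note on the hunk above: get_random_long() mirrors get_random_int() exactly --
prefer the architectural RNG via arch_get_random_long(), otherwise stir the
per-CPU MD5 state and read a full unsigned long from it. It exists so
pointer-sized consumers (typically address-space randomization) do not
truncate to 32 bits on 64-bit kernels. A hedged sketch of such a caller, not
taken from this patch, assuming range is at least one page:

	/* Pick a page-aligned random offset strictly below "range" bytes. */
	static unsigned long random_page_offset(unsigned long range)
	{
		return (get_random_long() % (range >> PAGE_SHIFT)) << PAGE_SHIFT;
	}
]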
diff --git a/drivers/char/raw.c b/drivers/char/raw.c
index 9b9809b..e83b2ad 100644
--- a/drivers/char/raw.c
+++ b/drivers/char/raw.c
@@ -334,10 +334,8 @@
cdev_init(&raw_cdev, &raw_fops);
ret = cdev_add(&raw_cdev, dev, max_raw_minors);
- if (ret) {
+ if (ret)
goto error_region;
- }
-
raw_class = class_create(THIS_MODULE, "raw");
if (IS_ERR(raw_class)) {
printk(KERN_ERR "Error creating raw class.\n");
diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
index b038e36..bae4be6 100644
--- a/drivers/clk/Makefile
+++ b/drivers/clk/Makefile
@@ -43,7 +43,7 @@
obj-$(CONFIG_COMMON_CLK_SI570) += clk-si570.o
obj-$(CONFIG_COMMON_CLK_CDCE925) += clk-cdce925.o
obj-$(CONFIG_ARCH_STM32) += clk-stm32f4.o
-obj-$(CONFIG_ARCH_TANGOX) += clk-tango4.o
+obj-$(CONFIG_ARCH_TANGO) += clk-tango4.o
obj-$(CONFIG_CLK_TWL6040) += clk-twl6040.o
obj-$(CONFIG_ARCH_U300) += clk-u300.o
obj-$(CONFIG_ARCH_VT8500) += clk-vt8500.o
diff --git a/drivers/clk/clk-gpio.c b/drivers/clk/clk-gpio.c
index 19fed65..7b09a26 100644
--- a/drivers/clk/clk-gpio.c
+++ b/drivers/clk/clk-gpio.c
@@ -289,7 +289,7 @@
num_parents = of_clk_get_parent_count(node);
if (num_parents < 0)
- return;
+ num_parents = 0;
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data)
diff --git a/drivers/clk/clk-scpi.c b/drivers/clk/clk-scpi.c
index cd0f272..89e9ca7 100644
--- a/drivers/clk/clk-scpi.c
+++ b/drivers/clk/clk-scpi.c
@@ -299,7 +299,7 @@
/* Add the virtual cpufreq device */
cpufreq_dev = platform_device_register_simple("scpi-cpufreq",
-1, NULL, 0);
- if (!cpufreq_dev)
+ if (IS_ERR(cpufreq_dev))
pr_warn("unable to register cpufreq device");
return 0;
diff --git a/drivers/clk/mvebu/dove-divider.c b/drivers/clk/mvebu/dove-divider.c
index d5c5bfa..3e0b52d 100644
--- a/drivers/clk/mvebu/dove-divider.c
+++ b/drivers/clk/mvebu/dove-divider.c
@@ -247,7 +247,7 @@
void __init dove_divider_clk_init(struct device_node *np)
{
- void *base;
+ void __iomem *base;
base = of_iomap(np, 0);
if (WARN_ON(!base))
diff --git a/drivers/clk/qcom/gcc-apq8084.c b/drivers/clk/qcom/gcc-apq8084.c
index cf73e53..070037a 100644
--- a/drivers/clk/qcom/gcc-apq8084.c
+++ b/drivers/clk/qcom/gcc-apq8084.c
@@ -3587,7 +3587,6 @@
.val_bits = 32,
.max_register = 0x1fc0,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc gcc_apq8084_desc = {
diff --git a/drivers/clk/qcom/gcc-ipq806x.c b/drivers/clk/qcom/gcc-ipq806x.c
index b692ae8..dd5402b 100644
--- a/drivers/clk/qcom/gcc-ipq806x.c
+++ b/drivers/clk/qcom/gcc-ipq806x.c
@@ -3005,7 +3005,6 @@
.val_bits = 32,
.max_register = 0x3e40,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc gcc_ipq806x_desc = {
diff --git a/drivers/clk/qcom/gcc-msm8660.c b/drivers/clk/qcom/gcc-msm8660.c
index f6a2b14..ad41303 100644
--- a/drivers/clk/qcom/gcc-msm8660.c
+++ b/drivers/clk/qcom/gcc-msm8660.c
@@ -2702,7 +2702,6 @@
.val_bits = 32,
.max_register = 0x363c,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc gcc_msm8660_desc = {
diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
index e3bf09d..8cc9b28 100644
--- a/drivers/clk/qcom/gcc-msm8916.c
+++ b/drivers/clk/qcom/gcc-msm8916.c
@@ -3336,7 +3336,6 @@
.val_bits = 32,
.max_register = 0x80000,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc gcc_msm8916_desc = {
diff --git a/drivers/clk/qcom/gcc-msm8960.c b/drivers/clk/qcom/gcc-msm8960.c
index f31111e..983dd7d 100644
--- a/drivers/clk/qcom/gcc-msm8960.c
+++ b/drivers/clk/qcom/gcc-msm8960.c
@@ -3468,7 +3468,6 @@
.val_bits = 32,
.max_register = 0x3660,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct regmap_config gcc_apq8064_regmap_config = {
@@ -3477,7 +3476,6 @@
.val_bits = 32,
.max_register = 0x3880,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc gcc_msm8960_desc = {
diff --git a/drivers/clk/qcom/gcc-msm8974.c b/drivers/clk/qcom/gcc-msm8974.c
index df164d6..335952d 100644
--- a/drivers/clk/qcom/gcc-msm8974.c
+++ b/drivers/clk/qcom/gcc-msm8974.c
@@ -2680,7 +2680,6 @@
.val_bits = 32,
.max_register = 0x1fc0,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc gcc_msm8974_desc = {
diff --git a/drivers/clk/qcom/lcc-ipq806x.c b/drivers/clk/qcom/lcc-ipq806x.c
index 62e79fa..db3998e 100644
--- a/drivers/clk/qcom/lcc-ipq806x.c
+++ b/drivers/clk/qcom/lcc-ipq806x.c
@@ -419,7 +419,6 @@
.val_bits = 32,
.max_register = 0xfc,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc lcc_ipq806x_desc = {
diff --git a/drivers/clk/qcom/lcc-msm8960.c b/drivers/clk/qcom/lcc-msm8960.c
index bf95bb0..4fcf9d1 100644
--- a/drivers/clk/qcom/lcc-msm8960.c
+++ b/drivers/clk/qcom/lcc-msm8960.c
@@ -524,7 +524,6 @@
.val_bits = 32,
.max_register = 0xfc,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc lcc_msm8960_desc = {
diff --git a/drivers/clk/qcom/mmcc-apq8084.c b/drivers/clk/qcom/mmcc-apq8084.c
index 1e703fd..30777f9 100644
--- a/drivers/clk/qcom/mmcc-apq8084.c
+++ b/drivers/clk/qcom/mmcc-apq8084.c
@@ -3368,7 +3368,6 @@
.val_bits = 32,
.max_register = 0x5104,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc mmcc_apq8084_desc = {
diff --git a/drivers/clk/qcom/mmcc-msm8960.c b/drivers/clk/qcom/mmcc-msm8960.c
index d73a048..00e3619 100644
--- a/drivers/clk/qcom/mmcc-msm8960.c
+++ b/drivers/clk/qcom/mmcc-msm8960.c
@@ -3029,7 +3029,6 @@
.val_bits = 32,
.max_register = 0x334,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct regmap_config mmcc_apq8064_regmap_config = {
@@ -3038,7 +3037,6 @@
.val_bits = 32,
.max_register = 0x350,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc mmcc_msm8960_desc = {
diff --git a/drivers/clk/qcom/mmcc-msm8974.c b/drivers/clk/qcom/mmcc-msm8974.c
index bbe28ed..9d790bc 100644
--- a/drivers/clk/qcom/mmcc-msm8974.c
+++ b/drivers/clk/qcom/mmcc-msm8974.c
@@ -2594,7 +2594,6 @@
.val_bits = 32,
.max_register = 0x5104,
.fast_io = true,
- .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static const struct qcom_cc_desc mmcc_msm8974_desc = {
diff --git a/drivers/clk/rockchip/clk-rk3036.c b/drivers/clk/rockchip/clk-rk3036.c
index ebce980..bc7fbac 100644
--- a/drivers/clk/rockchip/clk-rk3036.c
+++ b/drivers/clk/rockchip/clk-rk3036.c
@@ -133,7 +133,7 @@
PNAME(mux_uart0_p) = { "uart0_src", "uart0_frac", "xin24m" };
PNAME(mux_uart1_p) = { "uart1_src", "uart1_frac", "xin24m" };
PNAME(mux_uart2_p) = { "uart2_src", "uart2_frac", "xin24m" };
-PNAME(mux_mac_p) = { "mac_pll_src", "ext_gmac" };
+PNAME(mux_mac_p) = { "mac_pll_src", "rmii_clkin" };
PNAME(mux_dclk_p) = { "dclk_lcdc", "dclk_cru" };
static struct rockchip_pll_clock rk3036_pll_clks[] __initdata = {
@@ -224,16 +224,16 @@
RK2928_CLKGATE_CON(2), 2, GFLAGS),
COMPOSITE_NODIV(SCLK_TIMER0, "sclk_timer0", mux_timer_p, CLK_IGNORE_UNUSED,
- RK2928_CLKSEL_CON(2), 4, 1, DFLAGS,
+ RK2928_CLKSEL_CON(2), 4, 1, MFLAGS,
RK2928_CLKGATE_CON(1), 0, GFLAGS),
COMPOSITE_NODIV(SCLK_TIMER1, "sclk_timer1", mux_timer_p, CLK_IGNORE_UNUSED,
- RK2928_CLKSEL_CON(2), 5, 1, DFLAGS,
+ RK2928_CLKSEL_CON(2), 5, 1, MFLAGS,
RK2928_CLKGATE_CON(1), 1, GFLAGS),
COMPOSITE_NODIV(SCLK_TIMER2, "sclk_timer2", mux_timer_p, CLK_IGNORE_UNUSED,
- RK2928_CLKSEL_CON(2), 6, 1, DFLAGS,
+ RK2928_CLKSEL_CON(2), 6, 1, MFLAGS,
RK2928_CLKGATE_CON(2), 4, GFLAGS),
COMPOSITE_NODIV(SCLK_TIMER3, "sclk_timer3", mux_timer_p, CLK_IGNORE_UNUSED,
- RK2928_CLKSEL_CON(2), 7, 1, DFLAGS,
+ RK2928_CLKSEL_CON(2), 7, 1, MFLAGS,
RK2928_CLKGATE_CON(2), 5, GFLAGS),
MUX(0, "uart_pll_clk", mux_pll_src_apll_dpll_gpll_usb480m_p, 0,
@@ -242,11 +242,11 @@
RK2928_CLKSEL_CON(13), 0, 7, DFLAGS,
RK2928_CLKGATE_CON(1), 8, GFLAGS),
COMPOSITE_NOMUX(0, "uart1_src", "uart_pll_clk", 0,
- RK2928_CLKSEL_CON(13), 0, 7, DFLAGS,
- RK2928_CLKGATE_CON(1), 8, GFLAGS),
+ RK2928_CLKSEL_CON(14), 0, 7, DFLAGS,
+ RK2928_CLKGATE_CON(1), 10, GFLAGS),
COMPOSITE_NOMUX(0, "uart2_src", "uart_pll_clk", 0,
- RK2928_CLKSEL_CON(13), 0, 7, DFLAGS,
- RK2928_CLKGATE_CON(1), 8, GFLAGS),
+ RK2928_CLKSEL_CON(15), 0, 7, DFLAGS,
+ RK2928_CLKGATE_CON(1), 12, GFLAGS),
COMPOSITE_FRACMUX(0, "uart0_frac", "uart0_src", CLK_SET_RATE_PARENT,
RK2928_CLKSEL_CON(17), 0,
RK2928_CLKGATE_CON(1), 9, GFLAGS,
@@ -279,13 +279,13 @@
RK2928_CLKGATE_CON(3), 2, GFLAGS),
COMPOSITE_NODIV(0, "sclk_sdmmc_src", mux_mmc_src_p, 0,
- RK2928_CLKSEL_CON(12), 8, 2, DFLAGS,
+ RK2928_CLKSEL_CON(12), 8, 2, MFLAGS,
RK2928_CLKGATE_CON(2), 11, GFLAGS),
DIV(SCLK_SDMMC, "sclk_sdmmc", "sclk_sdmmc_src", 0,
RK2928_CLKSEL_CON(11), 0, 7, DFLAGS),
COMPOSITE_NODIV(0, "sclk_sdio_src", mux_mmc_src_p, 0,
- RK2928_CLKSEL_CON(12), 10, 2, DFLAGS,
+ RK2928_CLKSEL_CON(12), 10, 2, MFLAGS,
RK2928_CLKGATE_CON(2), 13, GFLAGS),
DIV(SCLK_SDIO, "sclk_sdio", "sclk_sdio_src", 0,
RK2928_CLKSEL_CON(11), 8, 7, DFLAGS),
@@ -344,12 +344,12 @@
RK2928_CLKGATE_CON(10), 5, GFLAGS),
COMPOSITE_NOGATE(0, "mac_pll_src", mux_pll_src_3plls_p, 0,
- RK2928_CLKSEL_CON(21), 0, 2, MFLAGS, 4, 5, DFLAGS),
+ RK2928_CLKSEL_CON(21), 0, 2, MFLAGS, 9, 5, DFLAGS),
MUX(SCLK_MACREF, "mac_clk_ref", mux_mac_p, CLK_SET_RATE_PARENT,
RK2928_CLKSEL_CON(21), 3, 1, MFLAGS),
COMPOSITE_NOMUX(SCLK_MAC, "mac_clk", "mac_clk_ref", 0,
- RK2928_CLKSEL_CON(21), 9, 5, DFLAGS,
+ RK2928_CLKSEL_CON(21), 4, 5, DFLAGS,
RK2928_CLKGATE_CON(2), 6, GFLAGS),
MUX(SCLK_HDMI, "dclk_hdmi", mux_dclk_p, 0,
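[Note on the rk3036 fixes above: they are mostly flag and register mix-ups. A
COMPOSITE_NODIV entry has no divider, so the 1- or 2-bit select field it
describes is a mux and must take MFLAGS rather than DFLAGS; uart1/uart2 had
been pointing at uart0's select and gate bits; and the MAC source/divider
fields in CLKSEL_CON(21) were swapped. For reference, a correct NODIV entry
looks like this (SCLK_FOO and mux_foo_p are placeholders):

	/* mux + gate, no divider -> the select field takes MFLAGS */
	COMPOSITE_NODIV(SCLK_FOO, "sclk_foo", mux_foo_p, 0,
			RK2928_CLKSEL_CON(2), 4, 1, MFLAGS,
			RK2928_CLKGATE_CON(1), 0, GFLAGS),
]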
diff --git a/drivers/clk/rockchip/clk-rk3368.c b/drivers/clk/rockchip/clk-rk3368.c
index be0ede5..21f3ea9 100644
--- a/drivers/clk/rockchip/clk-rk3368.c
+++ b/drivers/clk/rockchip/clk-rk3368.c
@@ -780,13 +780,13 @@
GATE(PCLK_TSADC, "pclk_tsadc", "pclk_peri", 0, RK3368_CLKGATE_CON(20), 0, GFLAGS),
/* pclk_pd_alive gates */
- GATE(PCLK_TIMER1, "pclk_timer1", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(14), 8, GFLAGS),
- GATE(PCLK_TIMER0, "pclk_timer0", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(14), 7, GFLAGS),
- GATE(0, "pclk_alive_niu", "pclk_pd_alive", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(14), 12, GFLAGS),
- GATE(PCLK_GRF, "pclk_grf", "pclk_pd_alive", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(14), 11, GFLAGS),
- GATE(PCLK_GPIO3, "pclk_gpio3", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(14), 3, GFLAGS),
- GATE(PCLK_GPIO2, "pclk_gpio2", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(14), 2, GFLAGS),
- GATE(PCLK_GPIO1, "pclk_gpio1", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(14), 1, GFLAGS),
+ GATE(PCLK_TIMER1, "pclk_timer1", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(22), 13, GFLAGS),
+ GATE(PCLK_TIMER0, "pclk_timer0", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(22), 12, GFLAGS),
+ GATE(0, "pclk_alive_niu", "pclk_pd_alive", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(22), 9, GFLAGS),
+ GATE(PCLK_GRF, "pclk_grf", "pclk_pd_alive", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(22), 8, GFLAGS),
+ GATE(PCLK_GPIO3, "pclk_gpio3", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(22), 3, GFLAGS),
+ GATE(PCLK_GPIO2, "pclk_gpio2", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(22), 2, GFLAGS),
+ GATE(PCLK_GPIO1, "pclk_gpio1", "pclk_pd_alive", 0, RK3368_CLKGATE_CON(22), 1, GFLAGS),
/*
* pclk_vio gates
@@ -796,12 +796,12 @@
GATE(0, "pclk_dphytx", "hclk_vio", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(14), 8, GFLAGS),
/* pclk_pd_pmu gates */
- GATE(PCLK_PMUGRF, "pclk_pmugrf", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(17), 0, GFLAGS),
- GATE(PCLK_GPIO0, "pclk_gpio0", "pclk_pd_pmu", 0, RK3368_CLKGATE_CON(17), 4, GFLAGS),
- GATE(PCLK_SGRF, "pclk_sgrf", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(17), 3, GFLAGS),
- GATE(0, "pclk_pmu_noc", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(17), 2, GFLAGS),
- GATE(0, "pclk_intmem1", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(17), 1, GFLAGS),
- GATE(PCLK_PMU, "pclk_pmu", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(17), 2, GFLAGS),
+ GATE(PCLK_PMUGRF, "pclk_pmugrf", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(23), 5, GFLAGS),
+ GATE(PCLK_GPIO0, "pclk_gpio0", "pclk_pd_pmu", 0, RK3368_CLKGATE_CON(23), 4, GFLAGS),
+ GATE(PCLK_SGRF, "pclk_sgrf", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(23), 3, GFLAGS),
+ GATE(0, "pclk_pmu_noc", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(23), 2, GFLAGS),
+ GATE(0, "pclk_intmem1", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(23), 1, GFLAGS),
+ GATE(PCLK_PMU, "pclk_pmu", "pclk_pd_pmu", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(23), 0, GFLAGS),
/* timer gates */
GATE(0, "sclk_timer15", "xin24m", CLK_IGNORE_UNUSED, RK3368_CLKGATE_CON(24), 11, GFLAGS),
diff --git a/drivers/clk/tegra/clk-emc.c b/drivers/clk/tegra/clk-emc.c
index e1fe8f3..74e7544 100644
--- a/drivers/clk/tegra/clk-emc.c
+++ b/drivers/clk/tegra/clk-emc.c
@@ -450,8 +450,10 @@
struct emc_timing *timing = tegra->timings + (i++);
err = load_one_timing_from_dt(tegra, timing, child);
- if (err)
+ if (err) {
+ of_node_put(child);
return err;
+ }
timing->ram_code = ram_code;
}
@@ -499,9 +501,9 @@
* fuses until the apbmisc driver is loaded.
*/
err = load_timings_from_dt(tegra, node, node_ram_code);
+ of_node_put(node);
if (err)
return ERR_PTR(err);
- of_node_put(node);
break;
}
diff --git a/drivers/clk/tegra/clk-id.h b/drivers/clk/tegra/clk-id.h
index 19ce073..62ea381 100644
--- a/drivers/clk/tegra/clk-id.h
+++ b/drivers/clk/tegra/clk-id.h
@@ -11,6 +11,7 @@
tegra_clk_afi,
tegra_clk_amx,
tegra_clk_amx1,
+ tegra_clk_apb2ape,
tegra_clk_apbdma,
tegra_clk_apbif,
tegra_clk_ape,
diff --git a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c
index a534bfa..6ac3f84 100644
--- a/drivers/clk/tegra/clk-pll.c
+++ b/drivers/clk/tegra/clk-pll.c
@@ -86,15 +86,21 @@
#define PLLE_SS_DISABLE (PLLE_SS_CNTL_BYPASS_SS | PLLE_SS_CNTL_INTERP_RESET |\
PLLE_SS_CNTL_SSC_BYP)
#define PLLE_SS_MAX_MASK 0x1ff
-#define PLLE_SS_MAX_VAL 0x25
+#define PLLE_SS_MAX_VAL_TEGRA114 0x25
+#define PLLE_SS_MAX_VAL_TEGRA210 0x21
#define PLLE_SS_INC_MASK (0xff << 16)
#define PLLE_SS_INC_VAL (0x1 << 16)
#define PLLE_SS_INCINTRV_MASK (0x3f << 24)
-#define PLLE_SS_INCINTRV_VAL (0x20 << 24)
+#define PLLE_SS_INCINTRV_VAL_TEGRA114 (0x20 << 24)
+#define PLLE_SS_INCINTRV_VAL_TEGRA210 (0x23 << 24)
#define PLLE_SS_COEFFICIENTS_MASK \
(PLLE_SS_MAX_MASK | PLLE_SS_INC_MASK | PLLE_SS_INCINTRV_MASK)
-#define PLLE_SS_COEFFICIENTS_VAL \
- (PLLE_SS_MAX_VAL | PLLE_SS_INC_VAL | PLLE_SS_INCINTRV_VAL)
+#define PLLE_SS_COEFFICIENTS_VAL_TEGRA114 \
+ (PLLE_SS_MAX_VAL_TEGRA114 | PLLE_SS_INC_VAL |\
+ PLLE_SS_INCINTRV_VAL_TEGRA114)
+#define PLLE_SS_COEFFICIENTS_VAL_TEGRA210 \
+ (PLLE_SS_MAX_VAL_TEGRA210 | PLLE_SS_INC_VAL |\
+ PLLE_SS_INCINTRV_VAL_TEGRA210)
#define PLLE_AUX_PLLP_SEL BIT(2)
#define PLLE_AUX_USE_LOCKDET BIT(3)
@@ -880,7 +886,7 @@
static int clk_plle_enable(struct clk_hw *hw)
{
struct tegra_clk_pll *pll = to_clk_pll(hw);
- unsigned long input_rate = clk_get_rate(clk_get_parent(hw->clk));
+ unsigned long input_rate = clk_hw_get_rate(clk_hw_get_parent(hw));
struct tegra_clk_pll_freq_table sel;
u32 val;
int err;
@@ -1378,7 +1384,7 @@
u32 val;
int ret;
unsigned long flags = 0;
- unsigned long input_rate = clk_get_rate(clk_get_parent(hw->clk));
+ unsigned long input_rate = clk_hw_get_rate(clk_hw_get_parent(hw));
if (_get_table_rate(hw, &sel, pll->params->fixed_rate, input_rate))
return -EINVAL;
@@ -1401,7 +1407,7 @@
val |= PLLE_MISC_IDDQ_SW_CTRL;
val &= ~PLLE_MISC_IDDQ_SW_VALUE;
val |= PLLE_MISC_PLLE_PTS;
- val |= PLLE_MISC_VREG_BG_CTRL_MASK | PLLE_MISC_VREG_CTRL_MASK;
+ val &= ~(PLLE_MISC_VREG_BG_CTRL_MASK | PLLE_MISC_VREG_CTRL_MASK);
pll_writel_misc(val, pll);
udelay(5);
@@ -1428,7 +1434,7 @@
val = pll_readl(PLLE_SS_CTRL, pll);
val &= ~(PLLE_SS_CNTL_CENTER | PLLE_SS_CNTL_INVERT);
val &= ~PLLE_SS_COEFFICIENTS_MASK;
- val |= PLLE_SS_COEFFICIENTS_VAL;
+ val |= PLLE_SS_COEFFICIENTS_VAL_TEGRA114;
pll_writel(val, PLLE_SS_CTRL, pll);
val &= ~(PLLE_SS_CNTL_SSC_BYP | PLLE_SS_CNTL_BYPASS_SS);
pll_writel(val, PLLE_SS_CTRL, pll);
@@ -2012,9 +2018,9 @@
struct tegra_clk_pll *pll = to_clk_pll(hw);
struct tegra_clk_pll_freq_table sel;
u32 val;
- int ret;
+ int ret = 0;
unsigned long flags = 0;
- unsigned long input_rate = clk_get_rate(clk_get_parent(hw->clk));
+ unsigned long input_rate = clk_hw_get_rate(clk_hw_get_parent(hw));
if (_get_table_rate(hw, &sel, pll->params->fixed_rate, input_rate))
return -EINVAL;
@@ -2022,22 +2028,20 @@
if (pll->lock)
spin_lock_irqsave(pll->lock, flags);
+ val = pll_readl(pll->params->aux_reg, pll);
+ if (val & PLLE_AUX_SEQ_ENABLE)
+ goto out;
+
val = pll_readl_base(pll);
val &= ~BIT(30); /* Disable lock override */
pll_writel_base(val, pll);
- val = pll_readl(pll->params->aux_reg, pll);
- val |= PLLE_AUX_ENABLE_SWCTL;
- val &= ~PLLE_AUX_SEQ_ENABLE;
- pll_writel(val, pll->params->aux_reg, pll);
- udelay(1);
-
val = pll_readl_misc(pll);
val |= PLLE_MISC_LOCK_ENABLE;
val |= PLLE_MISC_IDDQ_SW_CTRL;
val &= ~PLLE_MISC_IDDQ_SW_VALUE;
val |= PLLE_MISC_PLLE_PTS;
- val |= PLLE_MISC_VREG_BG_CTRL_MASK | PLLE_MISC_VREG_CTRL_MASK;
+ val &= ~(PLLE_MISC_VREG_BG_CTRL_MASK | PLLE_MISC_VREG_CTRL_MASK);
pll_writel_misc(val, pll);
udelay(5);
@@ -2067,7 +2071,7 @@
val = pll_readl(PLLE_SS_CTRL, pll);
val &= ~(PLLE_SS_CNTL_CENTER | PLLE_SS_CNTL_INVERT);
val &= ~PLLE_SS_COEFFICIENTS_MASK;
- val |= PLLE_SS_COEFFICIENTS_VAL;
+ val |= PLLE_SS_COEFFICIENTS_VAL_TEGRA210;
pll_writel(val, PLLE_SS_CTRL, pll);
val &= ~(PLLE_SS_CNTL_SSC_BYP | PLLE_SS_CNTL_BYPASS_SS);
pll_writel(val, PLLE_SS_CTRL, pll);
@@ -2104,15 +2108,25 @@
if (pll->lock)
spin_lock_irqsave(pll->lock, flags);
+ /* If PLLE HW sequencer is enabled, SW should not disable PLLE */
+ val = pll_readl(pll->params->aux_reg, pll);
+ if (val & PLLE_AUX_SEQ_ENABLE)
+ goto out;
+
val = pll_readl_base(pll);
val &= ~PLLE_BASE_ENABLE;
pll_writel_base(val, pll);
+ val = pll_readl(pll->params->aux_reg, pll);
+ val |= PLLE_AUX_ENABLE_SWCTL | PLLE_AUX_SS_SWCTL;
+ pll_writel(val, pll->params->aux_reg, pll);
+
val = pll_readl_misc(pll);
val |= PLLE_MISC_IDDQ_SW_CTRL | PLLE_MISC_IDDQ_SW_VALUE;
pll_writel_misc(val, pll);
udelay(1);
+out:
if (pll->lock)
spin_unlock_irqrestore(pll->lock, flags);
}
diff --git a/drivers/clk/tegra/clk-tegra-periph.c b/drivers/clk/tegra/clk-tegra-periph.c
index 6ad381a..ea2b9cbf 100644
--- a/drivers/clk/tegra/clk-tegra-periph.c
+++ b/drivers/clk/tegra/clk-tegra-periph.c
@@ -773,7 +773,7 @@
XUSB("xusb_dev_src", mux_clkm_pllp_pllc_pllre, CLK_SOURCE_XUSB_DEV_SRC, 95, TEGRA_PERIPH_ON_APB | TEGRA_PERIPH_NO_RESET, tegra_clk_xusb_dev_src),
XUSB("xusb_dev_src", mux_clkm_pllp_pllre, CLK_SOURCE_XUSB_DEV_SRC, 95, TEGRA_PERIPH_ON_APB | TEGRA_PERIPH_NO_RESET, tegra_clk_xusb_dev_src_8),
MUX8("dbgapb", mux_pllp_clkm_2, CLK_SOURCE_DBGAPB, 185, TEGRA_PERIPH_NO_RESET, tegra_clk_dbgapb),
- MUX8("msenc", mux_pllc2_c_c3_pllp_plla1_clkm, CLK_SOURCE_NVENC, 219, 0, tegra_clk_nvenc),
+ MUX8("nvenc", mux_pllc2_c_c3_pllp_plla1_clkm, CLK_SOURCE_NVENC, 219, 0, tegra_clk_nvenc),
MUX8("nvdec", mux_pllc2_c_c3_pllp_plla1_clkm, CLK_SOURCE_NVDEC, 194, 0, tegra_clk_nvdec),
MUX8("nvjpg", mux_pllc2_c_c3_pllp_plla1_clkm, CLK_SOURCE_NVJPG, 195, 0, tegra_clk_nvjpg),
MUX8("ape", mux_plla_pllc4_out0_pllc_pllc4_out1_pllp_pllc4_out2_clkm, CLK_SOURCE_APE, 198, TEGRA_PERIPH_ON_APB, tegra_clk_ape),
@@ -782,7 +782,7 @@
NODIV("sor1", mux_clkm_sor1_brick_sor1_src, CLK_SOURCE_SOR1, 15, MASK(1), 183, 0, tegra_clk_sor1, &sor1_lock),
MUX8("sdmmc_legacy", mux_pllp_out3_clkm_pllp_pllc4, CLK_SOURCE_SDMMC_LEGACY, 193, TEGRA_PERIPH_ON_APB | TEGRA_PERIPH_NO_RESET, tegra_clk_sdmmc_legacy),
MUX8("qspi", mux_pllp_pllc_pllc_out1_pllc4_out2_pllc4_out1_clkm_pllc4_out0, CLK_SOURCE_QSPI, 211, TEGRA_PERIPH_ON_APB, tegra_clk_qspi),
- MUX("vii2c", mux_pllp_pllc_clkm, CLK_SOURCE_VI_I2C, 208, TEGRA_PERIPH_ON_APB, tegra_clk_vi_i2c),
+ I2C("vii2c", mux_pllp_pllc_clkm, CLK_SOURCE_VI_I2C, 208, tegra_clk_vi_i2c),
MUX("mipibif", mux_pllp_clkm, CLK_SOURCE_MIPIBIF, 173, TEGRA_PERIPH_ON_APB, tegra_clk_mipibif),
MUX("uartape", mux_pllp_pllc_clkm, CLK_SOURCE_UARTAPE, 212, TEGRA_PERIPH_ON_APB | TEGRA_PERIPH_NO_RESET, tegra_clk_uartape),
MUX8("tsecb", mux_pllp_pllc2_c_c3_clkm, CLK_SOURCE_TSECB, 206, 0, tegra_clk_tsecb),
@@ -829,6 +829,7 @@
GATE("xusb_gate", "osc", 143, 0, tegra_clk_xusb_gate, 0),
GATE("pll_p_out_cpu", "pll_p", 223, 0, tegra_clk_pll_p_out_cpu, 0),
GATE("pll_p_out_adsp", "pll_p", 187, 0, tegra_clk_pll_p_out_adsp, 0),
+ GATE("apb2ape", "clk_m", 107, 0, tegra_clk_apb2ape, 0),
};
static struct tegra_periph_init_data div_clks[] = {
diff --git a/drivers/clk/tegra/clk-tegra-super-gen4.c b/drivers/clk/tegra/clk-tegra-super-gen4.c
index 4559a20..474de0f 100644
--- a/drivers/clk/tegra/clk-tegra-super-gen4.c
+++ b/drivers/clk/tegra/clk-tegra-super-gen4.c
@@ -67,7 +67,7 @@
"pll_p", "pll_p_out4", "unused",
"unused", "pll_x", "pll_x_out0" };
-const struct tegra_super_gen_info tegra_super_gen_info_gen4 = {
+static const struct tegra_super_gen_info tegra_super_gen_info_gen4 = {
.gen = gen4,
.sclk_parents = sclk_parents,
.cclk_g_parents = cclk_g_parents,
@@ -93,7 +93,7 @@
"unused", "unused", "unused", "unused",
"dfllCPU_out" };
-const struct tegra_super_gen_info tegra_super_gen_info_gen5 = {
+static const struct tegra_super_gen_info tegra_super_gen_info_gen5 = {
.gen = gen5,
.sclk_parents = sclk_parents_gen5,
.cclk_g_parents = cclk_g_parents_gen5,
@@ -171,7 +171,7 @@
*dt_clk = clk;
}
-void __init tegra_super_clk_init(void __iomem *clk_base,
+static void __init tegra_super_clk_init(void __iomem *clk_base,
void __iomem *pmc_base,
struct tegra_clk *tegra_clks,
struct tegra_clk_pll_params *params,
diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
index 58514c4..637041f 100644
--- a/drivers/clk/tegra/clk-tegra210.c
+++ b/drivers/clk/tegra/clk-tegra210.c
@@ -59,8 +59,8 @@
#define PLLC3_MISC3 0x50c
#define PLLM_BASE 0x90
-#define PLLM_MISC0 0x9c
#define PLLM_MISC1 0x98
+#define PLLM_MISC2 0x9c
#define PLLP_BASE 0xa0
#define PLLP_MISC0 0xac
#define PLLP_MISC1 0x680
@@ -99,7 +99,7 @@
#define PLLC4_MISC0 0x5a8
#define PLLC4_OUT 0x5e4
#define PLLMB_BASE 0x5e8
-#define PLLMB_MISC0 0x5ec
+#define PLLMB_MISC1 0x5ec
#define PLLA1_BASE 0x6a4
#define PLLA1_MISC0 0x6a8
#define PLLA1_MISC1 0x6ac
@@ -243,7 +243,8 @@
};
static const char *mux_pllmcp_clkm[] = {
- "pll_m", "pll_c", "pll_p", "clk_m", "pll_m_ud", "pll_c2", "pll_c3",
+ "pll_m", "pll_c", "pll_p", "clk_m", "pll_m_ud", "pll_mb", "pll_mb",
+ "pll_p",
};
#define mux_pllmcp_clkm_idx NULL
@@ -367,12 +368,12 @@
/* PLLMB */
#define PLLMB_BASE_LOCK (1 << 27)
-#define PLLMB_MISC0_LOCK_OVERRIDE (1 << 18)
-#define PLLMB_MISC0_IDDQ (1 << 17)
-#define PLLMB_MISC0_LOCK_ENABLE (1 << 16)
+#define PLLMB_MISC1_LOCK_OVERRIDE (1 << 18)
+#define PLLMB_MISC1_IDDQ (1 << 17)
+#define PLLMB_MISC1_LOCK_ENABLE (1 << 16)
-#define PLLMB_MISC0_DEFAULT_VALUE 0x00030000
-#define PLLMB_MISC0_WRITE_MASK 0x0007ffff
+#define PLLMB_MISC1_DEFAULT_VALUE 0x00030000
+#define PLLMB_MISC1_WRITE_MASK 0x0007ffff
/* PLLP */
#define PLLP_BASE_OVERRIDE (1 << 28)
@@ -457,7 +458,8 @@
PLLCX_MISC3_WRITE_MASK);
}
-void tegra210_pllcx_set_defaults(const char *name, struct tegra_clk_pll *pllcx)
+static void tegra210_pllcx_set_defaults(const char *name,
+ struct tegra_clk_pll *pllcx)
{
pllcx->params->defaults_set = true;
@@ -482,22 +484,22 @@
udelay(1);
}
-void _pllc_set_defaults(struct tegra_clk_pll *pllcx)
+static void _pllc_set_defaults(struct tegra_clk_pll *pllcx)
{
tegra210_pllcx_set_defaults("PLL_C", pllcx);
}
-void _pllc2_set_defaults(struct tegra_clk_pll *pllcx)
+static void _pllc2_set_defaults(struct tegra_clk_pll *pllcx)
{
tegra210_pllcx_set_defaults("PLL_C2", pllcx);
}
-void _pllc3_set_defaults(struct tegra_clk_pll *pllcx)
+static void _pllc3_set_defaults(struct tegra_clk_pll *pllcx)
{
tegra210_pllcx_set_defaults("PLL_C3", pllcx);
}
-void _plla1_set_defaults(struct tegra_clk_pll *pllcx)
+static void _plla1_set_defaults(struct tegra_clk_pll *pllcx)
{
tegra210_pllcx_set_defaults("PLL_A1", pllcx);
}
@@ -507,7 +509,7 @@
* PLL with dynamic ramp and fractional SDM. Dynamic ramp is not used.
* Fractional SDM is allowed to provide exact audio rates.
*/
-void tegra210_plla_set_defaults(struct tegra_clk_pll *plla)
+static void tegra210_plla_set_defaults(struct tegra_clk_pll *plla)
{
u32 mask;
u32 val = readl_relaxed(clk_base + plla->params->base_reg);
@@ -559,7 +561,7 @@
* PLLD
* PLL with fractional SDM.
*/
-void tegra210_plld_set_defaults(struct tegra_clk_pll *plld)
+static void tegra210_plld_set_defaults(struct tegra_clk_pll *plld)
{
u32 val;
u32 mask = 0xffff;
@@ -698,7 +700,7 @@
udelay(1);
}
-void tegra210_plld2_set_defaults(struct tegra_clk_pll *plld2)
+static void tegra210_plld2_set_defaults(struct tegra_clk_pll *plld2)
{
plldss_defaults("PLL_D2", plld2, PLLD2_MISC0_DEFAULT_VALUE,
PLLD2_MISC1_CFG_DEFAULT_VALUE,
@@ -706,7 +708,7 @@
PLLD2_MISC3_CTRL2_DEFAULT_VALUE);
}
-void tegra210_plldp_set_defaults(struct tegra_clk_pll *plldp)
+static void tegra210_plldp_set_defaults(struct tegra_clk_pll *plldp)
{
plldss_defaults("PLL_DP", plldp, PLLDP_MISC0_DEFAULT_VALUE,
PLLDP_MISC1_CFG_DEFAULT_VALUE,
@@ -719,7 +721,7 @@
* Base and misc0 layout is the same as PLLD2/PLLDP, but no SDM/SSC support.
* VCO is exposed to the clock tree via fixed 1/3 and 1/5 dividers.
*/
-void tegra210_pllc4_set_defaults(struct tegra_clk_pll *pllc4)
+static void tegra210_pllc4_set_defaults(struct tegra_clk_pll *pllc4)
{
plldss_defaults("PLL_C4", pllc4, PLLC4_MISC0_DEFAULT_VALUE, 0, 0, 0);
}
@@ -728,7 +730,7 @@
* PLLRE
* VCO is exposed to the clock tree directly along with post-divider output
*/
-void tegra210_pllre_set_defaults(struct tegra_clk_pll *pllre)
+static void tegra210_pllre_set_defaults(struct tegra_clk_pll *pllre)
{
u32 mask;
u32 val = readl_relaxed(clk_base + pllre->params->base_reg);
@@ -780,13 +782,13 @@
{
unsigned long input_rate;
- if (!IS_ERR_OR_NULL(hw->clk)) {
+ /* cf rate */
+ if (!IS_ERR_OR_NULL(hw->clk))
input_rate = clk_hw_get_rate(clk_hw_get_parent(hw));
- /* cf rate */
- input_rate /= tegra_pll_get_fixed_mdiv(hw, input_rate);
- } else {
+ else
input_rate = 38400000;
- }
+
+ input_rate /= tegra_pll_get_fixed_mdiv(hw, input_rate);
switch (input_rate) {
case 12000000:
@@ -841,7 +843,7 @@
PLLX_MISC5_WRITE_MASK);
}
-void tegra210_pllx_set_defaults(struct tegra_clk_pll *pllx)
+static void tegra210_pllx_set_defaults(struct tegra_clk_pll *pllx)
{
u32 val;
u32 step_a, step_b;
@@ -901,7 +903,7 @@
}
/* PLLMB */
-void tegra210_pllmb_set_defaults(struct tegra_clk_pll *pllmb)
+static void tegra210_pllmb_set_defaults(struct tegra_clk_pll *pllmb)
{
u32 mask, val = readl_relaxed(clk_base + pllmb->params->base_reg);
@@ -914,15 +916,15 @@
* PLL is ON: check if defaults already set, then set those
* that can be updated in flight.
*/
- val = PLLMB_MISC0_DEFAULT_VALUE & (~PLLMB_MISC0_IDDQ);
- mask = PLLMB_MISC0_LOCK_ENABLE | PLLMB_MISC0_LOCK_OVERRIDE;
+ val = PLLMB_MISC1_DEFAULT_VALUE & (~PLLMB_MISC1_IDDQ);
+ mask = PLLMB_MISC1_LOCK_ENABLE | PLLMB_MISC1_LOCK_OVERRIDE;
_pll_misc_chk_default(clk_base, pllmb->params, 0, val,
- ~mask & PLLMB_MISC0_WRITE_MASK);
+ ~mask & PLLMB_MISC1_WRITE_MASK);
/* Enable lock detect */
val = readl_relaxed(clk_base + pllmb->params->ext_misc_reg[0]);
val &= ~mask;
- val |= PLLMB_MISC0_DEFAULT_VALUE & mask;
+ val |= PLLMB_MISC1_DEFAULT_VALUE & mask;
writel_relaxed(val, clk_base + pllmb->params->ext_misc_reg[0]);
udelay(1);
@@ -930,7 +932,7 @@
}
/* set IDDQ, enable lock detect */
- writel_relaxed(PLLMB_MISC0_DEFAULT_VALUE,
+ writel_relaxed(PLLMB_MISC1_DEFAULT_VALUE,
clk_base + pllmb->params->ext_misc_reg[0]);
udelay(1);
}
@@ -960,7 +962,7 @@
~mask & PLLP_MISC1_WRITE_MASK);
}
-void tegra210_pllp_set_defaults(struct tegra_clk_pll *pllp)
+static void tegra210_pllp_set_defaults(struct tegra_clk_pll *pllp)
{
u32 mask;
u32 val = readl_relaxed(clk_base + pllp->params->base_reg);
@@ -1022,7 +1024,7 @@
~mask & PLLU_MISC1_WRITE_MASK);
}
-void tegra210_pllu_set_defaults(struct tegra_clk_pll *pllu)
+static void tegra210_pllu_set_defaults(struct tegra_clk_pll *pllu)
{
u32 val = readl_relaxed(clk_base + pllu->params->base_reg);
@@ -1212,8 +1214,9 @@
cfg->m *= PLL_SDM_COEFF;
}
-unsigned long tegra210_clk_adjust_vco_min(struct tegra_clk_pll_params *params,
- unsigned long parent_rate)
+static unsigned long
+tegra210_clk_adjust_vco_min(struct tegra_clk_pll_params *params,
+ unsigned long parent_rate)
{
unsigned long vco_min = params->vco_min;
@@ -1386,7 +1389,7 @@
.mdiv_default = 3,
.div_nmp = &pllc_nmp,
.freq_table = pll_cx_freq_table,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK,
.set_defaults = _pllc_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -1425,7 +1428,7 @@
.ext_misc_reg[2] = PLLC2_MISC2,
.ext_misc_reg[3] = PLLC2_MISC3,
.freq_table = pll_cx_freq_table,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK,
.set_defaults = _pllc2_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -1455,7 +1458,7 @@
.ext_misc_reg[2] = PLLC3_MISC2,
.ext_misc_reg[3] = PLLC3_MISC3,
.freq_table = pll_cx_freq_table,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK,
.set_defaults = _pllc3_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -1505,7 +1508,6 @@
.base_reg = PLLC4_BASE,
.misc_reg = PLLC4_MISC0,
.lock_mask = PLL_BASE_LOCK,
- .lock_enable_bit_idx = PLLSS_MISC_LOCK_ENABLE,
.lock_delay = 300,
.max_p = PLL_QLIN_PDIV_MAX,
.ext_misc_reg[0] = PLLC4_MISC0,
@@ -1517,8 +1519,7 @@
.div_nmp = &pllss_nmp,
.freq_table = pll_c4_vco_freq_table,
.set_defaults = tegra210_pllc4_set_defaults,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE |
- TEGRA_PLL_VCO_OUT,
+ .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_VCO_OUT,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -1559,15 +1560,15 @@
.vco_min = 800000000,
.vco_max = 1866000000,
.base_reg = PLLM_BASE,
- .misc_reg = PLLM_MISC1,
+ .misc_reg = PLLM_MISC2,
.lock_mask = PLL_BASE_LOCK,
.lock_enable_bit_idx = PLLM_MISC_LOCK_ENABLE,
.lock_delay = 300,
- .iddq_reg = PLLM_MISC0,
+ .iddq_reg = PLLM_MISC2,
.iddq_bit_idx = PLLM_IDDQ_BIT,
.max_p = PLL_QLIN_PDIV_MAX,
- .ext_misc_reg[0] = PLLM_MISC0,
- .ext_misc_reg[0] = PLLM_MISC1,
+ .ext_misc_reg[0] = PLLM_MISC2,
+ .ext_misc_reg[1] = PLLM_MISC1,
.round_p_to_pdiv = pll_qlin_p_to_pdiv,
.pdiv_tohw = pll_qlin_pdiv_to_hw,
.div_nmp = &pllm_nmp,
@@ -1586,19 +1587,18 @@
.vco_min = 800000000,
.vco_max = 1866000000,
.base_reg = PLLMB_BASE,
- .misc_reg = PLLMB_MISC0,
+ .misc_reg = PLLMB_MISC1,
.lock_mask = PLL_BASE_LOCK,
- .lock_enable_bit_idx = PLLMB_MISC_LOCK_ENABLE,
.lock_delay = 300,
- .iddq_reg = PLLMB_MISC0,
+ .iddq_reg = PLLMB_MISC1,
.iddq_bit_idx = PLLMB_IDDQ_BIT,
.max_p = PLL_QLIN_PDIV_MAX,
- .ext_misc_reg[0] = PLLMB_MISC0,
+ .ext_misc_reg[0] = PLLMB_MISC1,
.round_p_to_pdiv = pll_qlin_p_to_pdiv,
.pdiv_tohw = pll_qlin_pdiv_to_hw,
.div_nmp = &pllm_nmp,
.freq_table = pll_m_freq_table,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK,
.set_defaults = tegra210_pllmb_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -1671,7 +1671,6 @@
.base_reg = PLLRE_BASE,
.misc_reg = PLLRE_MISC0,
.lock_mask = PLLRE_MISC_LOCK,
- .lock_enable_bit_idx = PLLRE_MISC_LOCK_ENABLE,
.lock_delay = 300,
.max_p = PLL_QLIN_PDIV_MAX,
.ext_misc_reg[0] = PLLRE_MISC0,
@@ -1681,8 +1680,7 @@
.pdiv_tohw = pll_qlin_pdiv_to_hw,
.div_nmp = &pllre_nmp,
.freq_table = pll_re_vco_freq_table,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_LOCK_MISC |
- TEGRA_PLL_HAS_LOCK_ENABLE | TEGRA_PLL_VCO_OUT,
+ .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_LOCK_MISC | TEGRA_PLL_VCO_OUT,
.set_defaults = tegra210_pllre_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -1712,7 +1710,6 @@
.base_reg = PLLP_BASE,
.misc_reg = PLLP_MISC0,
.lock_mask = PLL_BASE_LOCK,
- .lock_enable_bit_idx = PLLP_MISC_LOCK_ENABLE,
.lock_delay = 300,
.iddq_reg = PLLP_MISC0,
.iddq_bit_idx = PLLXP_IDDQ_BIT,
@@ -1721,8 +1718,7 @@
.div_nmp = &pllp_nmp,
.freq_table = pll_p_freq_table,
.fixed_rate = 408000000,
- .flags = TEGRA_PLL_FIXED | TEGRA_PLL_USE_LOCK |
- TEGRA_PLL_HAS_LOCK_ENABLE | TEGRA_PLL_VCO_OUT,
+ .flags = TEGRA_PLL_FIXED | TEGRA_PLL_USE_LOCK | TEGRA_PLL_VCO_OUT,
.set_defaults = tegra210_pllp_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -1750,7 +1746,7 @@
.ext_misc_reg[2] = PLLA1_MISC2,
.ext_misc_reg[3] = PLLA1_MISC3,
.freq_table = pll_cx_freq_table,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK,
.set_defaults = _plla1_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -1787,7 +1783,6 @@
.base_reg = PLLA_BASE,
.misc_reg = PLLA_MISC0,
.lock_mask = PLL_BASE_LOCK,
- .lock_enable_bit_idx = PLLA_MISC_LOCK_ENABLE,
.lock_delay = 300,
.round_p_to_pdiv = pll_qlin_p_to_pdiv,
.pdiv_tohw = pll_qlin_pdiv_to_hw,
@@ -1802,8 +1797,7 @@
.ext_misc_reg[1] = PLLA_MISC1,
.ext_misc_reg[2] = PLLA_MISC2,
.freq_table = pll_a_freq_table,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_MDIV_NEW |
- TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK | TEGRA_MDIV_NEW,
.set_defaults = tegra210_plla_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
.set_gain = tegra210_clk_pll_set_gain,
@@ -1836,7 +1830,6 @@
.base_reg = PLLD_BASE,
.misc_reg = PLLD_MISC0,
.lock_mask = PLL_BASE_LOCK,
- .lock_enable_bit_idx = PLLD_MISC_LOCK_ENABLE,
.lock_delay = 1000,
.iddq_reg = PLLD_MISC0,
.iddq_bit_idx = PLLD_IDDQ_BIT,
@@ -1850,7 +1843,7 @@
.ext_misc_reg[0] = PLLD_MISC0,
.ext_misc_reg[1] = PLLD_MISC1,
.freq_table = pll_d_freq_table,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK,
.mdiv_default = 1,
.set_defaults = tegra210_plld_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
@@ -1876,7 +1869,6 @@
.base_reg = PLLD2_BASE,
.misc_reg = PLLD2_MISC0,
.lock_mask = PLL_BASE_LOCK,
- .lock_enable_bit_idx = PLLSS_MISC_LOCK_ENABLE,
.lock_delay = 300,
.iddq_reg = PLLD2_BASE,
.iddq_bit_idx = PLLSS_IDDQ_BIT,
@@ -1897,7 +1889,7 @@
.mdiv_default = 1,
.freq_table = tegra210_pll_d2_freq_table,
.set_defaults = tegra210_plld2_set_defaults,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
.set_gain = tegra210_clk_pll_set_gain,
.adjust_vco = tegra210_clk_adjust_vco_min,
@@ -1920,7 +1912,6 @@
.base_reg = PLLDP_BASE,
.misc_reg = PLLDP_MISC,
.lock_mask = PLL_BASE_LOCK,
- .lock_enable_bit_idx = PLLSS_MISC_LOCK_ENABLE,
.lock_delay = 300,
.iddq_reg = PLLDP_BASE,
.iddq_bit_idx = PLLSS_IDDQ_BIT,
@@ -1941,7 +1932,7 @@
.mdiv_default = 1,
.freq_table = pll_dp_freq_table,
.set_defaults = tegra210_plldp_set_defaults,
- .flags = TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE,
+ .flags = TEGRA_PLL_USE_LOCK,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
.set_gain = tegra210_clk_pll_set_gain,
.adjust_vco = tegra210_clk_adjust_vco_min,
@@ -1973,7 +1964,6 @@
.base_reg = PLLU_BASE,
.misc_reg = PLLU_MISC0,
.lock_mask = PLL_BASE_LOCK,
- .lock_enable_bit_idx = PLLU_MISC_LOCK_ENABLE,
.lock_delay = 1000,
.iddq_reg = PLLU_MISC0,
.iddq_bit_idx = PLLU_IDDQ_BIT,
@@ -1983,8 +1973,7 @@
.pdiv_tohw = pll_qlin_pdiv_to_hw,
.div_nmp = &pllu_nmp,
.freq_table = pll_u_freq_table,
- .flags = TEGRA_PLLU | TEGRA_PLL_USE_LOCK | TEGRA_PLL_HAS_LOCK_ENABLE |
- TEGRA_PLL_VCO_OUT,
+ .flags = TEGRA_PLLU | TEGRA_PLL_USE_LOCK | TEGRA_PLL_VCO_OUT,
.set_defaults = tegra210_pllu_set_defaults,
.calc_rate = tegra210_pll_fixed_mdiv_cfg,
};
@@ -2218,6 +2207,7 @@
[tegra_clk_pll_c4_out1] = { .dt_id = TEGRA210_CLK_PLL_C4_OUT1, .present = true },
[tegra_clk_pll_c4_out2] = { .dt_id = TEGRA210_CLK_PLL_C4_OUT2, .present = true },
[tegra_clk_pll_c4_out3] = { .dt_id = TEGRA210_CLK_PLL_C4_OUT3, .present = true },
+ [tegra_clk_apb2ape] = { .dt_id = TEGRA210_CLK_APB2APE, .present = true },
};
static struct tegra_devclk devclks[] __initdata = {
@@ -2519,7 +2509,7 @@
/* PLLU_VCO */
val = readl(clk_base + pll_u_vco_params.base_reg);
- val &= ~BIT(24); /* disable PLLU_OVERRIDE */
+ val &= ~PLLU_BASE_OVERRIDE; /* disable PLLU_OVERRIDE */
writel(val, clk_base + pll_u_vco_params.base_reg);
clk = tegra_clk_register_pllre("pll_u_vco", "pll_ref", clk_base, pmc,
@@ -2738,8 +2728,6 @@
{ TEGRA210_CLK_DFLL_REF, TEGRA210_CLK_PLL_P, 51000000, 1 },
{ TEGRA210_CLK_SBC4, TEGRA210_CLK_PLL_P, 12000000, 1 },
{ TEGRA210_CLK_PLL_RE_VCO, TEGRA210_CLK_CLK_MAX, 672000000, 1 },
- { TEGRA210_CLK_PLL_U_OUT1, TEGRA210_CLK_CLK_MAX, 48000000, 1 },
- { TEGRA210_CLK_PLL_U_OUT2, TEGRA210_CLK_CLK_MAX, 60000000, 1 },
{ TEGRA210_CLK_XUSB_GATE, TEGRA210_CLK_CLK_MAX, 0, 1 },
{ TEGRA210_CLK_XUSB_SS_SRC, TEGRA210_CLK_PLL_U_480M, 120000000, 0 },
{ TEGRA210_CLK_XUSB_FS_SRC, TEGRA210_CLK_PLL_U_48M, 48000000, 0 },
diff --git a/drivers/clk/ti/dpll3xxx.c b/drivers/clk/ti/dpll3xxx.c
index 1c30038..cc73929 100644
--- a/drivers/clk/ti/dpll3xxx.c
+++ b/drivers/clk/ti/dpll3xxx.c
@@ -460,7 +460,8 @@
parent = clk_hw_get_parent(hw);
- if (clk_hw_get_rate(hw) == clk_get_rate(dd->clk_bypass)) {
+ if (clk_hw_get_rate(hw) ==
+ clk_hw_get_rate(__clk_get_hw(dd->clk_bypass))) {
WARN_ON(parent != __clk_get_hw(dd->clk_bypass));
r = _omap3_noncore_dpll_bypass(clk);
} else {
diff --git a/drivers/clk/versatile/clk-icst.c b/drivers/clk/versatile/clk-icst.c
index e62f8cb..3bca438 100644
--- a/drivers/clk/versatile/clk-icst.c
+++ b/drivers/clk/versatile/clk-icst.c
@@ -78,6 +78,9 @@
ret = regmap_read(icst->map, icst->vcoreg_off, &val);
if (ret)
return ret;
+
+ /* Mask the 18 bits used by the VCO */
+ val &= ~0x7ffff;
val |= vco.v | (vco.r << 9) | (vco.s << 16);
/* This magic unlocks the VCO so it can be controlled */
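[Note on the ICST fix above: the bug was a read-modify-write with the "modify"
step missing -- the code ORed the new v/r/s fields into whatever the register
already held, so bits from the previous VCO setting could leak into the new
one. The general pattern, sketched with placeholder FIELD_MASK/new_bits:

	ret = regmap_read(map, reg, &val);
	if (ret)
		return ret;
	val &= ~FIELD_MASK;		/* clear the whole field first */
	val |= new_bits & FIELD_MASK;	/* then install the new value */
	ret = regmap_write(map, reg, val);
]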
diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index 20de861..8bf9914 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -782,7 +782,7 @@
dd->flags &= ~(SHA_FLAGS_BUSY | SHA_FLAGS_FINAL | SHA_FLAGS_CPU |
SHA_FLAGS_DMA_READY | SHA_FLAGS_OUTPUT_READY);
- clk_disable_unprepare(dd->iclk);
+ clk_disable(dd->iclk);
if (req->base.complete)
req->base.complete(&req->base, err);
@@ -795,7 +795,7 @@
{
int err;
- err = clk_prepare_enable(dd->iclk);
+ err = clk_enable(dd->iclk);
if (err)
return err;
@@ -822,7 +822,7 @@
dev_info(dd->dev,
"version: 0x%x\n", dd->hw_version);
- clk_disable_unprepare(dd->iclk);
+ clk_disable(dd->iclk);
}
static int atmel_sha_handle_queue(struct atmel_sha_dev *dd,
@@ -1410,6 +1410,10 @@
goto res_err;
}
+ err = clk_prepare(sha_dd->iclk);
+ if (err)
+ goto res_err;
+
atmel_sha_hw_version_init(sha_dd);
atmel_sha_get_cap(sha_dd);
@@ -1421,12 +1425,12 @@
if (IS_ERR(pdata)) {
dev_err(&pdev->dev, "platform data not available\n");
err = PTR_ERR(pdata);
- goto res_err;
+ goto iclk_unprepare;
}
}
if (!pdata->dma_slave) {
err = -ENXIO;
- goto res_err;
+ goto iclk_unprepare;
}
err = atmel_sha_dma_init(sha_dd, pdata);
if (err)
@@ -1457,6 +1461,8 @@
if (sha_dd->caps.has_dma)
atmel_sha_dma_cleanup(sha_dd);
err_sha_dma:
+iclk_unprepare:
+ clk_unprepare(sha_dd->iclk);
res_err:
tasklet_kill(&sha_dd->done_task);
sha_dd_err:
@@ -1483,12 +1489,7 @@
if (sha_dd->caps.has_dma)
atmel_sha_dma_cleanup(sha_dd);
- iounmap(sha_dd->io_base);
-
- clk_put(sha_dd->iclk);
-
- if (sha_dd->irq >= 0)
- free_irq(sha_dd->irq, sha_dd);
+ clk_unprepare(sha_dd->iclk);
return 0;
}
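[Note on the atmel-sha change above: it splits clock handling along the
sleepable/atomic boundary. clk_prepare(), which may sleep, happens once at
probe and clk_unprepare() at remove, while the request path keeps only
clk_enable()/clk_disable(), which are safe in atomic context. The shape of
the split, using the driver's own dd->iclk:

	/* probe: the sleepable half, done once */
	err = clk_prepare(dd->iclk);
	if (err)
		return err;

	/* per-request hot path: atomic-safe */
	err = clk_enable(dd->iclk);
	if (err)
		return err;
	/* ... run the hash ... */
	clk_disable(dd->iclk);

	/* remove: undo the probe-time half */
	clk_unprepare(dd->iclk);
]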
diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index 0643e33..c0656e7 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -306,7 +306,7 @@
return -ENOMEM;
dma->padding_pool = dmam_pool_create("cesa_padding", dev, 72, 1, 0);
- if (!dma->cache_pool)
+ if (!dma->padding_pool)
return -ENOMEM;
cesa->dma = dma;
diff --git a/drivers/devfreq/tegra-devfreq.c b/drivers/devfreq/tegra-devfreq.c
index 848b93ee..fe9dce0 100644
--- a/drivers/devfreq/tegra-devfreq.c
+++ b/drivers/devfreq/tegra-devfreq.c
@@ -500,6 +500,8 @@
clk_set_min_rate(tegra->emc_clock, rate);
clk_set_rate(tegra->emc_clock, 0);
+ *freq = rate;
+
return 0;
}
diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index e893318..5ad0ec1 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -156,7 +156,6 @@
/* Enable interrupts */
channel_set_bit(dw, MASK.XFER, dwc->mask);
- channel_set_bit(dw, MASK.BLOCK, dwc->mask);
channel_set_bit(dw, MASK.ERROR, dwc->mask);
dwc->initialized = true;
@@ -588,6 +587,9 @@
spin_unlock_irqrestore(&dwc->lock, flags);
}
+
+ /* Re-enable interrupts */
+ channel_set_bit(dw, MASK.BLOCK, dwc->mask);
}
/* ------------------------------------------------------------------------- */
@@ -618,11 +620,8 @@
dwc_scan_descriptors(dw, dwc);
}
- /*
- * Re-enable interrupts.
- */
+ /* Re-enable interrupts */
channel_set_bit(dw, MASK.XFER, dw->all_chan_mask);
- channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask);
channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask);
}
@@ -1261,6 +1260,7 @@
int dw_dma_cyclic_start(struct dma_chan *chan)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
+ struct dw_dma *dw = to_dw_dma(chan->device);
unsigned long flags;
if (!test_bit(DW_DMA_IS_CYCLIC, &dwc->flags)) {
@@ -1269,7 +1269,12 @@
}
spin_lock_irqsave(&dwc->lock, flags);
+
+ /* Enable interrupts to perform cyclic transfer */
+ channel_set_bit(dw, MASK.BLOCK, dwc->mask);
+
dwc_dostart(dwc, dwc->cdesc->desc[0]);
+
spin_unlock_irqrestore(&dwc->lock, flags);
return 0;
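[Note on the dw_dma hunks above: per-block interrupts are no longer unmasked
unconditionally. MASK.BLOCK is raised only when a cyclic transfer actually
needs it -- in dw_dma_cyclic_start(), before the first descriptor is kicked --
and re-armed after each serviced block, so ordinary transfers stop taking an
interrupt per block. Typical cyclic usage against this API, sketched (the
buffer and period values are illustrative):

	struct dw_cyclic_desc *cdesc;

	cdesc = dw_dma_cyclic_prep(chan, buf_dma, buf_len,
				   period_len, DMA_MEM_TO_DEV);
	if (IS_ERR(cdesc))
		return PTR_ERR(cdesc);

	/* now also unmasks MASK.BLOCK before starting */
	return dw_dma_cyclic_start(chan);
]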
diff --git a/drivers/dma/dw/pci.c b/drivers/dma/dw/pci.c
index 4c30fdd..358f968 100644
--- a/drivers/dma/dw/pci.c
+++ b/drivers/dma/dw/pci.c
@@ -108,6 +108,10 @@
/* Haswell */
{ PCI_VDEVICE(INTEL, 0x9c60) },
+
+ /* Broadwell */
+ { PCI_VDEVICE(INTEL, 0x9ce0) },
+
{ }
};
MODULE_DEVICE_TABLE(pci, dw_pci_id_table);
diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index d92d655..e3d7fcb 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -113,6 +113,9 @@
#define GET_NUM_REGN(x) ((x & 0x300000) >> 20) /* bits 20-21 */
#define CHMAP_EXIST BIT(24)
+/* CCSTAT register */
+#define EDMA_CCSTAT_ACTV BIT(4)
+
/*
* Max of 20 segments per channel to conserve PaRAM slots
* Also note that MAX_NR_SG should be at least the no. of periods
@@ -1680,9 +1683,20 @@
spin_unlock_irqrestore(&echan->vchan.lock, flags);
}
+/*
+ * This limit exists to avoid a possible infinite loop when waiting for proof
+ * that a particular transfer is completed. This limit can be hit if there
+ * are large bursts to/from slow devices or the CPU is never able to catch
+ * the DMA hardware idle. On an AM335x transferring 48 bytes from the UART
+ * RX-FIFO, as many as 55 loops have been seen.
+ */
+#define EDMA_MAX_TR_WAIT_LOOPS 1000
+
static u32 edma_residue(struct edma_desc *edesc)
{
bool dst = edesc->direction == DMA_DEV_TO_MEM;
+ int loop_count = EDMA_MAX_TR_WAIT_LOOPS;
+ struct edma_chan *echan = edesc->echan;
struct edma_pset *pset = edesc->pset;
dma_addr_t done, pos;
int i;
@@ -1691,7 +1705,32 @@
* We always read the dst/src position from the first PaRAM
* pset. That's the one which is active now.
*/
- pos = edma_get_position(edesc->echan->ecc, edesc->echan->slot[0], dst);
+ pos = edma_get_position(echan->ecc, echan->slot[0], dst);
+
+ /*
+ * "pos" may represent a transfer request that is still being
+ * processed by the EDMACC or EDMATC. We busy-wait until one of
+ * the following occurs:
+ * 1. the DMA hardware is idle
+ * 2. a new transfer request is set up
+ * 3. we hit the loop limit
+ */
+ while (edma_read(echan->ecc, EDMA_CCSTAT) & EDMA_CCSTAT_ACTV) {
+ /* check if a new transfer request is set up */
+ if (edma_get_position(echan->ecc,
+ echan->slot[0], dst) != pos) {
+ break;
+ }
+
+ if (!--loop_count) {
+ dev_dbg_ratelimited(echan->vchan.chan.device->dev,
+ "%s: timeout waiting for PaRAM update\n",
+ __func__);
+ break;
+ }
+
+ cpu_relax();
+ }
/*
* Cyclic is simple. Just subtract pset[0].addr from pos.
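
The residue change above is a bounded busy-wait: spin while the controller
reports activity, exit early if the position moves (a new transfer request
was set up), and give up after a fixed loop count so a stuck status bit can
never wedge the caller. A minimal userspace sketch of the same shape, with
stand-in hardware accessors rather than the EDMA register reads:

#include <stdio.h>
#include <stdbool.h>

#define MAX_WAIT_LOOPS 1000

static int fake_loops;
static bool hw_active(void) { return ++fake_loops < 55; }  /* stand-in */
static unsigned long hw_position(void) { return 0x1000; }  /* stand-in */

static unsigned long read_stable_position(void)
{
	int loops = MAX_WAIT_LOOPS;
	unsigned long pos = hw_position();

	while (hw_active()) {
		if (hw_position() != pos)
			break;          /* a new transfer was set up */
		if (!--loops) {
			fprintf(stderr, "timeout waiting for idle\n");
			break;          /* bail out rather than spin forever */
		}
		/* cpu_relax() would go here in kernel code */
	}
	return pos;
}

int main(void)
{
	printf("pos = %#lx after %d polls\n", read_stable_position(), fake_loops);
	return 0;
}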
diff --git a/drivers/dma/ioat/dma.c b/drivers/dma/ioat/dma.c
index 1d5df2e..21539d5 100644
--- a/drivers/dma/ioat/dma.c
+++ b/drivers/dma/ioat/dma.c
@@ -861,32 +861,42 @@
return;
}
+ spin_lock_bh(&ioat_chan->cleanup_lock);
+
+ /* handle the case of no active descriptors */
+ if (!ioat_ring_active(ioat_chan)) {
+ spin_lock_bh(&ioat_chan->prep_lock);
+ check_active(ioat_chan);
+ spin_unlock_bh(&ioat_chan->prep_lock);
+ spin_unlock_bh(&ioat_chan->cleanup_lock);
+ return;
+ }
+
/* if we haven't made progress and we have already
* acknowledged a pending completion once, then be more
* forceful with a restart
*/
- spin_lock_bh(&ioat_chan->cleanup_lock);
if (ioat_cleanup_preamble(ioat_chan, &phys_complete))
__cleanup(ioat_chan, phys_complete);
else if (test_bit(IOAT_COMPLETION_ACK, &ioat_chan->state)) {
+ u32 chanerr;
+
+ chanerr = readl(ioat_chan->reg_base + IOAT_CHANERR_OFFSET);
+ dev_warn(to_dev(ioat_chan), "Restarting channel...\n");
+ dev_warn(to_dev(ioat_chan), "CHANSTS: %#Lx CHANERR: %#x\n",
+ status, chanerr);
+ dev_warn(to_dev(ioat_chan), "Active descriptors: %d\n",
+ ioat_ring_active(ioat_chan));
+
spin_lock_bh(&ioat_chan->prep_lock);
ioat_restart_channel(ioat_chan);
spin_unlock_bh(&ioat_chan->prep_lock);
spin_unlock_bh(&ioat_chan->cleanup_lock);
return;
- } else {
+ } else
set_bit(IOAT_COMPLETION_ACK, &ioat_chan->state);
- mod_timer(&ioat_chan->timer, jiffies + COMPLETION_TIMEOUT);
- }
-
- if (ioat_ring_active(ioat_chan))
- mod_timer(&ioat_chan->timer, jiffies + COMPLETION_TIMEOUT);
- else {
- spin_lock_bh(&ioat_chan->prep_lock);
- check_active(ioat_chan);
- spin_unlock_bh(&ioat_chan->prep_lock);
- }
+ mod_timer(&ioat_chan->timer, jiffies + COMPLETION_TIMEOUT);
spin_unlock_bh(&ioat_chan->cleanup_lock);
}
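
The ioat watchdog rework hoists the idle case to the top as a guard clause,
so the rest of the handler only reasons about a busy channel, and it dumps
CHANSTS/CHANERR diagnostics before the forceful restart. The control-flow
shape, reduced to a standalone sketch with stand-in predicates:

#include <stdio.h>
#include <stdbool.h>

static bool ring_active(void) { return true; }    /* stand-in */
static bool made_progress(void) { return false; } /* stand-in */
static bool acked_once;

static void watchdog(void)
{
	if (!ring_active()) {
		puts("idle: check_active() and return");
		return;                 /* early exit, no timer re-arm */
	}

	if (made_progress()) {
		puts("cleanup completed descriptors");
	} else if (acked_once) {
		/* dump diagnostics before the forceful restart */
		puts("no progress twice: log CHANERR, restart channel");
		return;
	} else {
		acked_once = true;      /* give it one more period */
	}
	puts("re-arm completion timer");
}

int main(void)
{
	watchdog();   /* first pass: ack and re-arm */
	watchdog();   /* second pass: restart path */
	return 0;
}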
diff --git a/drivers/firmware/efi/efivars.c b/drivers/firmware/efi/efivars.c
index 756eca8..10e6774 100644
--- a/drivers/firmware/efi/efivars.c
+++ b/drivers/firmware/efi/efivars.c
@@ -221,7 +221,7 @@
}
if ((attributes & ~EFI_VARIABLE_MASK) != 0 ||
- efivar_validate(name, data, size) == false) {
+ efivar_validate(vendor, name, data, size) == false) {
printk(KERN_ERR "efivars: Malformed variable content\n");
return -EINVAL;
}
@@ -447,7 +447,8 @@
}
if ((attributes & ~EFI_VARIABLE_MASK) != 0 ||
- efivar_validate(name, data, size) == false) {
+ efivar_validate(new_var->VendorGuid, name, data,
+ size) == false) {
printk(KERN_ERR "efivars: Malformed variable content\n");
return -EINVAL;
}
@@ -540,38 +541,30 @@
static int
efivar_create_sysfs_entry(struct efivar_entry *new_var)
{
- int i, short_name_size;
+ int short_name_size;
char *short_name;
- unsigned long variable_name_size;
- efi_char16_t *variable_name;
+ unsigned long utf8_name_size;
+ efi_char16_t *variable_name = new_var->var.VariableName;
int ret;
- variable_name = new_var->var.VariableName;
- variable_name_size = ucs2_strlen(variable_name) * sizeof(efi_char16_t);
-
/*
- * Length of the variable bytes in ASCII, plus the '-' separator,
+ * Length of the variable bytes in UTF8, plus the '-' separator,
* plus the GUID, plus trailing NUL
*/
- short_name_size = variable_name_size / sizeof(efi_char16_t)
- + 1 + EFI_VARIABLE_GUID_LEN + 1;
+ utf8_name_size = ucs2_utf8size(variable_name);
+ short_name_size = utf8_name_size + 1 + EFI_VARIABLE_GUID_LEN + 1;
- short_name = kzalloc(short_name_size, GFP_KERNEL);
-
+ short_name = kmalloc(short_name_size, GFP_KERNEL);
if (!short_name)
return -ENOMEM;
- /* Convert Unicode to normal chars (assume top bits are 0),
- ala UTF-8 */
- for (i=0; i < (int)(variable_name_size / sizeof(efi_char16_t)); i++) {
- short_name[i] = variable_name[i] & 0xFF;
- }
+ ucs2_as_utf8(short_name, variable_name, short_name_size);
+
/* This is ugly, but necessary to separate one vendor's
private variables from another's. */
-
- *(short_name + strlen(short_name)) = '-';
+ short_name[utf8_name_size] = '-';
efi_guid_to_str(&new_var->var.VendorGuid,
- short_name + strlen(short_name));
+ short_name + utf8_name_size + 1);
new_var->kobj.kset = efivars_kset;
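
The sysfs-name change replaces the old "take the low byte of each UCS-2
character" truncation with ucs2_utf8size()/ucs2_as_utf8(), which size the
buffer correctly for non-ASCII names. The sizing rule is worth seeing in
isolation - a 16-bit code point needs 1, 2, or 3 UTF-8 bytes. A hedged
sketch of just that arithmetic (not the kernel's lib/ucs2_string code):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static size_t ucs2_utf8_bytes(const uint16_t *s)
{
	size_t n = 0;

	for (; *s; s++) {
		if (*s < 0x80)
			n += 1;         /* ASCII */
		else if (*s < 0x800)
			n += 2;
		else
			n += 3;         /* rest of the BMP */
	}
	return n;
}

int main(void)
{
	const uint16_t name[] = { 'B', 'o', 'o', 't', 0x00e9, 0 }; /* "Booté" */
	/* 4 ASCII chars + one 2-byte char = 6 bytes */
	printf("utf8 size = %zu\n", ucs2_utf8_bytes(name));
	return 0;
}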
diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c
index 70a0fb1..7f2ea21 100644
--- a/drivers/firmware/efi/vars.c
+++ b/drivers/firmware/efi/vars.c
@@ -165,67 +165,133 @@
}
struct variable_validate {
+ efi_guid_t vendor;
char *name;
bool (*validate)(efi_char16_t *var_name, int match, u8 *data,
unsigned long len);
};
+/*
+ * This is the list of variables we need to validate, as well as the
+ * whitelist for what we think is safe not to default to immutable.
+ *
+ * If it has a validate() method that's not NULL, it'll go into the
+ * validation routine. If not, it is assumed valid, but still used for
+ * whitelisting.
+ *
+ * Note that it's sorted by {vendor,name}, but globbed names must come after
+ * any other name with the same prefix.
+ */
static const struct variable_validate variable_validate[] = {
- { "BootNext", validate_uint16 },
- { "BootOrder", validate_boot_order },
- { "DriverOrder", validate_boot_order },
- { "Boot*", validate_load_option },
- { "Driver*", validate_load_option },
- { "ConIn", validate_device_path },
- { "ConInDev", validate_device_path },
- { "ConOut", validate_device_path },
- { "ConOutDev", validate_device_path },
- { "ErrOut", validate_device_path },
- { "ErrOutDev", validate_device_path },
- { "Timeout", validate_uint16 },
- { "Lang", validate_ascii_string },
- { "PlatformLang", validate_ascii_string },
- { "", NULL },
+ { EFI_GLOBAL_VARIABLE_GUID, "BootNext", validate_uint16 },
+ { EFI_GLOBAL_VARIABLE_GUID, "BootOrder", validate_boot_order },
+ { EFI_GLOBAL_VARIABLE_GUID, "Boot*", validate_load_option },
+ { EFI_GLOBAL_VARIABLE_GUID, "DriverOrder", validate_boot_order },
+ { EFI_GLOBAL_VARIABLE_GUID, "Driver*", validate_load_option },
+ { EFI_GLOBAL_VARIABLE_GUID, "ConIn", validate_device_path },
+ { EFI_GLOBAL_VARIABLE_GUID, "ConInDev", validate_device_path },
+ { EFI_GLOBAL_VARIABLE_GUID, "ConOut", validate_device_path },
+ { EFI_GLOBAL_VARIABLE_GUID, "ConOutDev", validate_device_path },
+ { EFI_GLOBAL_VARIABLE_GUID, "ErrOut", validate_device_path },
+ { EFI_GLOBAL_VARIABLE_GUID, "ErrOutDev", validate_device_path },
+ { EFI_GLOBAL_VARIABLE_GUID, "Lang", validate_ascii_string },
+ { EFI_GLOBAL_VARIABLE_GUID, "OsIndications", NULL },
+ { EFI_GLOBAL_VARIABLE_GUID, "PlatformLang", validate_ascii_string },
+ { EFI_GLOBAL_VARIABLE_GUID, "Timeout", validate_uint16 },
+ { LINUX_EFI_CRASH_GUID, "*", NULL },
+ { NULL_GUID, "", NULL },
};
+static bool
+variable_matches(const char *var_name, size_t len, const char *match_name,
+ int *match)
+{
+ for (*match = 0; ; (*match)++) {
+ char c = match_name[*match];
+ char u = var_name[*match];
+
+ /* Wildcard in the matching name means we've matched */
+ if (c == '*')
+ return true;
+
+ /* Case sensitive match */
+ if (!c && *match == len)
+ return true;
+
+ if (c != u)
+ return false;
+
+ if (!c)
+ return true;
+ }
+ return true;
+}
+
bool
-efivar_validate(efi_char16_t *var_name, u8 *data, unsigned long len)
+efivar_validate(efi_guid_t vendor, efi_char16_t *var_name, u8 *data,
+ unsigned long data_size)
{
int i;
- u16 *unicode_name = var_name;
+ unsigned long utf8_size;
+ u8 *utf8_name;
- for (i = 0; variable_validate[i].validate != NULL; i++) {
+ utf8_size = ucs2_utf8size(var_name);
+ utf8_name = kmalloc(utf8_size + 1, GFP_KERNEL);
+ if (!utf8_name)
+ return false;
+
+ ucs2_as_utf8(utf8_name, var_name, utf8_size);
+ utf8_name[utf8_size] = '\0';
+
+ for (i = 0; variable_validate[i].name[0] != '\0'; i++) {
const char *name = variable_validate[i].name;
- int match;
+ int match = 0;
- for (match = 0; ; match++) {
- char c = name[match];
- u16 u = unicode_name[match];
+ if (efi_guidcmp(vendor, variable_validate[i].vendor))
+ continue;
- /* All special variables are plain ascii */
- if (u > 127)
- return true;
-
- /* Wildcard in the matching name means we've matched */
- if (c == '*')
- return variable_validate[i].validate(var_name,
- match, data, len);
-
- /* Case sensitive match */
- if (c != u)
+ if (variable_matches(utf8_name, utf8_size+1, name, &match)) {
+ if (variable_validate[i].validate == NULL)
break;
-
- /* Reached the end of the string while matching */
- if (!c)
- return variable_validate[i].validate(var_name,
- match, data, len);
+ kfree(utf8_name);
+ return variable_validate[i].validate(var_name, match,
+ data, data_size);
}
}
-
+ kfree(utf8_name);
return true;
}
EXPORT_SYMBOL_GPL(efivar_validate);
+bool
+efivar_variable_is_removable(efi_guid_t vendor, const char *var_name,
+ size_t len)
+{
+ int i;
+ bool found = false;
+ int match = 0;
+
+ /*
+ * Check if our variable is in the validated variables list
+ */
+ for (i = 0; variable_validate[i].name[0] != '\0'; i++) {
+ if (efi_guidcmp(variable_validate[i].vendor, vendor))
+ continue;
+
+ if (variable_matches(var_name, len,
+ variable_validate[i].name, &match)) {
+ found = true;
+ break;
+ }
+ }
+
+ /*
+ * If it's in our list, it is removable.
+ */
+ return found;
+}
+EXPORT_SYMBOL_GPL(efivar_variable_is_removable);
+
static efi_status_t
check_var_size(u32 attributes, unsigned long size)
{
@@ -852,7 +918,7 @@
*set = false;
- if (efivar_validate(name, data, *size) == false)
+ if (efivar_validate(*vendor, name, data, *size) == false)
return -EINVAL;
/*
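
variable_matches() above is a plain case-sensitive compare in which a '*'
in the pattern accepts any suffix. A standalone sketch with a usage demo
(simplified: it drops the match-offset out-parameter the kernel version
hands to the validator):

#include <stdio.h>
#include <stdbool.h>

static bool matches(const char *name, const char *pattern)
{
	for (;; name++, pattern++) {
		if (*pattern == '*')
			return true;    /* wildcard: prefix matched */
		if (*pattern != *name)
			return false;   /* case-sensitive mismatch */
		if (*pattern == '\0')
			return true;    /* both strings ended together */
	}
}

int main(void)
{
	printf("%d\n", matches("Boot0001", "Boot*"));   /* 1 */
	printf("%d\n", matches("BootOrder", "Boot*"));  /* 1 */
	printf("%d\n", matches("ConIn", "Boot*"));      /* 0 */
	printf("%d\n", matches("Timeout", "Timeout"));  /* 1 */
	return 0;
}

Note that "BootOrder" also matches "Boot*", which is exactly why the table
comment requires globbed names to come after any exact name sharing the
same prefix: the first match wins.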
diff --git a/drivers/gpio/gpio-altera.c b/drivers/gpio/gpio-altera.c
index 2aeaebd..3f87a03 100644
--- a/drivers/gpio/gpio-altera.c
+++ b/drivers/gpio/gpio-altera.c
@@ -312,8 +312,8 @@
handle_simple_irq, IRQ_TYPE_NONE);
if (ret) {
- dev_info(&pdev->dev, "could not add irqchip\n");
- return ret;
+ dev_err(&pdev->dev, "could not add irqchip\n");
+ goto teardown;
}
gpiochip_set_chained_irqchip(&altera_gc->mmchip.gc,
@@ -326,6 +326,7 @@
skip_irq:
return 0;
teardown:
+ of_mm_gpiochip_remove(&altera_gc->mmchip);
pr_err("%s: registration failed with status %d\n",
node->full_name, ret);
diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
index ec58f42..cd007a6 100644
--- a/drivers/gpio/gpio-davinci.c
+++ b/drivers/gpio/gpio-davinci.c
@@ -195,7 +195,7 @@
static int davinci_gpio_probe(struct platform_device *pdev)
{
int i, base;
- unsigned ngpio;
+ unsigned ngpio, nbank;
struct davinci_gpio_controller *chips;
struct davinci_gpio_platform_data *pdata;
struct davinci_gpio_regs __iomem *regs;
@@ -224,8 +224,9 @@
if (WARN_ON(ARCH_NR_GPIOS < ngpio))
ngpio = ARCH_NR_GPIOS;
+ nbank = DIV_ROUND_UP(ngpio, 32);
chips = devm_kzalloc(dev,
- ngpio * sizeof(struct davinci_gpio_controller),
+ nbank * sizeof(struct davinci_gpio_controller),
GFP_KERNEL);
if (!chips)
return -ENOMEM;
@@ -511,7 +512,7 @@
return irq;
}
- irq_domain = irq_domain_add_legacy(NULL, ngpio, irq, 0,
+ irq_domain = irq_domain_add_legacy(dev->of_node, ngpio, irq, 0,
&davinci_gpio_irq_ops,
chips);
if (!irq_domain) {
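
The davinci fix corrects an over-allocation: one controller struct covers a
bank of 32 GPIOs, so the array should scale with banks, not pins. A quick
illustration with DIV_ROUND_UP expanded, since this is plain userspace C:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned ngpio = 144;                       /* e.g. a DaVinci SoC */
	unsigned nbank = DIV_ROUND_UP(ngpio, 32);   /* 5 banks, not 144 */

	printf("allocate %u bank structs instead of %u\n", nbank, ngpio);
	return 0;
}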
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 82edf95..5e7770f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -87,6 +87,8 @@
extern int amdgpu_sched_hw_submission;
extern int amdgpu_enable_semaphores;
extern int amdgpu_powerplay;
+extern unsigned amdgpu_pcie_gen_cap;
+extern unsigned amdgpu_pcie_lane_cap;
#define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS 3000
#define AMDGPU_MAX_USEC_TIMEOUT 100000 /* 100 ms */
@@ -132,47 +134,6 @@
#define AMDGPU_RESET_VCE (1 << 13)
#define AMDGPU_RESET_VCE1 (1 << 14)
-/* CG block flags */
-#define AMDGPU_CG_BLOCK_GFX (1 << 0)
-#define AMDGPU_CG_BLOCK_MC (1 << 1)
-#define AMDGPU_CG_BLOCK_SDMA (1 << 2)
-#define AMDGPU_CG_BLOCK_UVD (1 << 3)
-#define AMDGPU_CG_BLOCK_VCE (1 << 4)
-#define AMDGPU_CG_BLOCK_HDP (1 << 5)
-#define AMDGPU_CG_BLOCK_BIF (1 << 6)
-
-/* CG flags */
-#define AMDGPU_CG_SUPPORT_GFX_MGCG (1 << 0)
-#define AMDGPU_CG_SUPPORT_GFX_MGLS (1 << 1)
-#define AMDGPU_CG_SUPPORT_GFX_CGCG (1 << 2)
-#define AMDGPU_CG_SUPPORT_GFX_CGLS (1 << 3)
-#define AMDGPU_CG_SUPPORT_GFX_CGTS (1 << 4)
-#define AMDGPU_CG_SUPPORT_GFX_CGTS_LS (1 << 5)
-#define AMDGPU_CG_SUPPORT_GFX_CP_LS (1 << 6)
-#define AMDGPU_CG_SUPPORT_GFX_RLC_LS (1 << 7)
-#define AMDGPU_CG_SUPPORT_MC_LS (1 << 8)
-#define AMDGPU_CG_SUPPORT_MC_MGCG (1 << 9)
-#define AMDGPU_CG_SUPPORT_SDMA_LS (1 << 10)
-#define AMDGPU_CG_SUPPORT_SDMA_MGCG (1 << 11)
-#define AMDGPU_CG_SUPPORT_BIF_LS (1 << 12)
-#define AMDGPU_CG_SUPPORT_UVD_MGCG (1 << 13)
-#define AMDGPU_CG_SUPPORT_VCE_MGCG (1 << 14)
-#define AMDGPU_CG_SUPPORT_HDP_LS (1 << 15)
-#define AMDGPU_CG_SUPPORT_HDP_MGCG (1 << 16)
-
-/* PG flags */
-#define AMDGPU_PG_SUPPORT_GFX_PG (1 << 0)
-#define AMDGPU_PG_SUPPORT_GFX_SMG (1 << 1)
-#define AMDGPU_PG_SUPPORT_GFX_DMG (1 << 2)
-#define AMDGPU_PG_SUPPORT_UVD (1 << 3)
-#define AMDGPU_PG_SUPPORT_VCE (1 << 4)
-#define AMDGPU_PG_SUPPORT_CP (1 << 5)
-#define AMDGPU_PG_SUPPORT_GDS (1 << 6)
-#define AMDGPU_PG_SUPPORT_RLC_SMU_HS (1 << 7)
-#define AMDGPU_PG_SUPPORT_SDMA (1 << 8)
-#define AMDGPU_PG_SUPPORT_ACP (1 << 9)
-#define AMDGPU_PG_SUPPORT_SAMU (1 << 10)
-
/* GFX current status */
#define AMDGPU_GFX_NORMAL_MODE 0x00000000L
#define AMDGPU_GFX_SAFE_MODE 0x00000001L
@@ -606,8 +567,6 @@
uint32_t align;
};
-struct amdgpu_sa_bo;
-
/* sub-allocation buffer */
struct amdgpu_sa_bo {
struct list_head olist;
@@ -2360,6 +2319,8 @@
int amdgpu_ttm_tt_set_userptr(struct ttm_tt *ttm, uint64_t addr,
uint32_t flags);
bool amdgpu_ttm_tt_has_userptr(struct ttm_tt *ttm);
+bool amdgpu_ttm_tt_affect_userptr(struct ttm_tt *ttm, unsigned long start,
+ unsigned long end);
bool amdgpu_ttm_tt_is_readonly(struct ttm_tt *ttm);
uint32_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device *adev, struct ttm_tt *ttm,
struct ttm_mem_reg *mem);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
index a081dda..7a4b101 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
@@ -795,6 +795,12 @@
case CGS_SYSTEM_INFO_PCIE_MLW:
sys_info->value = adev->pm.pcie_mlw_mask;
break;
+ case CGS_SYSTEM_INFO_CG_FLAGS:
+ sys_info->value = adev->cg_flags;
+ break;
+ case CGS_SYSTEM_INFO_PG_FLAGS:
+ sys_info->value = adev->pg_flags;
+ break;
default:
return -ENODEV;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 6553146..51bfc11 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1795,15 +1795,20 @@
}
/* post card */
- amdgpu_atom_asic_init(adev->mode_info.atom_context);
+ if (!amdgpu_card_posted(adev))
+ amdgpu_atom_asic_init(adev->mode_info.atom_context);
r = amdgpu_resume(adev);
+ if (r)
+ DRM_ERROR("amdgpu_resume failed (%d).\n", r);
amdgpu_fence_driver_resume(adev);
- r = amdgpu_ib_ring_tests(adev);
- if (r)
- DRM_ERROR("ib ring test failed (%d).\n", r);
+ if (resume) {
+ r = amdgpu_ib_ring_tests(adev);
+ if (r)
+ DRM_ERROR("ib ring test failed (%d).\n", r);
+ }
r = amdgpu_late_init(adev);
if (r)
@@ -1933,80 +1938,97 @@
return r;
}
+#define AMDGPU_DEFAULT_PCIE_GEN_MASK 0x30007 /* gen: chipset 1/2, asic 1/2/3 */
+#define AMDGPU_DEFAULT_PCIE_MLW_MASK 0x2f0000 /* 1/2/4/8/16 lanes */
+
void amdgpu_get_pcie_info(struct amdgpu_device *adev)
{
u32 mask;
int ret;
- if (pci_is_root_bus(adev->pdev->bus))
+ if (amdgpu_pcie_gen_cap)
+ adev->pm.pcie_gen_mask = amdgpu_pcie_gen_cap;
+
+ if (amdgpu_pcie_lane_cap)
+ adev->pm.pcie_mlw_mask = amdgpu_pcie_lane_cap;
+
+ /* covers APUs as well */
+ if (pci_is_root_bus(adev->pdev->bus)) {
+ if (adev->pm.pcie_gen_mask == 0)
+ adev->pm.pcie_gen_mask = AMDGPU_DEFAULT_PCIE_GEN_MASK;
+ if (adev->pm.pcie_mlw_mask == 0)
+ adev->pm.pcie_mlw_mask = AMDGPU_DEFAULT_PCIE_MLW_MASK;
return;
-
- if (amdgpu_pcie_gen2 == 0)
- return;
-
- if (adev->flags & AMD_IS_APU)
- return;
-
- ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
- if (!ret) {
- adev->pm.pcie_gen_mask = (CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1 |
- CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2 |
- CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3);
-
- if (mask & DRM_PCIE_SPEED_25)
- adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1;
- if (mask & DRM_PCIE_SPEED_50)
- adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2;
- if (mask & DRM_PCIE_SPEED_80)
- adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3;
}
- ret = drm_pcie_get_max_link_width(adev->ddev, &mask);
- if (!ret) {
- switch (mask) {
- case 32:
- adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X32 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
- break;
- case 16:
- adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
- break;
- case 12:
- adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
- break;
- case 8:
- adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
- break;
- case 4:
- adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
- break;
- case 2:
- adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
- CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
- break;
- case 1:
- adev->pm.pcie_mlw_mask = CAIL_PCIE_LINK_WIDTH_SUPPORT_X1;
- break;
- default:
- break;
+
+ if (adev->pm.pcie_gen_mask == 0) {
+ ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
+ if (!ret) {
+ adev->pm.pcie_gen_mask = (CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN1 |
+ CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN2 |
+ CAIL_ASIC_PCIE_LINK_SPEED_SUPPORT_GEN3);
+
+ if (mask & DRM_PCIE_SPEED_25)
+ adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN1;
+ if (mask & DRM_PCIE_SPEED_50)
+ adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2;
+ if (mask & DRM_PCIE_SPEED_80)
+ adev->pm.pcie_gen_mask |= CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3;
+ } else {
+ adev->pm.pcie_gen_mask = AMDGPU_DEFAULT_PCIE_GEN_MASK;
+ }
+ }
+ if (adev->pm.pcie_mlw_mask == 0) {
+ ret = drm_pcie_get_max_link_width(adev->ddev, &mask);
+ if (!ret) {
+ switch (mask) {
+ case 32:
+ adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X32 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+ break;
+ case 16:
+ adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X16 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+ break;
+ case 12:
+ adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X12 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+ break;
+ case 8:
+ adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X8 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+ break;
+ case 4:
+ adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X4 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+ break;
+ case 2:
+ adev->pm.pcie_mlw_mask = (CAIL_PCIE_LINK_WIDTH_SUPPORT_X2 |
+ CAIL_PCIE_LINK_WIDTH_SUPPORT_X1);
+ break;
+ case 1:
+ adev->pm.pcie_mlw_mask = CAIL_PCIE_LINK_WIDTH_SUPPORT_X1;
+ break;
+ default:
+ break;
+ }
+ } else {
+ adev->pm.pcie_mlw_mask = AMDGPU_DEFAULT_PCIE_MLW_MASK;
}
}
}
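
amdgpu_get_pcie_info() now implements a three-tier selection: an explicit
module-parameter override always wins, autodetection via the DRM helpers is
tried next, and a conservative default covers the detection-failure path so
the mask is never left at zero. The same structure, reduced to a standalone
sketch with hypothetical names:

#include <stdio.h>

#define DEFAULT_GEN_MASK 0x30007u

static unsigned override_gen_cap;               /* 0 = autodetect */

static int detect_gen_mask(unsigned *mask)
{
	*mask = 0;
	return -1;                              /* pretend detection failed */
}

static unsigned pick_gen_mask(void)
{
	unsigned mask;

	if (override_gen_cap)
		return override_gen_cap;        /* 1: user override */
	if (!detect_gen_mask(&mask))
		return mask;                    /* 2: hardware probe */
	return DEFAULT_GEN_MASK;                /* 3: safe fallback */
}

int main(void)
{
	printf("gen mask = %#x\n", pick_gen_mask());
	return 0;
}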
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index acd066d0..8297bc3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -72,8 +72,8 @@
struct drm_crtc *crtc = &amdgpuCrtc->base;
unsigned long flags;
- unsigned i;
- int vpos, hpos, stat, min_udelay;
+ unsigned i, repcnt = 4;
+ int vpos, hpos, stat, min_udelay = 0;
struct drm_vblank_crtc *vblank = &crtc->dev->vblank[work->crtc_id];
amdgpu_flip_wait_fence(adev, &work->excl);
@@ -96,7 +96,7 @@
* In practice this won't execute very often unless on very fast
* machines because the time window for this to happen is very small.
*/
- for (;;) {
+ while (amdgpuCrtc->enabled && repcnt--) {
/* GET_DISTANCE_TO_VBLANKSTART returns distance to real vblank
* start in hpos, and to the "fudged earlier" vblank start in
* vpos.
@@ -114,10 +114,22 @@
/* Sleep at least until estimated real start of hw vblank */
spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
min_udelay = (-hpos + 1) * max(vblank->linedur_ns / 1000, 5);
+ if (min_udelay > vblank->framedur_ns / 2000) {
+ /* Don't wait ridiculously long - something is wrong */
+ repcnt = 0;
+ break;
+ }
usleep_range(min_udelay, 2 * min_udelay);
spin_lock_irqsave(&crtc->dev->event_lock, flags);
};
+ if (!repcnt)
+ DRM_DEBUG_DRIVER("Delay problem on crtc %d: min_udelay %d, "
+ "framedur %d, linedur %d, stat %d, vpos %d, "
+ "hpos %d\n", work->crtc_id, min_udelay,
+ vblank->framedur_ns / 1000,
+ vblank->linedur_ns / 1000, stat, vpos, hpos);
+
/* do the flip (mmio) */
adev->mode_info.funcs->page_flip(adev, work->crtc_id, work->base);
/* set the flip status */
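
The flip-worker change turns an open-ended wait into a bounded one: at most
four retries, and a computed delay longer than half a frame is treated as a
sign the timing inputs are bogus rather than something to sleep through. A
small sketch of that shape with made-up numbers:

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	int repcnt = 4;
	int framedur_us = 16666;            /* ~60 Hz frame */
	int min_udelay = 40000;             /* deliberately bogus */
	bool gave_up = false;

	while (repcnt-- > 0) {
		if (min_udelay > framedur_us / 2) {
			gave_up = true;     /* timing inputs are bogus */
			break;
		}
		/* usleep_range(min_udelay, 2 * min_udelay) would go here */
	}
	if (gave_up)
		puts("delay longer than half a frame: log and stop waiting");
	return 0;
}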
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 9c1af89..9ef1db8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -83,6 +83,8 @@
int amdgpu_sched_hw_submission = 2;
int amdgpu_enable_semaphores = 0;
int amdgpu_powerplay = -1;
+unsigned amdgpu_pcie_gen_cap = 0;
+unsigned amdgpu_pcie_lane_cap = 0;
MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
@@ -170,6 +172,12 @@
module_param_named(powerplay, amdgpu_powerplay, int, 0444);
#endif
+MODULE_PARM_DESC(pcie_gen_cap, "PCIE Gen Caps (0: autodetect (default))");
+module_param_named(pcie_gen_cap, amdgpu_pcie_gen_cap, uint, 0444);
+
+MODULE_PARM_DESC(pcie_lane_cap, "PCIE Lane Caps (0: autodetect (default))");
+module_param_named(pcie_lane_cap, amdgpu_pcie_lane_cap, uint, 0444);
+
static struct pci_device_id pciidlist[] = {
#ifdef CONFIG_DRM_AMDGPU_CIK
/* Kaveri */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 7380f78..d20c2a8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -596,7 +596,8 @@
break;
}
ttm_eu_backoff_reservation(&ticket, &list);
- if (!r && !(args->flags & AMDGPU_VM_DELAY_UPDATE))
+ if (!r && !(args->flags & AMDGPU_VM_DELAY_UPDATE) &&
+ !amdgpu_vm_debug)
amdgpu_gem_va_update_vm(adev, bo_va, args->operation);
drm_gem_object_unreference_unlocked(gobj);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index b1969f2..d4e2780 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -142,7 +142,8 @@
list_for_each_entry(bo, &node->bos, mn_list) {
- if (!bo->tbo.ttm || bo->tbo.ttm->state != tt_bound)
+ if (!amdgpu_ttm_tt_affect_userptr(bo->tbo.ttm, start,
+ end))
continue;
r = amdgpu_bo_reserve(bo, true);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
index 7d8d84e..66855b6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
@@ -113,6 +113,10 @@
struct drm_device *ddev = dev_get_drvdata(dev);
struct amdgpu_device *adev = ddev->dev_private;
+ if ((adev->flags & AMD_IS_PX) &&
+ (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+ return snprintf(buf, PAGE_SIZE, "off\n");
+
if (adev->pp_enabled) {
enum amd_dpm_forced_level level;
@@ -140,6 +144,11 @@
enum amdgpu_dpm_forced_level level;
int ret = 0;
+ /* Can't force performance level when the card is off */
+ if ((adev->flags & AMD_IS_PX) &&
+ (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+ return -EINVAL;
+
if (strncmp("low", buf, strlen("low")) == 0) {
level = AMDGPU_DPM_FORCED_LEVEL_LOW;
} else if (strncmp("high", buf, strlen("high")) == 0) {
@@ -157,6 +166,7 @@
mutex_lock(&adev->pm.mutex);
if (adev->pm.dpm.thermal_active) {
count = -EINVAL;
+ mutex_unlock(&adev->pm.mutex);
goto fail;
}
ret = amdgpu_dpm_force_performance_level(adev, level);
@@ -167,8 +177,6 @@
mutex_unlock(&adev->pm.mutex);
}
fail:
- mutex_unlock(&adev->pm.mutex);
-
return count;
}
@@ -182,8 +190,14 @@
char *buf)
{
struct amdgpu_device *adev = dev_get_drvdata(dev);
+ struct drm_device *ddev = adev->ddev;
int temp;
+ /* Can't get temperature when the card is off */
+ if ((adev->flags & AMD_IS_PX) &&
+ (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
+ return -EINVAL;
+
if (!adev->pp_enabled && !adev->pm.funcs->get_temperature)
temp = 0;
else
@@ -634,8 +648,6 @@
/* update display watermarks based on new power state */
amdgpu_display_bandwidth_update(adev);
- /* update displays */
- amdgpu_dpm_display_configuration_changed(adev);
adev->pm.dpm.current_active_crtcs = adev->pm.dpm.new_active_crtcs;
adev->pm.dpm.current_active_crtc_count = adev->pm.dpm.new_active_crtc_count;
@@ -655,6 +667,9 @@
amdgpu_dpm_post_set_power_state(adev);
+ /* update displays */
+ amdgpu_dpm_display_configuration_changed(adev);
+
if (adev->pm.funcs->force_performance_level) {
if (adev->pm.dpm.thermal_active) {
enum amdgpu_dpm_forced_level level = adev->pm.dpm.forced_level;
@@ -847,12 +862,16 @@
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct amdgpu_device *adev = dev->dev_private;
+ struct drm_device *ddev = adev->ddev;
if (!adev->pm.dpm_enabled) {
seq_printf(m, "dpm not enabled\n");
return 0;
}
- if (adev->pp_enabled) {
+ if ((adev->flags & AMD_IS_PX) &&
+ (ddev->switch_power_state != DRM_SWITCH_POWER_ON)) {
+ seq_printf(m, "PX asic powered off\n");
+ } else if (adev->pp_enabled) {
amdgpu_dpm_debugfs_print_current_performance_level(adev, m);
} else {
mutex_lock(&adev->pm.mutex);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c
index 8b88edb..ca72a2e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c
@@ -354,12 +354,15 @@
for (i = 0, count = 0; i < AMDGPU_MAX_RINGS; ++i)
if (fences[i])
- fences[count++] = fences[i];
+ fences[count++] = fence_get(fences[i]);
if (count) {
spin_unlock(&sa_manager->wq.lock);
t = fence_wait_any_timeout(fences, count, false,
MAX_SCHEDULE_TIMEOUT);
+ for (i = 0; i < count; ++i)
+ fence_put(fences[i]);
+
r = (t > 0) ? 0 : t;
spin_lock(&sa_manager->wq.lock);
} else {
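
The amdgpu_sa fix is the classic reference-across-unlock pattern: take a
reference on every fence before dropping the lock to wait, and put it once
the wait returns, so nothing can free the fences underneath the waiter. A
userspace sketch with toy refcount helpers in place of fence_get/fence_put:

#include <stdio.h>

struct fence { int refcount; };

static struct fence *fence_get(struct fence *f) { f->refcount++; return f; }
static void fence_put(struct fence *f) { f->refcount--; }

int main(void)
{
	struct fence f = { .refcount = 1 };
	struct fence *held[1];
	int count = 0;

	/* under the lock: collect and reference */
	held[count++] = fence_get(&f);

	/* lock dropped: refcount keeps the fence alive during the wait */
	printf("waiting, refcount = %d\n", f.refcount);   /* 2 */

	for (int i = 0; i < count; i++)
		fence_put(held[i]);
	printf("done, refcount = %d\n", f.refcount);      /* 1 */
	return 0;
}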
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 55cf05e..1cbb16e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -712,7 +712,7 @@
0, PAGE_SIZE,
PCI_DMA_BIDIRECTIONAL);
if (pci_dma_mapping_error(adev->pdev, gtt->ttm.dma_address[i])) {
- while (--i) {
+ while (i--) {
pci_unmap_page(adev->pdev, gtt->ttm.dma_address[i],
PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
gtt->ttm.dma_address[i] = 0;
@@ -783,6 +783,25 @@
return !!gtt->userptr;
}
+bool amdgpu_ttm_tt_affect_userptr(struct ttm_tt *ttm, unsigned long start,
+ unsigned long end)
+{
+ struct amdgpu_ttm_tt *gtt = (void *)ttm;
+ unsigned long size;
+
+ if (gtt == NULL)
+ return false;
+
+ if (gtt->ttm.ttm.state != tt_bound || !gtt->userptr)
+ return false;
+
+ size = (unsigned long)gtt->ttm.ttm.num_pages * PAGE_SIZE;
+ if (gtt->userptr > end || gtt->userptr + size <= start)
+ return false;
+
+ return true;
+}
+
bool amdgpu_ttm_tt_is_readonly(struct ttm_tt *ttm)
{
struct amdgpu_ttm_tt *gtt = (void *)ttm;
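
Two small idioms from the ttm hunks, in standalone form. The unwind loop
"while (i--)" visits indices i-1 down to 0, where the earlier "while (--i)"
variant missed index 0; and amdgpu_ttm_tt_affect_userptr() is an interval
overlap test that rejects only ranges lying entirely before or entirely
after the buffer:

#include <stdio.h>
#include <stdbool.h>

static bool ranges_overlap(unsigned long buf_start, unsigned long buf_size,
			   unsigned long start, unsigned long end)
{
	if (buf_start > end || buf_start + buf_size <= start)
		return false;       /* disjoint on one side or the other */
	return true;
}

int main(void)
{
	int i = 3;                  /* mapping of entry 3 failed */

	while (i--)                 /* unmaps 2, 1, 0 */
		printf("unmap entry %d\n", i);

	printf("overlap: %d\n", ranges_overlap(0x1000, 0x1000, 0x1800, 0x1900));
	printf("overlap: %d\n", ranges_overlap(0x1000, 0x1000, 0x3000, 0x3100));
	return 0;
}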
diff --git a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
index 8b4731d..474ca02 100644
--- a/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/ci_dpm.c
@@ -31,6 +31,7 @@
#include "ci_dpm.h"
#include "gfx_v7_0.h"
#include "atom.h"
+#include "amd_pcie.h"
#include <linux/seq_file.h>
#include "smu/smu_7_0_1_d.h"
@@ -5835,18 +5836,16 @@
u8 frev, crev;
struct ci_power_info *pi;
int ret;
- u32 mask;
pi = kzalloc(sizeof(struct ci_power_info), GFP_KERNEL);
if (pi == NULL)
return -ENOMEM;
adev->pm.dpm.priv = pi;
- ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
- if (ret)
- pi->sys_pcie_mask = 0;
- else
- pi->sys_pcie_mask = mask;
+ pi->sys_pcie_mask =
+ (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_MASK) >>
+ CAIL_PCIE_LINK_SPEED_SUPPORT_SHIFT;
+
pi->force_pcie_gen = AMDGPU_PCIE_GEN_INVALID;
pi->pcie_gen_performance.max = AMDGPU_PCIE_GEN1;
diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
index fd9c958..155965e 100644
--- a/drivers/gpu/drm/amd/amdgpu/cik.c
+++ b/drivers/gpu/drm/amd/amdgpu/cik.c
@@ -1762,6 +1762,9 @@
if (amdgpu_aspm == 0)
return;
+ if (pci_is_root_bus(adev->pdev->bus))
+ return;
+
/* XXX double check APUs */
if (adev->flags & AMD_IS_APU)
return;
@@ -2332,72 +2335,72 @@
switch (adev->asic_type) {
case CHIP_BONAIRE:
adev->cg_flags =
- AMDGPU_CG_SUPPORT_GFX_MGCG |
- AMDGPU_CG_SUPPORT_GFX_MGLS |
- /*AMDGPU_CG_SUPPORT_GFX_CGCG |*/
- AMDGPU_CG_SUPPORT_GFX_CGLS |
- AMDGPU_CG_SUPPORT_GFX_CGTS |
- AMDGPU_CG_SUPPORT_GFX_CGTS_LS |
- AMDGPU_CG_SUPPORT_GFX_CP_LS |
- AMDGPU_CG_SUPPORT_MC_LS |
- AMDGPU_CG_SUPPORT_MC_MGCG |
- AMDGPU_CG_SUPPORT_SDMA_MGCG |
- AMDGPU_CG_SUPPORT_SDMA_LS |
- AMDGPU_CG_SUPPORT_BIF_LS |
- AMDGPU_CG_SUPPORT_VCE_MGCG |
- AMDGPU_CG_SUPPORT_UVD_MGCG |
- AMDGPU_CG_SUPPORT_HDP_LS |
- AMDGPU_CG_SUPPORT_HDP_MGCG;
+ AMD_CG_SUPPORT_GFX_MGCG |
+ AMD_CG_SUPPORT_GFX_MGLS |
+ /*AMD_CG_SUPPORT_GFX_CGCG |*/
+ AMD_CG_SUPPORT_GFX_CGLS |
+ AMD_CG_SUPPORT_GFX_CGTS |
+ AMD_CG_SUPPORT_GFX_CGTS_LS |
+ AMD_CG_SUPPORT_GFX_CP_LS |
+ AMD_CG_SUPPORT_MC_LS |
+ AMD_CG_SUPPORT_MC_MGCG |
+ AMD_CG_SUPPORT_SDMA_MGCG |
+ AMD_CG_SUPPORT_SDMA_LS |
+ AMD_CG_SUPPORT_BIF_LS |
+ AMD_CG_SUPPORT_VCE_MGCG |
+ AMD_CG_SUPPORT_UVD_MGCG |
+ AMD_CG_SUPPORT_HDP_LS |
+ AMD_CG_SUPPORT_HDP_MGCG;
adev->pg_flags = 0;
adev->external_rev_id = adev->rev_id + 0x14;
break;
case CHIP_HAWAII:
adev->cg_flags =
- AMDGPU_CG_SUPPORT_GFX_MGCG |
- AMDGPU_CG_SUPPORT_GFX_MGLS |
- /*AMDGPU_CG_SUPPORT_GFX_CGCG |*/
- AMDGPU_CG_SUPPORT_GFX_CGLS |
- AMDGPU_CG_SUPPORT_GFX_CGTS |
- AMDGPU_CG_SUPPORT_GFX_CP_LS |
- AMDGPU_CG_SUPPORT_MC_LS |
- AMDGPU_CG_SUPPORT_MC_MGCG |
- AMDGPU_CG_SUPPORT_SDMA_MGCG |
- AMDGPU_CG_SUPPORT_SDMA_LS |
- AMDGPU_CG_SUPPORT_BIF_LS |
- AMDGPU_CG_SUPPORT_VCE_MGCG |
- AMDGPU_CG_SUPPORT_UVD_MGCG |
- AMDGPU_CG_SUPPORT_HDP_LS |
- AMDGPU_CG_SUPPORT_HDP_MGCG;
+ AMD_CG_SUPPORT_GFX_MGCG |
+ AMD_CG_SUPPORT_GFX_MGLS |
+ /*AMD_CG_SUPPORT_GFX_CGCG |*/
+ AMD_CG_SUPPORT_GFX_CGLS |
+ AMD_CG_SUPPORT_GFX_CGTS |
+ AMD_CG_SUPPORT_GFX_CP_LS |
+ AMD_CG_SUPPORT_MC_LS |
+ AMD_CG_SUPPORT_MC_MGCG |
+ AMD_CG_SUPPORT_SDMA_MGCG |
+ AMD_CG_SUPPORT_SDMA_LS |
+ AMD_CG_SUPPORT_BIF_LS |
+ AMD_CG_SUPPORT_VCE_MGCG |
+ AMD_CG_SUPPORT_UVD_MGCG |
+ AMD_CG_SUPPORT_HDP_LS |
+ AMD_CG_SUPPORT_HDP_MGCG;
adev->pg_flags = 0;
adev->external_rev_id = 0x28;
break;
case CHIP_KAVERI:
adev->cg_flags =
- AMDGPU_CG_SUPPORT_GFX_MGCG |
- AMDGPU_CG_SUPPORT_GFX_MGLS |
- /*AMDGPU_CG_SUPPORT_GFX_CGCG |*/
- AMDGPU_CG_SUPPORT_GFX_CGLS |
- AMDGPU_CG_SUPPORT_GFX_CGTS |
- AMDGPU_CG_SUPPORT_GFX_CGTS_LS |
- AMDGPU_CG_SUPPORT_GFX_CP_LS |
- AMDGPU_CG_SUPPORT_SDMA_MGCG |
- AMDGPU_CG_SUPPORT_SDMA_LS |
- AMDGPU_CG_SUPPORT_BIF_LS |
- AMDGPU_CG_SUPPORT_VCE_MGCG |
- AMDGPU_CG_SUPPORT_UVD_MGCG |
- AMDGPU_CG_SUPPORT_HDP_LS |
- AMDGPU_CG_SUPPORT_HDP_MGCG;
+ AMD_CG_SUPPORT_GFX_MGCG |
+ AMD_CG_SUPPORT_GFX_MGLS |
+ /*AMD_CG_SUPPORT_GFX_CGCG |*/
+ AMD_CG_SUPPORT_GFX_CGLS |
+ AMD_CG_SUPPORT_GFX_CGTS |
+ AMD_CG_SUPPORT_GFX_CGTS_LS |
+ AMD_CG_SUPPORT_GFX_CP_LS |
+ AMD_CG_SUPPORT_SDMA_MGCG |
+ AMD_CG_SUPPORT_SDMA_LS |
+ AMD_CG_SUPPORT_BIF_LS |
+ AMD_CG_SUPPORT_VCE_MGCG |
+ AMD_CG_SUPPORT_UVD_MGCG |
+ AMD_CG_SUPPORT_HDP_LS |
+ AMD_CG_SUPPORT_HDP_MGCG;
adev->pg_flags =
- /*AMDGPU_PG_SUPPORT_GFX_PG |
- AMDGPU_PG_SUPPORT_GFX_SMG |
- AMDGPU_PG_SUPPORT_GFX_DMG |*/
- AMDGPU_PG_SUPPORT_UVD |
- /*AMDGPU_PG_SUPPORT_VCE |
- AMDGPU_PG_SUPPORT_CP |
- AMDGPU_PG_SUPPORT_GDS |
- AMDGPU_PG_SUPPORT_RLC_SMU_HS |
- AMDGPU_PG_SUPPORT_ACP |
- AMDGPU_PG_SUPPORT_SAMU |*/
+ /*AMD_PG_SUPPORT_GFX_PG |
+ AMD_PG_SUPPORT_GFX_SMG |
+ AMD_PG_SUPPORT_GFX_DMG |*/
+ AMD_PG_SUPPORT_UVD |
+ /*AMD_PG_SUPPORT_VCE |
+ AMD_PG_SUPPORT_CP |
+ AMD_PG_SUPPORT_GDS |
+ AMD_PG_SUPPORT_RLC_SMU_HS |
+ AMD_PG_SUPPORT_ACP |
+ AMD_PG_SUPPORT_SAMU |*/
0;
if (adev->pdev->device == 0x1312 ||
adev->pdev->device == 0x1316 ||
@@ -2409,29 +2412,29 @@
case CHIP_KABINI:
case CHIP_MULLINS:
adev->cg_flags =
- AMDGPU_CG_SUPPORT_GFX_MGCG |
- AMDGPU_CG_SUPPORT_GFX_MGLS |
- /*AMDGPU_CG_SUPPORT_GFX_CGCG |*/
- AMDGPU_CG_SUPPORT_GFX_CGLS |
- AMDGPU_CG_SUPPORT_GFX_CGTS |
- AMDGPU_CG_SUPPORT_GFX_CGTS_LS |
- AMDGPU_CG_SUPPORT_GFX_CP_LS |
- AMDGPU_CG_SUPPORT_SDMA_MGCG |
- AMDGPU_CG_SUPPORT_SDMA_LS |
- AMDGPU_CG_SUPPORT_BIF_LS |
- AMDGPU_CG_SUPPORT_VCE_MGCG |
- AMDGPU_CG_SUPPORT_UVD_MGCG |
- AMDGPU_CG_SUPPORT_HDP_LS |
- AMDGPU_CG_SUPPORT_HDP_MGCG;
+ AMD_CG_SUPPORT_GFX_MGCG |
+ AMD_CG_SUPPORT_GFX_MGLS |
+ /*AMD_CG_SUPPORT_GFX_CGCG |*/
+ AMD_CG_SUPPORT_GFX_CGLS |
+ AMD_CG_SUPPORT_GFX_CGTS |
+ AMD_CG_SUPPORT_GFX_CGTS_LS |
+ AMD_CG_SUPPORT_GFX_CP_LS |
+ AMD_CG_SUPPORT_SDMA_MGCG |
+ AMD_CG_SUPPORT_SDMA_LS |
+ AMD_CG_SUPPORT_BIF_LS |
+ AMD_CG_SUPPORT_VCE_MGCG |
+ AMD_CG_SUPPORT_UVD_MGCG |
+ AMD_CG_SUPPORT_HDP_LS |
+ AMD_CG_SUPPORT_HDP_MGCG;
adev->pg_flags =
- /*AMDGPU_PG_SUPPORT_GFX_PG |
- AMDGPU_PG_SUPPORT_GFX_SMG | */
- AMDGPU_PG_SUPPORT_UVD |
- /*AMDGPU_PG_SUPPORT_VCE |
- AMDGPU_PG_SUPPORT_CP |
- AMDGPU_PG_SUPPORT_GDS |
- AMDGPU_PG_SUPPORT_RLC_SMU_HS |
- AMDGPU_PG_SUPPORT_SAMU |*/
+ /*AMD_PG_SUPPORT_GFX_PG |
+ AMD_PG_SUPPORT_GFX_SMG | */
+ AMD_PG_SUPPORT_UVD |
+ /*AMD_PG_SUPPORT_VCE |
+ AMD_PG_SUPPORT_CP |
+ AMD_PG_SUPPORT_GDS |
+ AMD_PG_SUPPORT_RLC_SMU_HS |
+ AMD_PG_SUPPORT_SAMU |*/
0;
if (adev->asic_type == CHIP_KABINI) {
if (adev->rev_id == 0)
diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
index 5f712ce..c55ecf0 100644
--- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
@@ -885,7 +885,7 @@
{
u32 orig, data;
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_SDMA_MGCG)) {
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_SDMA_MGCG)) {
WREG32(mmSDMA0_CLK_CTRL + SDMA0_REGISTER_OFFSET, 0x00000100);
WREG32(mmSDMA0_CLK_CTRL + SDMA1_REGISTER_OFFSET, 0x00000100);
} else {
@@ -906,7 +906,7 @@
{
u32 orig, data;
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_SDMA_LS)) {
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_SDMA_LS)) {
orig = data = RREG32(mmSDMA0_POWER_CNTL + SDMA0_REGISTER_OFFSET);
data |= 0x100;
if (orig != data)
diff --git a/drivers/gpu/drm/amd/amdgpu/cz_dpm.c b/drivers/gpu/drm/amd/amdgpu/cz_dpm.c
index 4dd17f2..9056355 100644
--- a/drivers/gpu/drm/amd/amdgpu/cz_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/cz_dpm.c
@@ -445,13 +445,13 @@
pi->gfx_pg_threshold = 500;
pi->caps_fps = true;
/* uvd */
- pi->caps_uvd_pg = (adev->pg_flags & AMDGPU_PG_SUPPORT_UVD) ? true : false;
+ pi->caps_uvd_pg = (adev->pg_flags & AMD_PG_SUPPORT_UVD) ? true : false;
pi->caps_uvd_dpm = true;
/* vce */
- pi->caps_vce_pg = (adev->pg_flags & AMDGPU_PG_SUPPORT_VCE) ? true : false;
+ pi->caps_vce_pg = (adev->pg_flags & AMD_PG_SUPPORT_VCE) ? true : false;
pi->caps_vce_dpm = true;
/* acp */
- pi->caps_acp_pg = (adev->pg_flags & AMDGPU_PG_SUPPORT_ACP) ? true : false;
+ pi->caps_acp_pg = (adev->pg_flags & AMD_PG_SUPPORT_ACP) ? true : false;
pi->caps_acp_dpm = true;
pi->caps_stable_power_state = false;
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
index 6c76139..7732059 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
@@ -4109,7 +4109,7 @@
orig = data = RREG32(mmRLC_CGCG_CGLS_CTRL);
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_GFX_CGCG)) {
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)) {
gfx_v7_0_enable_gui_idle_interrupt(adev, true);
tmp = gfx_v7_0_halt_rlc(adev);
@@ -4147,9 +4147,9 @@
{
u32 data, orig, tmp = 0;
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_GFX_MGCG)) {
- if (adev->cg_flags & AMDGPU_CG_SUPPORT_GFX_MGLS) {
- if (adev->cg_flags & AMDGPU_CG_SUPPORT_GFX_CP_LS) {
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_MGCG)) {
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_MGLS) {
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CP_LS) {
orig = data = RREG32(mmCP_MEM_SLP_CNTL);
data |= CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK;
if (orig != data)
@@ -4176,14 +4176,14 @@
gfx_v7_0_update_rlc(adev, tmp);
- if (adev->cg_flags & AMDGPU_CG_SUPPORT_GFX_CGTS) {
+ if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGTS) {
orig = data = RREG32(mmCGTS_SM_CTRL_REG);
data &= ~CGTS_SM_CTRL_REG__SM_MODE_MASK;
data |= (0x2 << CGTS_SM_CTRL_REG__SM_MODE__SHIFT);
data |= CGTS_SM_CTRL_REG__SM_MODE_ENABLE_MASK;
data &= ~CGTS_SM_CTRL_REG__OVERRIDE_MASK;
- if ((adev->cg_flags & AMDGPU_CG_SUPPORT_GFX_MGLS) &&
- (adev->cg_flags & AMDGPU_CG_SUPPORT_GFX_CGTS_LS))
+ if ((adev->cg_flags & AMD_CG_SUPPORT_GFX_MGLS) &&
+ (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGTS_LS))
data &= ~CGTS_SM_CTRL_REG__LS_OVERRIDE_MASK;
data &= ~CGTS_SM_CTRL_REG__ON_MONITOR_ADD_MASK;
data |= CGTS_SM_CTRL_REG__ON_MONITOR_ADD_EN_MASK;
@@ -4249,7 +4249,7 @@
u32 data, orig;
orig = data = RREG32(mmRLC_PG_CNTL);
- if (enable && (adev->pg_flags & AMDGPU_PG_SUPPORT_RLC_SMU_HS))
+ if (enable && (adev->pg_flags & AMD_PG_SUPPORT_RLC_SMU_HS))
data |= RLC_PG_CNTL__SMU_CLK_SLOWDOWN_ON_PU_ENABLE_MASK;
else
data &= ~RLC_PG_CNTL__SMU_CLK_SLOWDOWN_ON_PU_ENABLE_MASK;
@@ -4263,7 +4263,7 @@
u32 data, orig;
orig = data = RREG32(mmRLC_PG_CNTL);
- if (enable && (adev->pg_flags & AMDGPU_PG_SUPPORT_RLC_SMU_HS))
+ if (enable && (adev->pg_flags & AMD_PG_SUPPORT_RLC_SMU_HS))
data |= RLC_PG_CNTL__SMU_CLK_SLOWDOWN_ON_PD_ENABLE_MASK;
else
data &= ~RLC_PG_CNTL__SMU_CLK_SLOWDOWN_ON_PD_ENABLE_MASK;
@@ -4276,7 +4276,7 @@
u32 data, orig;
orig = data = RREG32(mmRLC_PG_CNTL);
- if (enable && (adev->pg_flags & AMDGPU_PG_SUPPORT_CP))
+ if (enable && (adev->pg_flags & AMD_PG_SUPPORT_CP))
data &= ~0x8000;
else
data |= 0x8000;
@@ -4289,7 +4289,7 @@
u32 data, orig;
orig = data = RREG32(mmRLC_PG_CNTL);
- if (enable && (adev->pg_flags & AMDGPU_PG_SUPPORT_GDS))
+ if (enable && (adev->pg_flags & AMD_PG_SUPPORT_GDS))
data &= ~0x2000;
else
data |= 0x2000;
@@ -4370,7 +4370,7 @@
{
u32 data, orig;
- if (enable && (adev->pg_flags & AMDGPU_PG_SUPPORT_GFX_PG)) {
+ if (enable && (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG)) {
orig = data = RREG32(mmRLC_PG_CNTL);
data |= RLC_PG_CNTL__GFX_POWER_GATING_ENABLE_MASK;
if (orig != data)
@@ -4442,7 +4442,7 @@
u32 data, orig;
orig = data = RREG32(mmRLC_PG_CNTL);
- if (enable && (adev->pg_flags & AMDGPU_PG_SUPPORT_GFX_SMG))
+ if (enable && (adev->pg_flags & AMD_PG_SUPPORT_GFX_SMG))
data |= RLC_PG_CNTL__STATIC_PER_CU_PG_ENABLE_MASK;
else
data &= ~RLC_PG_CNTL__STATIC_PER_CU_PG_ENABLE_MASK;
@@ -4456,7 +4456,7 @@
u32 data, orig;
orig = data = RREG32(mmRLC_PG_CNTL);
- if (enable && (adev->pg_flags & AMDGPU_PG_SUPPORT_GFX_DMG))
+ if (enable && (adev->pg_flags & AMD_PG_SUPPORT_GFX_DMG))
data |= RLC_PG_CNTL__DYN_PER_CU_PG_ENABLE_MASK;
else
data &= ~RLC_PG_CNTL__DYN_PER_CU_PG_ENABLE_MASK;
@@ -4623,15 +4623,15 @@
static void gfx_v7_0_init_pg(struct amdgpu_device *adev)
{
- if (adev->pg_flags & (AMDGPU_PG_SUPPORT_GFX_PG |
- AMDGPU_PG_SUPPORT_GFX_SMG |
- AMDGPU_PG_SUPPORT_GFX_DMG |
- AMDGPU_PG_SUPPORT_CP |
- AMDGPU_PG_SUPPORT_GDS |
- AMDGPU_PG_SUPPORT_RLC_SMU_HS)) {
+ if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_PG |
+ AMD_PG_SUPPORT_GFX_SMG |
+ AMD_PG_SUPPORT_GFX_DMG |
+ AMD_PG_SUPPORT_CP |
+ AMD_PG_SUPPORT_GDS |
+ AMD_PG_SUPPORT_RLC_SMU_HS)) {
gfx_v7_0_enable_sclk_slowdown_on_pu(adev, true);
gfx_v7_0_enable_sclk_slowdown_on_pd(adev, true);
- if (adev->pg_flags & AMDGPU_PG_SUPPORT_GFX_PG) {
+ if (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) {
gfx_v7_0_init_gfx_cgpg(adev);
gfx_v7_0_enable_cp_pg(adev, true);
gfx_v7_0_enable_gds_pg(adev, true);
@@ -4643,14 +4643,14 @@
static void gfx_v7_0_fini_pg(struct amdgpu_device *adev)
{
- if (adev->pg_flags & (AMDGPU_PG_SUPPORT_GFX_PG |
- AMDGPU_PG_SUPPORT_GFX_SMG |
- AMDGPU_PG_SUPPORT_GFX_DMG |
- AMDGPU_PG_SUPPORT_CP |
- AMDGPU_PG_SUPPORT_GDS |
- AMDGPU_PG_SUPPORT_RLC_SMU_HS)) {
+ if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_PG |
+ AMD_PG_SUPPORT_GFX_SMG |
+ AMD_PG_SUPPORT_GFX_DMG |
+ AMD_PG_SUPPORT_CP |
+ AMD_PG_SUPPORT_GDS |
+ AMD_PG_SUPPORT_RLC_SMU_HS)) {
gfx_v7_0_update_gfx_pg(adev, false);
- if (adev->pg_flags & AMDGPU_PG_SUPPORT_GFX_PG) {
+ if (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) {
gfx_v7_0_enable_cp_pg(adev, false);
gfx_v7_0_enable_gds_pg(adev, false);
}
@@ -5527,14 +5527,14 @@
if (state == AMD_PG_STATE_GATE)
gate = true;
- if (adev->pg_flags & (AMDGPU_PG_SUPPORT_GFX_PG |
- AMDGPU_PG_SUPPORT_GFX_SMG |
- AMDGPU_PG_SUPPORT_GFX_DMG |
- AMDGPU_PG_SUPPORT_CP |
- AMDGPU_PG_SUPPORT_GDS |
- AMDGPU_PG_SUPPORT_RLC_SMU_HS)) {
+ if (adev->pg_flags & (AMD_PG_SUPPORT_GFX_PG |
+ AMD_PG_SUPPORT_GFX_SMG |
+ AMD_PG_SUPPORT_GFX_DMG |
+ AMD_PG_SUPPORT_CP |
+ AMD_PG_SUPPORT_GDS |
+ AMD_PG_SUPPORT_RLC_SMU_HS)) {
gfx_v7_0_update_gfx_pg(adev, gate);
- if (adev->pg_flags & AMDGPU_PG_SUPPORT_GFX_PG) {
+ if (adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) {
gfx_v7_0_enable_cp_pg(adev, gate);
gfx_v7_0_enable_gds_pg(adev, gate);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 8f8ec37..1c40bd9 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -4995,7 +4995,7 @@
case AMDGPU_IRQ_STATE_ENABLE:
cp_int_cntl = RREG32(mmCP_INT_CNTL_RING0);
cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
- PRIV_REG_INT_ENABLE, 0);
+ PRIV_REG_INT_ENABLE, 1);
WREG32(mmCP_INT_CNTL_RING0, cp_int_cntl);
break;
default:
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
index 8aa2991..b806079 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v7_0.c
@@ -792,7 +792,7 @@
for (i = 0; i < ARRAY_SIZE(mc_cg_registers); i++) {
orig = data = RREG32(mc_cg_registers[i]);
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_MC_LS))
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS))
data |= mc_cg_ls_en[i];
else
data &= ~mc_cg_ls_en[i];
@@ -809,7 +809,7 @@
for (i = 0; i < ARRAY_SIZE(mc_cg_registers); i++) {
orig = data = RREG32(mc_cg_registers[i]);
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_MC_MGCG))
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
data |= mc_cg_en[i];
else
data &= ~mc_cg_en[i];
@@ -825,7 +825,7 @@
orig = data = RREG32_PCIE(ixPCIE_CNTL2);
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_BIF_LS)) {
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_BIF_LS)) {
data = REG_SET_FIELD(data, PCIE_CNTL2, SLV_MEM_LS_EN, 1);
data = REG_SET_FIELD(data, PCIE_CNTL2, MST_MEM_LS_EN, 1);
data = REG_SET_FIELD(data, PCIE_CNTL2, REPLAY_MEM_LS_EN, 1);
@@ -848,7 +848,7 @@
orig = data = RREG32(mmHDP_HOST_PATH_CNTL);
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_HDP_MGCG))
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_MGCG))
data = REG_SET_FIELD(data, HDP_HOST_PATH_CNTL, CLOCK_GATING_DIS, 0);
else
data = REG_SET_FIELD(data, HDP_HOST_PATH_CNTL, CLOCK_GATING_DIS, 1);
@@ -864,7 +864,7 @@
orig = data = RREG32(mmHDP_MEM_POWER_LS);
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_HDP_LS))
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
data = REG_SET_FIELD(data, HDP_MEM_POWER_LS, LS_ENABLE, 1);
else
data = REG_SET_FIELD(data, HDP_MEM_POWER_LS, LS_ENABLE, 0);
diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
index 7e9154c..654d767 100644
--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
@@ -2859,11 +2859,11 @@
pi->voltage_drop_t = 0;
pi->caps_sclk_throttle_low_notification = false;
pi->caps_fps = false; /* true? */
- pi->caps_uvd_pg = (adev->pg_flags & AMDGPU_PG_SUPPORT_UVD) ? true : false;
+ pi->caps_uvd_pg = (adev->pg_flags & AMD_PG_SUPPORT_UVD) ? true : false;
pi->caps_uvd_dpm = true;
- pi->caps_vce_pg = (adev->pg_flags & AMDGPU_PG_SUPPORT_VCE) ? true : false;
- pi->caps_samu_pg = (adev->pg_flags & AMDGPU_PG_SUPPORT_SAMU) ? true : false;
- pi->caps_acp_pg = (adev->pg_flags & AMDGPU_PG_SUPPORT_ACP) ? true : false;
+ pi->caps_vce_pg = (adev->pg_flags & AMD_PG_SUPPORT_VCE) ? true : false;
+ pi->caps_samu_pg = (adev->pg_flags & AMD_PG_SUPPORT_SAMU) ? true : false;
+ pi->caps_acp_pg = (adev->pg_flags & AMD_PG_SUPPORT_ACP) ? true : false;
pi->caps_stable_p_state = false;
ret = kv_parse_sys_info_table(adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
index 5e9f73a..fbd3767 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
@@ -611,7 +611,7 @@
{
u32 orig, data;
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_UVD_MGCG)) {
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG)) {
data = RREG32_UVD_CTX(ixUVD_CGC_MEM_CTRL);
data = 0xfff;
WREG32_UVD_CTX(ixUVD_CGC_MEM_CTRL, data);
@@ -830,6 +830,9 @@
bool gate = false;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG))
+ return 0;
+
if (state == AMD_CG_STATE_GATE)
gate = true;
@@ -848,7 +851,10 @@
* revisit this when there is a cleaner line between
* the smc and the hw blocks
*/
- struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+ if (!(adev->pg_flags & AMD_PG_SUPPORT_UVD))
+ return 0;
if (state == AMD_PG_STATE_GATE) {
uvd_v4_2_stop(adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
index 38864f56..57f1c5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
@@ -774,6 +774,11 @@
static int uvd_v5_0_set_clockgating_state(void *handle,
enum amd_clockgating_state state)
{
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG))
+ return 0;
+
return 0;
}
@@ -789,6 +794,9 @@
*/
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ if (!(adev->pg_flags & AMD_PG_SUPPORT_UVD))
+ return 0;
+
if (state == AMD_PG_STATE_GATE) {
uvd_v5_0_stop(adev);
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
index 3d59139..0b365b7 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
@@ -532,7 +532,7 @@
uvd_v6_0_mc_resume(adev);
/* Set dynamic clock gating in S/W control mode */
- if (adev->cg_flags & AMDGPU_CG_SUPPORT_UVD_MGCG) {
+ if (adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG) {
if (adev->flags & AMD_IS_APU)
cz_set_uvd_clock_gating_branches(adev, false);
else
@@ -1000,7 +1000,7 @@
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
- if (!(adev->cg_flags & AMDGPU_CG_SUPPORT_UVD_MGCG))
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG))
return 0;
if (enable) {
@@ -1030,6 +1030,9 @@
*/
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ if (!(adev->pg_flags & AMD_PG_SUPPORT_UVD))
+ return 0;
+
if (state == AMD_PG_STATE_GATE) {
uvd_v6_0_stop(adev);
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
index 52ac7a8..a822eda 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v2_0.c
@@ -373,7 +373,7 @@
{
bool sw_cg = false;
- if (enable && (adev->cg_flags & AMDGPU_CG_SUPPORT_VCE_MGCG)) {
+ if (enable && (adev->cg_flags & AMD_CG_SUPPORT_VCE_MGCG)) {
if (sw_cg)
vce_v2_0_set_sw_cg(adev, true);
else
@@ -608,6 +608,9 @@
*/
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ if (!(adev->pg_flags & AMD_PG_SUPPORT_VCE))
+ return 0;
+
if (state == AMD_PG_STATE_GATE)
/* XXX do we need a vce_v2_0_stop()? */
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
index e99af81..d662fa9 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v3_0.c
@@ -277,7 +277,7 @@
WREG32_P(mmVCE_STATUS, 0, ~1);
/* Set Clock-Gating off */
- if (adev->cg_flags & AMDGPU_CG_SUPPORT_VCE_MGCG)
+ if (adev->cg_flags & AMD_CG_SUPPORT_VCE_MGCG)
vce_v3_0_set_vce_sw_clock_gating(adev, false);
if (r) {
@@ -676,7 +676,7 @@
bool enable = (state == AMD_CG_STATE_GATE) ? true : false;
int i;
- if (!(adev->cg_flags & AMDGPU_CG_SUPPORT_VCE_MGCG))
+ if (!(adev->cg_flags & AMD_CG_SUPPORT_VCE_MGCG))
return 0;
mutex_lock(&adev->grbm_idx_mutex);
@@ -728,6 +728,9 @@
*/
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ if (!(adev->pg_flags & AMD_PG_SUPPORT_VCE))
+ return 0;
+
if (state == AMD_PG_STATE_GATE)
/* XXX do we need a vce_v3_0_stop()? */
return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
index 89f5a1f..0d14d10 100644
--- a/drivers/gpu/drm/amd/amdgpu/vi.c
+++ b/drivers/gpu/drm/amd/amdgpu/vi.c
@@ -1457,8 +1457,7 @@
case CHIP_STONEY:
adev->has_uvd = true;
adev->cg_flags = 0;
- /* Disable UVD pg */
- adev->pg_flags = /* AMDGPU_PG_SUPPORT_UVD | */AMDGPU_PG_SUPPORT_VCE;
+ adev->pg_flags = 0;
adev->external_rev_id = adev->rev_id + 0x1;
break;
default:
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 1195d06f..dbf7e64 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -85,6 +85,38 @@
AMD_PG_STATE_UNGATE,
};
+/* CG flags */
+#define AMD_CG_SUPPORT_GFX_MGCG (1 << 0)
+#define AMD_CG_SUPPORT_GFX_MGLS (1 << 1)
+#define AMD_CG_SUPPORT_GFX_CGCG (1 << 2)
+#define AMD_CG_SUPPORT_GFX_CGLS (1 << 3)
+#define AMD_CG_SUPPORT_GFX_CGTS (1 << 4)
+#define AMD_CG_SUPPORT_GFX_CGTS_LS (1 << 5)
+#define AMD_CG_SUPPORT_GFX_CP_LS (1 << 6)
+#define AMD_CG_SUPPORT_GFX_RLC_LS (1 << 7)
+#define AMD_CG_SUPPORT_MC_LS (1 << 8)
+#define AMD_CG_SUPPORT_MC_MGCG (1 << 9)
+#define AMD_CG_SUPPORT_SDMA_LS (1 << 10)
+#define AMD_CG_SUPPORT_SDMA_MGCG (1 << 11)
+#define AMD_CG_SUPPORT_BIF_LS (1 << 12)
+#define AMD_CG_SUPPORT_UVD_MGCG (1 << 13)
+#define AMD_CG_SUPPORT_VCE_MGCG (1 << 14)
+#define AMD_CG_SUPPORT_HDP_LS (1 << 15)
+#define AMD_CG_SUPPORT_HDP_MGCG (1 << 16)
+
+/* PG flags */
+#define AMD_PG_SUPPORT_GFX_PG (1 << 0)
+#define AMD_PG_SUPPORT_GFX_SMG (1 << 1)
+#define AMD_PG_SUPPORT_GFX_DMG (1 << 2)
+#define AMD_PG_SUPPORT_UVD (1 << 3)
+#define AMD_PG_SUPPORT_VCE (1 << 4)
+#define AMD_PG_SUPPORT_CP (1 << 5)
+#define AMD_PG_SUPPORT_GDS (1 << 6)
+#define AMD_PG_SUPPORT_RLC_SMU_HS (1 << 7)
+#define AMD_PG_SUPPORT_SDMA (1 << 8)
+#define AMD_PG_SUPPORT_ACP (1 << 9)
+#define AMD_PG_SUPPORT_SAMU (1 << 10)
+
enum amd_pm_state_type {
/* not used for dpm */
POWER_STATE_TYPE_DEFAULT,
diff --git a/drivers/gpu/drm/amd/include/cgs_common.h b/drivers/gpu/drm/amd/include/cgs_common.h
index 713aec9..aec38fc 100644
--- a/drivers/gpu/drm/amd/include/cgs_common.h
+++ b/drivers/gpu/drm/amd/include/cgs_common.h
@@ -109,6 +109,8 @@
CGS_SYSTEM_INFO_ADAPTER_BDF_ID = 1,
CGS_SYSTEM_INFO_PCIE_GEN_INFO,
CGS_SYSTEM_INFO_PCIE_MLW,
+ CGS_SYSTEM_INFO_CG_FLAGS,
+ CGS_SYSTEM_INFO_PG_FLAGS,
CGS_SYSTEM_INFO_ID_MAXIMUM,
};
diff --git a/drivers/gpu/drm/amd/powerplay/eventmgr/eventmgr.c b/drivers/gpu/drm/amd/powerplay/eventmgr/eventmgr.c
index 52a3efc..46410e3 100644
--- a/drivers/gpu/drm/amd/powerplay/eventmgr/eventmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/eventmgr/eventmgr.c
@@ -31,7 +31,7 @@
static int pem_init(struct pp_eventmgr *eventmgr)
{
int result = 0;
- struct pem_event_data event_data;
+ struct pem_event_data event_data = { {0} };
/* Initialize PowerPlay feature info */
pem_init_feature_info(eventmgr);
@@ -52,7 +52,7 @@
static void pem_fini(struct pp_eventmgr *eventmgr)
{
- struct pem_event_data event_data;
+ struct pem_event_data event_data = { {0} };
pem_uninit_featureInfo(eventmgr);
pem_unregister_interrupts(eventmgr);
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
index 0874ab4..cf01177 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/cz_hwmgr.c
@@ -174,6 +174,8 @@
{
struct cz_hwmgr *cz_hwmgr = (struct cz_hwmgr *)(hwmgr->backend);
uint32_t i;
+ struct cgs_system_info sys_info = {0};
+ int result;
cz_hwmgr->gfx_ramp_step = 256*25/100;
@@ -247,6 +249,22 @@
phm_cap_set(hwmgr->platform_descriptor.platformCaps,
PHM_PlatformCaps_DisableVoltageIsland);
+ phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+ PHM_PlatformCaps_UVDPowerGating);
+ phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+ PHM_PlatformCaps_VCEPowerGating);
+ sys_info.size = sizeof(struct cgs_system_info);
+ sys_info.info_id = CGS_SYSTEM_INFO_PG_FLAGS;
+ result = cgs_query_system_info(hwmgr->device, &sys_info);
+ if (!result) {
+ if (sys_info.value & AMD_PG_SUPPORT_UVD)
+ phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+ PHM_PlatformCaps_UVDPowerGating);
+ if (sys_info.value & AMD_PG_SUPPORT_VCE)
+ phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+ PHM_PlatformCaps_VCEPowerGating);
+ }
+
return 0;
}
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
index 44a9250..980d3bf 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
@@ -4451,6 +4451,7 @@
pp_atomctrl_gpio_pin_assignment gpio_pin_assignment;
struct phm_ppt_v1_information *pptable_info = (struct phm_ppt_v1_information *)(hwmgr->pptable);
phw_tonga_ulv_parm *ulv;
+ struct cgs_system_info sys_info = {0};
PP_ASSERT_WITH_CODE((NULL != hwmgr),
"Invalid Parameter!", return -1;);
@@ -4615,9 +4616,23 @@
data->vddc_phase_shed_control = 0;
- if (0 == result) {
- struct cgs_system_info sys_info = {0};
+ phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+ PHM_PlatformCaps_UVDPowerGating);
+ phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+ PHM_PlatformCaps_VCEPowerGating);
+ sys_info.size = sizeof(struct cgs_system_info);
+ sys_info.info_id = CGS_SYSTEM_INFO_PG_FLAGS;
+ result = cgs_query_system_info(hwmgr->device, &sys_info);
+ if (!result) {
+ if (sys_info.value & AMD_PG_SUPPORT_UVD)
+ phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+ PHM_PlatformCaps_UVDPowerGating);
+ if (sys_info.value & AMD_PG_SUPPORT_VCE)
+ phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+ PHM_PlatformCaps_VCEPowerGating);
+ }
+ if (0 == result) {
data->is_tlu_enabled = 0;
hwmgr->platform_descriptor.hardwareActivityPerformanceLevels =
TONGA_MAX_HARDWARE_POWERLEVELS;
diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index 3f74193..9a7b446 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -65,8 +65,6 @@
*/
state->allow_modeset = true;
- state->num_connector = ACCESS_ONCE(dev->mode_config.num_connector);
-
state->crtcs = kcalloc(dev->mode_config.num_crtc,
sizeof(*state->crtcs), GFP_KERNEL);
if (!state->crtcs)
@@ -83,16 +81,6 @@
sizeof(*state->plane_states), GFP_KERNEL);
if (!state->plane_states)
goto fail;
- state->connectors = kcalloc(state->num_connector,
- sizeof(*state->connectors),
- GFP_KERNEL);
- if (!state->connectors)
- goto fail;
- state->connector_states = kcalloc(state->num_connector,
- sizeof(*state->connector_states),
- GFP_KERNEL);
- if (!state->connector_states)
- goto fail;
state->dev = dev;
@@ -823,19 +811,27 @@
index = drm_connector_index(connector);
- /*
- * Construction of atomic state updates can race with a connector
- * hot-add which might overflow. In this case flip the table and just
- * restart the entire ioctl - no one is fast enough to livelock a cpu
- * with physical hotplug events anyway.
- *
- * Note that we only grab the indexes once we have the right lock to
- * prevent hotplug/unplugging of connectors. So removal is no problem,
- * at most the array is a bit too large.
- */
if (index >= state->num_connector) {
- DRM_DEBUG_ATOMIC("Hot-added connector would overflow state array, restarting\n");
- return ERR_PTR(-EAGAIN);
+ struct drm_connector **c;
+ struct drm_connector_state **cs;
+ int alloc = max(index + 1, config->num_connector);
+
+ c = krealloc(state->connectors, alloc * sizeof(*state->connectors), GFP_KERNEL);
+ if (!c)
+ return ERR_PTR(-ENOMEM);
+
+ state->connectors = c;
+ memset(&state->connectors[state->num_connector], 0,
+ sizeof(*state->connectors) * (alloc - state->num_connector));
+
+ cs = krealloc(state->connector_states, alloc * sizeof(*state->connector_states), GFP_KERNEL);
+ if (!cs)
+ return ERR_PTR(-ENOMEM);
+
+ state->connector_states = cs;
+ memset(&state->connector_states[state->num_connector], 0,
+ sizeof(*state->connector_states) * (alloc - state->num_connector));
+ state->num_connector = alloc;
}
if (state->connector_states[index])
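
The hunk above replaces the -EAGAIN restart on connector hot-add with a grow-on-demand table. The underlying idiom: krealloc() leaves the old buffer valid on failure, so the caller can return -ENOMEM without unwinding, and only the newly exposed tail needs zeroing so unused slots read back as NULL. A minimal sketch of the idiom (grow_ptr_table and its arguments are hypothetical):

	#include <linux/slab.h>
	#include <linux/string.h>

	static int grow_ptr_table(void ***table, int *nr, int want)
	{
		void **t;

		if (want <= *nr)
			return 0;

		/* On failure the old allocation is untouched and still valid. */
		t = krealloc(*table, want * sizeof(*t), GFP_KERNEL);
		if (!t)
			return -ENOMEM;

		/* Zero only the tail so unused slots read back as NULL. */
		memset(&t[*nr], 0, (want - *nr) * sizeof(*t));
		*table = t;
		*nr = want;
		return 0;
	}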
diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
index 7c52306..4f2d3e1 100644
--- a/drivers/gpu/drm/drm_atomic_helper.c
+++ b/drivers/gpu/drm/drm_atomic_helper.c
@@ -1493,7 +1493,7 @@
{
int i;
- for (i = 0; i < dev->mode_config.num_connector; i++) {
+ for (i = 0; i < state->num_connector; i++) {
struct drm_connector *connector = state->connectors[i];
if (!connector)
diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index d40bab2..f619121 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -918,12 +918,19 @@
connector->base.properties = &connector->properties;
connector->dev = dev;
connector->funcs = funcs;
+
+ connector->connector_id = ida_simple_get(&config->connector_ida, 0, 0, GFP_KERNEL);
+ if (connector->connector_id < 0) {
+ ret = connector->connector_id;
+ goto out_put;
+ }
+
connector->connector_type = connector_type;
connector->connector_type_id =
ida_simple_get(connector_ida, 1, 0, GFP_KERNEL);
if (connector->connector_type_id < 0) {
ret = connector->connector_type_id;
- goto out_put;
+ goto out_put_id;
}
connector->name =
kasprintf(GFP_KERNEL, "%s-%d",
@@ -931,7 +938,7 @@
connector->connector_type_id);
if (!connector->name) {
ret = -ENOMEM;
- goto out_put;
+ goto out_put_type_id;
}
INIT_LIST_HEAD(&connector->probed_modes);
@@ -959,7 +966,12 @@
}
connector->debugfs_entry = NULL;
-
+out_put_type_id:
+ if (ret)
+ ida_remove(connector_ida, connector->connector_type_id);
+out_put_id:
+ if (ret)
+ ida_remove(&config->connector_ida, connector->connector_id);
out_put:
if (ret)
drm_mode_object_put(dev, &connector->base);
@@ -996,6 +1008,9 @@
ida_remove(&drm_connector_enum_list[connector->connector_type].ida,
connector->connector_type_id);
+ ida_remove(&dev->mode_config.connector_ida,
+ connector->connector_id);
+
kfree(connector->display_info.bus_formats);
drm_mode_object_put(dev, &connector->base);
kfree(connector->name);
@@ -1013,32 +1028,6 @@
EXPORT_SYMBOL(drm_connector_cleanup);
/**
- * drm_connector_index - find the index of a registered connector
- * @connector: connector to find index for
- *
- * Given a registered connector, return the index of that connector within a DRM
- * device's list of connectors.
- */
-unsigned int drm_connector_index(struct drm_connector *connector)
-{
- unsigned int index = 0;
- struct drm_connector *tmp;
- struct drm_mode_config *config = &connector->dev->mode_config;
-
- WARN_ON(!drm_modeset_is_locked(&config->connection_mutex));
-
- drm_for_each_connector(tmp, connector->dev) {
- if (tmp == connector)
- return index;
-
- index++;
- }
-
- BUG();
-}
-EXPORT_SYMBOL(drm_connector_index);
-
-/**
* drm_connector_register - register a connector
* @connector: the connector to register
*
@@ -5789,6 +5778,7 @@
INIT_LIST_HEAD(&dev->mode_config.plane_list);
idr_init(&dev->mode_config.crtc_idr);
idr_init(&dev->mode_config.tile_idr);
+ ida_init(&dev->mode_config.connector_ida);
drm_modeset_lock_all(dev);
drm_mode_create_standard_properties(dev);
@@ -5869,6 +5859,7 @@
crtc->funcs->destroy(crtc);
}
+ ida_destroy(&dev->mode_config.connector_ida);
idr_destroy(&dev->mode_config.tile_idr);
idr_destroy(&dev->mode_config.crtc_idr);
drm_modeset_lock_fini(&dev->mode_config.connection_mutex);
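
Taken together, the connector_ida changes above follow the usual IDA lifecycle: ida_init() at mode-config setup, ida_simple_get() per connector, ida_remove() on connector teardown (and on the init error paths), and ida_destroy() at mode-config cleanup. In outline, with hypothetical example_* names:

	#include <linux/idr.h>

	static struct ida example_ida;

	static int example_setup(void)
	{
		int id;

		ida_init(&example_ida);

		/* lowest free ID >= 0; a negative return is an errno */
		id = ida_simple_get(&example_ida, 0, 0, GFP_KERNEL);
		if (id < 0)
			return id;

		/* ... use id ... */

		ida_remove(&example_ida, id);	/* per-object teardown */
		ida_destroy(&example_ida);	/* once no IDs remain */
		return 0;
	}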
diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index 8ae13de..27fbd79 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -1159,11 +1159,13 @@
drm_dp_put_port(port);
goto out;
}
-
- drm_mode_connector_set_tile_property(port->connector);
-
+ if (port->port_num >= DP_MST_LOGICAL_PORT_0) {
+ port->cached_edid = drm_get_edid(port->connector, &port->aux.ddc);
+ drm_mode_connector_set_tile_property(port->connector);
+ }
(*mstb->mgr->cbs->register_connector)(port->connector);
}
+
out:
/* put reference to this port */
drm_dp_put_port(port);
@@ -1188,8 +1190,8 @@
port->ddps = conn_stat->displayport_device_plug_status;
if (old_ddps != port->ddps) {
- dowork = true;
if (port->ddps) {
+ dowork = true;
} else {
port->available_pbn = 0;
}
@@ -1294,13 +1296,8 @@
if (port->input)
continue;
- if (!port->ddps) {
- if (port->cached_edid) {
- kfree(port->cached_edid);
- port->cached_edid = NULL;
- }
+ if (!port->ddps)
continue;
- }
if (!port->available_pbn)
drm_dp_send_enum_path_resources(mgr, mstb, port);
@@ -1311,12 +1308,6 @@
drm_dp_check_and_send_link_address(mgr, mstb_child);
drm_dp_put_mst_branch_device(mstb_child);
}
- } else if (port->pdt == DP_PEER_DEVICE_SST_SINK ||
- port->pdt == DP_PEER_DEVICE_DP_LEGACY_CONV) {
- if (!port->cached_edid) {
- port->cached_edid =
- drm_get_edid(port->connector, &port->aux.ddc);
- }
}
}
}
@@ -1336,8 +1327,6 @@
drm_dp_check_and_send_link_address(mgr, mstb);
drm_dp_put_mst_branch_device(mstb);
}
-
- (*mgr->cbs->hotplug)(mgr);
}
static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
@@ -1597,6 +1586,7 @@
for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) {
drm_dp_add_port(mstb, mgr->dev, &txmsg->reply.u.link_addr.ports[i]);
}
+ (*mgr->cbs->hotplug)(mgr);
}
} else {
mstb->link_address_sent = false;
@@ -2293,6 +2283,8 @@
drm_dp_update_port(mstb, &msg.u.conn_stat);
DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", msg.u.conn_stat.port_number, msg.u.conn_stat.legacy_device_plug_status, msg.u.conn_stat.displayport_device_plug_status, msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port, msg.u.conn_stat.peer_device_type);
+ (*mgr->cbs->hotplug)(mgr);
+
} else if (msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {
drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, msg.req_type, seqno, false);
if (!mstb)
@@ -2379,6 +2371,10 @@
case DP_PEER_DEVICE_SST_SINK:
status = connector_status_connected;
+ /* for logical ports - cache the EDID */
+ if (port->port_num >= DP_MST_LOGICAL_PORT_0 && !port->cached_edid) {
+ port->cached_edid = drm_get_edid(connector, &port->aux.ddc);
+ }
break;
case DP_PEER_DEVICE_DP_LEGACY_CONV:
if (port->ldps)
@@ -2433,7 +2429,10 @@
if (port->cached_edid)
edid = drm_edid_duplicate(port->cached_edid);
-
+ else {
+ edid = drm_get_edid(connector, &port->aux.ddc);
+ drm_mode_connector_set_tile_property(connector);
+ }
port->has_audio = drm_detect_monitor_audio(edid);
drm_dp_put_port(port);
return edid;
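
After these changes drm_dp_mst_get_edid() reduces to: duplicate the EDID cached at link-address time (logical ports), otherwise read it over the port's sideband AUX/DDC channel on demand. Roughly, as a restatement rather than the full function:

	static struct edid *mst_edid_sketch(struct drm_connector *connector,
					    struct drm_dp_mst_port *port)
	{
		if (port->cached_edid)
			return drm_edid_duplicate(port->cached_edid);

		/* No cache: fetch over the port's AUX/DDC channel. */
		return drm_get_edid(connector, &port->aux.ddc);
	}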
diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c
index d12a4ef..1fe1457 100644
--- a/drivers/gpu/drm/drm_irq.c
+++ b/drivers/gpu/drm/drm_irq.c
@@ -224,6 +224,64 @@
diff = (flags & DRM_CALLED_FROM_VBLIRQ) != 0;
}
+ /*
+ * Are we within a drm_vblank_pre_modeset - drm_vblank_post_modeset
+ * interval? If so, vblank irqs keep running, but the hardware vblank
+ * counter is not trustworthy, as it may reset at some point in that
+ * interval; vblank timestamps are not trustworthy in that interval
+ * either. IOW, this can result in a bogus diff >> 1, which must be
+ * avoided as it would cause random large forward jumps of the
+ * software vblank counter.
+ */
+ if (diff > 1 && (vblank->inmodeset & 0x2)) {
+ DRM_DEBUG_VBL("clamping vblank bump to 1 on crtc %u: diffr=%u"
+ " due to pre-modeset.\n", pipe, diff);
+ diff = 1;
+ }
+
+ /*
+ * FIXME: Need to replace this hack with proper seqlocks.
+ *
+ * Restrict the bump of the software vblank counter to a safe maximum
+ * value of +1 whenever there is the possibility that concurrent readers
+ * of vblank timestamps could be active at the moment, as the current
+ * implementation of the timestamp caching and updating is not safe
+ * against concurrent readers for calls to store_vblank() with a bump
+ * of anything but +1. A bump != 1 would very likely return corrupted
+ * timestamps to userspace, because the same slot in the cache could
+ * be concurrently written by store_vblank() and read by one of those
+ * readers without the read-retry logic detecting the collision.
+ *
+ * Concurrent readers can exist when we are called from the
+ * drm_vblank_off() or drm_vblank_on() functions and other non-vblank-
+ * irq callers. However, all those calls to us are happening with the
+ * vbl_lock locked to prevent drm_vblank_get(), so the vblank refcount
+ * can't increase while we are executing. Therefore a zero refcount at
+ * this point is safe for arbitrary counter bumps if we are called
+ * outside vblank irq, a non-zero count is not 100% safe. Unfortunately
+ * we must also accept a refcount of 1, as whenever we are called from
+ * drm_vblank_get() -> drm_vblank_enable() the refcount will be 1 and
+ * we must let that one pass through in order to not lose vblank counts
+ * during vblank irq off - which would completely defeat the whole
+ * point of this routine.
+ *
+ * Whenever we are called from vblank irq, we have to assume concurrent
+ * readers exist or can show up any time during our execution, even if
+ * the refcount is currently zero, as vblank irqs are usually only
+ * enabled due to the presence of readers, and because when we are called
+ * from vblank irq we can't hold the vbl_lock to protect us from sudden
+ * bumps in vblank refcount. Therefore also restrict bumps to +1 when
+ * called from vblank irq.
+ */
+ if ((diff > 1) && (atomic_read(&vblank->refcount) > 1 ||
+ (flags & DRM_CALLED_FROM_VBLIRQ))) {
+ DRM_DEBUG_VBL("clamping vblank bump to 1 on crtc %u: diffr=%u "
+ "refcount %u, vblirq %u\n", pipe, diff,
+ atomic_read(&vblank->refcount),
+ (flags & DRM_CALLED_FROM_VBLIRQ) != 0);
+ diff = 1;
+ }
+
DRM_DEBUG_VBL("updating vblank count on crtc %u:"
" current=%u, diff=%u, hw=%u hw_last=%u\n",
pipe, vblank->count, diff, cur_vblank, vblank->last);
@@ -1316,7 +1374,13 @@
spin_lock_irqsave(&dev->event_lock, irqflags);
spin_lock(&dev->vbl_lock);
- vblank_disable_and_save(dev, pipe);
+ DRM_DEBUG_VBL("crtc %d, vblank enabled %d, inmodeset %d\n",
+ pipe, vblank->enabled, vblank->inmodeset);
+
+ /* Avoid redundant vblank disables without previous drm_vblank_on(). */
+ if (drm_core_check_feature(dev, DRIVER_ATOMIC) || !vblank->inmodeset)
+ vblank_disable_and_save(dev, pipe);
+
wake_up(&vblank->queue);
/*
@@ -1418,6 +1482,9 @@
return;
spin_lock_irqsave(&dev->vbl_lock, irqflags);
+ DRM_DEBUG_VBL("crtc %d, vblank enabled %d, inmodeset %d\n",
+ pipe, vblank->enabled, vblank->inmodeset);
+
/* Drop our private "prevent drm_vblank_get" refcount */
if (vblank->inmodeset) {
atomic_dec(&vblank->refcount);
@@ -1430,8 +1497,7 @@
* re-enable interrupts if there are users left, or the
* user wishes vblank interrupts to be enabled all the time.
*/
- if (atomic_read(&vblank->refcount) != 0 ||
- (!dev->vblank_disable_immediate && drm_vblank_offdelay == 0))
+ if (atomic_read(&vblank->refcount) != 0 || drm_vblank_offdelay == 0)
WARN_ON(drm_vblank_enable(dev, pipe));
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
}
@@ -1526,6 +1592,7 @@
if (vblank->inmodeset) {
spin_lock_irqsave(&dev->vbl_lock, irqflags);
dev->vblank_disable_allowed = true;
+ drm_reset_vblank_timestamp(dev, pipe);
spin_unlock_irqrestore(&dev->vbl_lock, irqflags);
if (vblank->inmodeset & 0x2)
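
Both clamps added in this file implement one rule: never bump the software vblank counter by more than one while anything might be observing it concurrently. Restated as a pure function, with the three flags standing in for the checks in the hunks above:

	#include <linux/types.h>

	static u32 clamp_vblank_diff(u32 diff, bool in_premodeset,
				     bool concurrent_readers, bool from_vblirq)
	{
		/* Large hw-counter jumps are only trusted when no reader can
		 * race the timestamp cache update.
		 */
		if (diff > 1 &&
		    (in_premodeset || concurrent_readers || from_vblirq))
			return 1;

		return diff;
	}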
diff --git a/drivers/gpu/drm/exynos/Kconfig b/drivers/gpu/drm/exynos/Kconfig
index 83efca9..f17d392 100644
--- a/drivers/gpu/drm/exynos/Kconfig
+++ b/drivers/gpu/drm/exynos/Kconfig
@@ -1,6 +1,6 @@
config DRM_EXYNOS
tristate "DRM Support for Samsung SoC EXYNOS Series"
- depends on OF && DRM && (PLAT_SAMSUNG || ARCH_MULTIPLATFORM)
+ depends on OF && DRM && (ARCH_S3C64XX || ARCH_EXYNOS || ARCH_MULTIPLATFORM)
select DRM_KMS_HELPER
select DRM_KMS_FB_HELPER
select FB_CFB_FILLRECT
diff --git a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
index 1bf6a21..162ab93 100644
--- a/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
+++ b/drivers/gpu/drm/exynos/exynos5433_drm_decon.c
@@ -93,7 +93,7 @@
if (test_bit(BIT_SUSPENDED, &ctx->flags))
return -EPERM;
- if (test_and_set_bit(BIT_IRQS_ENABLED, &ctx->flags)) {
+ if (!test_and_set_bit(BIT_IRQS_ENABLED, &ctx->flags)) {
val = VIDINTCON0_INTEN;
if (ctx->out_type == IFTYPE_I80)
val |= VIDINTCON0_FRAMEDONE;
@@ -402,8 +402,6 @@
decon_enable_vblank(ctx->crtc);
decon_commit(ctx->crtc);
-
- set_bit(BIT_SUSPENDED, &ctx->flags);
}
static void decon_disable(struct exynos_drm_crtc *crtc)
@@ -582,9 +580,9 @@
static int exynos5433_decon_suspend(struct device *dev)
{
struct decon_context *ctx = dev_get_drvdata(dev);
- int i;
+ int i = ARRAY_SIZE(decon_clks_name);
- for (i = 0; i < ARRAY_SIZE(decon_clks_name); i++)
+ while (--i >= 0)
clk_disable_unprepare(ctx->clks[i]);
return 0;
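
The suspend fix above (and the similar `while (pin--)` fix in intel_i2c.c further down) is the standard reverse-order unwind: count down from the number of items (or from the failure point) so teardown mirrors setup order, and a zero count falls straight through the loop. As a standalone helper:

	#include <linux/clk.h>

	static void disable_clks_reverse(struct clk **clks, int n)
	{
		int i = n;

		/* visits n-1, ..., 0; does nothing when n == 0 */
		while (--i >= 0)
			clk_disable_unprepare(clks[i]);
	}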
diff --git a/drivers/gpu/drm/exynos/exynos_drm_dsi.c b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
index e977a81..26e81d19 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_dsi.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
@@ -1782,6 +1782,7 @@
bridge = of_drm_find_bridge(dsi->bridge_node);
if (bridge) {
+ encoder->bridge = bridge;
drm_bridge_attach(drm_dev, bridge);
}
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
index f6118ba..8baabd8 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
@@ -50,7 +50,7 @@
if (vm_size > exynos_gem->size)
return -EINVAL;
- ret = dma_mmap_attrs(helper->dev->dev, vma, exynos_gem->pages,
+ ret = dma_mmap_attrs(helper->dev->dev, vma, exynos_gem->cookie,
exynos_gem->dma_addr, exynos_gem->size,
&exynos_gem->dma_attrs);
if (ret < 0) {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimc.c b/drivers/gpu/drm/exynos/exynos_drm_fimc.c
index c747824..8a4f4a0 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fimc.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fimc.c
@@ -1723,7 +1723,7 @@
goto err_put_clk;
}
- DRM_DEBUG_KMS("id[%d]ippdrv[0x%x]\n", ctx->id, (int)ippdrv);
+ DRM_DEBUG_KMS("id[%d]ippdrv[%p]\n", ctx->id, ippdrv);
spin_lock_init(&ctx->lock);
platform_set_drvdata(pdev, ctx);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
index c17efdb..8dfe6e1 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
@@ -1166,7 +1166,7 @@
goto err_free_event;
}
- cmd = (struct drm_exynos_g2d_cmd *)(uint32_t)req->cmd;
+ cmd = (struct drm_exynos_g2d_cmd *)(unsigned long)req->cmd;
if (copy_from_user(cmdlist->data + cmdlist->last,
(void __user *)cmd,
@@ -1184,7 +1184,8 @@
if (req->cmd_buf_nr) {
struct drm_exynos_g2d_cmd *cmd_buf;
- cmd_buf = (struct drm_exynos_g2d_cmd *)(uint32_t)req->cmd_buf;
+ cmd_buf = (struct drm_exynos_g2d_cmd *)
+ (unsigned long)req->cmd_buf;
if (copy_from_user(cmdlist->data + cmdlist->last,
(void __user *)cmd_buf,
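
The g2d fixes here and the vidi fix below share one point: a 64-bit value from userspace must round-trip through `unsigned long`, which is pointer-sized on every Linux architecture, not through `uint32_t`, which silently truncates the upper half on 64-bit kernels. As a helper (the name is hypothetical):

	#include <linux/types.h>
	#include <linux/compiler.h>

	static void __user *u64_to_uptr_sketch(__u64 val)
	{
		/* unsigned long matches the pointer width on all Linux
		 * ports; casting through u32 would truncate on 64-bit.
		 */
		return (void __user *)(unsigned long)val;
	}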
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 32358c5..26b5e4b 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -218,7 +218,7 @@
return ERR_PTR(ret);
}
- DRM_DEBUG_KMS("created file object = 0x%x\n", (unsigned int)obj->filp);
+ DRM_DEBUG_KMS("created file object = %p\n", obj->filp);
return exynos_gem;
}
@@ -335,7 +335,7 @@
if (vm_size > exynos_gem->size)
return -EINVAL;
- ret = dma_mmap_attrs(drm_dev->dev, vma, exynos_gem->pages,
+ ret = dma_mmap_attrs(drm_dev->dev, vma, exynos_gem->cookie,
exynos_gem->dma_addr, exynos_gem->size,
&exynos_gem->dma_attrs);
if (ret < 0) {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gsc.c b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
index 7aecd23..5d20da8 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gsc.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gsc.c
@@ -1723,7 +1723,7 @@
return ret;
}
- DRM_DEBUG_KMS("id[%d]ippdrv[0x%x]\n", ctx->id, (int)ippdrv);
+ DRM_DEBUG_KMS("id[%d]ippdrv[%p]\n", ctx->id, ippdrv);
mutex_init(&ctx->lock);
platform_set_drvdata(pdev, ctx);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_ipp.c b/drivers/gpu/drm/exynos/exynos_drm_ipp.c
index 67d2423..95eeb91 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_ipp.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_ipp.c
@@ -208,7 +208,7 @@
* e.g PAUSE state, queue buf, command control.
*/
list_for_each_entry(ippdrv, &exynos_drm_ippdrv_list, drv_list) {
- DRM_DEBUG_KMS("count[%d]ippdrv[0x%x]\n", count++, (int)ippdrv);
+ DRM_DEBUG_KMS("count[%d]ippdrv[%p]\n", count++, ippdrv);
mutex_lock(&ippdrv->cmd_lock);
list_for_each_entry(c_node, &ippdrv->cmd_list, list) {
@@ -388,8 +388,8 @@
}
property->prop_id = ret;
- DRM_DEBUG_KMS("created prop_id[%d]cmd[%d]ippdrv[0x%x]\n",
- property->prop_id, property->cmd, (int)ippdrv);
+ DRM_DEBUG_KMS("created prop_id[%d]cmd[%d]ippdrv[%p]\n",
+ property->prop_id, property->cmd, ippdrv);
/* stored property information and ippdrv in private data */
c_node->property = *property;
@@ -518,7 +518,7 @@
{
int i;
- DRM_DEBUG_KMS("node[0x%x]\n", (int)m_node);
+ DRM_DEBUG_KMS("node[%p]\n", m_node);
if (!m_node) {
DRM_ERROR("invalid dequeue node.\n");
@@ -562,7 +562,7 @@
m_node->buf_id = qbuf->buf_id;
INIT_LIST_HEAD(&m_node->list);
- DRM_DEBUG_KMS("m_node[0x%x]ops_id[%d]\n", (int)m_node, qbuf->ops_id);
+ DRM_DEBUG_KMS("m_node[%p]ops_id[%d]\n", m_node, qbuf->ops_id);
DRM_DEBUG_KMS("prop_id[%d]buf_id[%d]\n", qbuf->prop_id, m_node->buf_id);
for_each_ipp_planar(i) {
@@ -582,8 +582,8 @@
buf_info->handles[i] = qbuf->handle[i];
buf_info->base[i] = *addr;
- DRM_DEBUG_KMS("i[%d]base[0x%x]hd[0x%lx]\n", i,
- buf_info->base[i], buf_info->handles[i]);
+ DRM_DEBUG_KMS("i[%d]base[%pad]hd[0x%lx]\n", i,
+ &buf_info->base[i], buf_info->handles[i]);
}
}
@@ -664,7 +664,7 @@
mutex_lock(&c_node->event_lock);
list_for_each_entry_safe(e, te, &c_node->event_list, base.link) {
- DRM_DEBUG_KMS("count[%d]e[0x%x]\n", count++, (int)e);
+ DRM_DEBUG_KMS("count[%d]e[%p]\n", count++, e);
/*
* qbuf == NULL condition means all event deletion.
@@ -755,7 +755,7 @@
/* find memory node from memory list */
list_for_each_entry(m_node, head, list) {
- DRM_DEBUG_KMS("count[%d]m_node[0x%x]\n", count++, (int)m_node);
+ DRM_DEBUG_KMS("count[%d]m_node[%p]\n", count++, m_node);
/* compare buffer id */
if (m_node->buf_id == qbuf->buf_id)
@@ -772,7 +772,7 @@
struct exynos_drm_ipp_ops *ops = NULL;
int ret = 0;
- DRM_DEBUG_KMS("node[0x%x]\n", (int)m_node);
+ DRM_DEBUG_KMS("node[%p]\n", m_node);
if (!m_node) {
DRM_ERROR("invalid queue node.\n");
@@ -1237,7 +1237,7 @@
m_node = list_first_entry(head,
struct drm_exynos_ipp_mem_node, list);
- DRM_DEBUG_KMS("m_node[0x%x]\n", (int)m_node);
+ DRM_DEBUG_KMS("m_node[%p]\n", m_node);
ret = ipp_set_mem_node(ippdrv, c_node, m_node);
if (ret) {
@@ -1610,8 +1610,8 @@
}
ippdrv->prop_list.ipp_id = ret;
- DRM_DEBUG_KMS("count[%d]ippdrv[0x%x]ipp_id[%d]\n",
- count++, (int)ippdrv, ret);
+ DRM_DEBUG_KMS("count[%d]ippdrv[%p]ipp_id[%d]\n",
+ count++, ippdrv, ret);
/* store parent device for node */
ippdrv->parent_dev = dev;
@@ -1668,7 +1668,7 @@
file_priv->ipp_dev = dev;
- DRM_DEBUG_KMS("done priv[0x%x]\n", (int)dev);
+ DRM_DEBUG_KMS("done priv[%p]\n", dev);
return 0;
}
@@ -1685,8 +1685,8 @@
mutex_lock(&ippdrv->cmd_lock);
list_for_each_entry_safe(c_node, tc_node,
&ippdrv->cmd_list, list) {
- DRM_DEBUG_KMS("count[%d]ippdrv[0x%x]\n",
- count++, (int)ippdrv);
+ DRM_DEBUG_KMS("count[%d]ippdrv[%p]\n",
+ count++, ippdrv);
if (c_node->filp == file) {
/*
diff --git a/drivers/gpu/drm/exynos/exynos_drm_mic.c b/drivers/gpu/drm/exynos/exynos_drm_mic.c
index 4eaef36..9869d70 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_mic.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_mic.c
@@ -18,6 +18,7 @@
#include <linux/of.h>
#include <linux/of_graph.h>
#include <linux/clk.h>
+#include <linux/component.h>
#include <drm/drmP.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
@@ -306,9 +307,9 @@
return ret;
}
-void mic_disable(struct drm_bridge *bridge) { }
+static void mic_disable(struct drm_bridge *bridge) { }
-void mic_post_disable(struct drm_bridge *bridge)
+static void mic_post_disable(struct drm_bridge *bridge)
{
struct exynos_mic *mic = bridge->driver_private;
int i;
@@ -328,7 +329,7 @@
mutex_unlock(&mic_mutex);
}
-void mic_pre_enable(struct drm_bridge *bridge)
+static void mic_pre_enable(struct drm_bridge *bridge)
{
struct exynos_mic *mic = bridge->driver_private;
int ret, i;
@@ -371,11 +372,35 @@
mutex_unlock(&mic_mutex);
}
-void mic_enable(struct drm_bridge *bridge) { }
+static void mic_enable(struct drm_bridge *bridge) { }
-void mic_destroy(struct drm_bridge *bridge)
+static const struct drm_bridge_funcs mic_bridge_funcs = {
+ .disable = mic_disable,
+ .post_disable = mic_post_disable,
+ .pre_enable = mic_pre_enable,
+ .enable = mic_enable,
+};
+
+static int exynos_mic_bind(struct device *dev, struct device *master,
+ void *data)
{
- struct exynos_mic *mic = bridge->driver_private;
+ struct exynos_mic *mic = dev_get_drvdata(dev);
+ int ret;
+
+ mic->bridge.funcs = &mic_bridge_funcs;
+ mic->bridge.of_node = dev->of_node;
+ mic->bridge.driver_private = mic;
+ ret = drm_bridge_add(&mic->bridge);
+ if (ret)
+ DRM_ERROR("mic: Failed to add MIC to the global bridge list\n");
+
+ return ret;
+}
+
+static void exynos_mic_unbind(struct device *dev, struct device *master,
+ void *data)
+{
+ struct exynos_mic *mic = dev_get_drvdata(dev);
int i;
mutex_lock(&mic_mutex);
@@ -387,16 +412,16 @@
already_disabled:
mutex_unlock(&mic_mutex);
+
+ drm_bridge_remove(&mic->bridge);
}
-static const struct drm_bridge_funcs mic_bridge_funcs = {
- .disable = mic_disable,
- .post_disable = mic_post_disable,
- .pre_enable = mic_pre_enable,
- .enable = mic_enable,
+static const struct component_ops exynos_mic_component_ops = {
+ .bind = exynos_mic_bind,
+ .unbind = exynos_mic_unbind,
};
-int exynos_mic_probe(struct platform_device *pdev)
+static int exynos_mic_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct exynos_mic *mic;
@@ -435,17 +460,8 @@
goto err;
}
- mic->bridge.funcs = &mic_bridge_funcs;
- mic->bridge.of_node = dev->of_node;
- mic->bridge.driver_private = mic;
- ret = drm_bridge_add(&mic->bridge);
- if (ret) {
- DRM_ERROR("mic: Failed to add MIC to the global bridge list\n");
- goto err;
- }
-
for (i = 0; i < NUM_CLKS; i++) {
- mic->clks[i] = of_clk_get_by_name(dev->of_node, clk_names[i]);
+ mic->clks[i] = devm_clk_get(dev, clk_names[i]);
if (IS_ERR(mic->clks[i])) {
DRM_ERROR("mic: Failed to get clock (%s)\n",
clk_names[i]);
@@ -454,7 +470,10 @@
}
}
+ platform_set_drvdata(pdev, mic);
+
DRM_DEBUG_KMS("MIC has been probed\n");
+ return component_add(dev, &exynos_mic_component_ops);
err:
return ret;
@@ -462,14 +481,7 @@
static int exynos_mic_remove(struct platform_device *pdev)
{
- struct exynos_mic *mic = platform_get_drvdata(pdev);
- int i;
-
- drm_bridge_remove(&mic->bridge);
-
- for (i = NUM_CLKS - 1; i > -1; i--)
- clk_put(mic->clks[i]);
-
+ component_del(&pdev->dev, &exynos_mic_component_ops);
return 0;
}
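
The MIC conversion above is the canonical component-framework shape: probe only gathers resources and calls component_add(), while the DRM-coupled setup moves into bind(), which the master invokes once every component is present. A skeleton under those assumptions (foo_* names hypothetical):

	#include <linux/component.h>
	#include <linux/platform_device.h>

	static int foo_bind(struct device *dev, struct device *master,
			    void *data)
	{
		/* called when the DRM master assembles all components */
		return 0;
	}

	static void foo_unbind(struct device *dev, struct device *master,
			       void *data)
	{
		/* mirror image of bind() */
	}

	static const struct component_ops foo_component_ops = {
		.bind	= foo_bind,
		.unbind	= foo_unbind,
	};

	static int foo_probe(struct platform_device *pdev)
	{
		/* acquire resources with devm_*, then: */
		return component_add(&pdev->dev, &foo_component_ops);
	}

	static int foo_remove(struct platform_device *pdev)
	{
		component_del(&pdev->dev, &foo_component_ops);
		return 0;
	}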
diff --git a/drivers/gpu/drm/exynos/exynos_drm_rotator.c b/drivers/gpu/drm/exynos/exynos_drm_rotator.c
index bea0f78..ce59f44 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_rotator.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_rotator.c
@@ -754,7 +754,7 @@
goto err_ippdrv_register;
}
- DRM_DEBUG_KMS("ippdrv[0x%x]\n", (int)ippdrv);
+ DRM_DEBUG_KMS("ippdrv[%p]\n", ippdrv);
platform_set_drvdata(pdev, rot);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_vidi.c b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
index 62ac4e5..b605bd7 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
@@ -223,7 +223,7 @@
}
}
-static int vidi_show_connection(struct device *dev,
+static ssize_t vidi_show_connection(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct vidi_context *ctx = dev_get_drvdata(dev);
@@ -238,7 +238,7 @@
return rc;
}
-static int vidi_store_connection(struct device *dev,
+static ssize_t vidi_store_connection(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t len)
{
@@ -294,7 +294,9 @@
}
if (vidi->connection) {
- struct edid *raw_edid = (struct edid *)(uint32_t)vidi->edid;
+ struct edid *raw_edid;
+
+ raw_edid = (struct edid *)(unsigned long)vidi->edid;
if (!drm_edid_is_valid(raw_edid)) {
DRM_DEBUG_KMS("edid data is invalid.\n");
return -EINVAL;
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 0fc38bb..cf39ed3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -825,8 +825,11 @@
}
for_each_pipe(dev_priv, pipe) {
- if (!intel_display_power_is_enabled(dev_priv,
- POWER_DOMAIN_PIPE(pipe))) {
+ enum intel_display_power_domain power_domain;
+
+ power_domain = POWER_DOMAIN_PIPE(pipe);
+ if (!intel_display_power_get_if_enabled(dev_priv,
+ power_domain)) {
seq_printf(m, "Pipe %c power disabled\n",
pipe_name(pipe));
continue;
@@ -840,6 +843,8 @@
seq_printf(m, "Pipe %c IER:\t%08x\n",
pipe_name(pipe),
I915_READ(GEN8_DE_PIPE_IER(pipe)));
+
+ intel_display_power_put(dev_priv, power_domain);
}
seq_printf(m, "Display Engine port interrupt mask:\t%08x\n",
@@ -3985,6 +3990,7 @@
struct intel_pipe_crc *pipe_crc = &dev_priv->pipe_crc[pipe];
struct intel_crtc *crtc = to_intel_crtc(intel_get_crtc_for_pipe(dev,
pipe));
+ enum intel_display_power_domain power_domain;
u32 val = 0; /* shut up gcc */
int ret;
@@ -3995,7 +4001,8 @@
if (pipe_crc->source && source)
return -EINVAL;
- if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_PIPE(pipe))) {
+ power_domain = POWER_DOMAIN_PIPE(pipe);
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain)) {
DRM_DEBUG_KMS("Trying to capture CRC while pipe is off\n");
return -EIO;
}
@@ -4012,7 +4019,7 @@
ret = ivb_pipe_crc_ctl_reg(dev, pipe, &source, &val);
if (ret != 0)
- return ret;
+ goto out;
/* none -> real source transition */
if (source) {
@@ -4024,8 +4031,10 @@
entries = kcalloc(INTEL_PIPE_CRC_ENTRIES_NR,
sizeof(pipe_crc->entries[0]),
GFP_KERNEL);
- if (!entries)
- return -ENOMEM;
+ if (!entries) {
+ ret = -ENOMEM;
+ goto out;
+ }
/*
* When IPS gets enabled, the pipe CRC changes. Since IPS gets
@@ -4081,7 +4090,12 @@
hsw_enable_ips(crtc);
}
- return 0;
+ ret = 0;
+
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
/*
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f0f75d7..b0847b9 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -751,6 +751,7 @@
uint32_t mmio_count;
i915_reg_t mmioaddr[8];
uint32_t mmiodata[8];
+ uint32_t dc_state;
};
#define DEV_INFO_FOR_EACH_FLAG(func, sep) \
@@ -1988,6 +1989,9 @@
#define I915_GTT_OFFSET_NONE ((u32)-1)
struct drm_i915_gem_object_ops {
+ unsigned int flags;
+#define I915_GEM_OBJECT_HAS_STRUCT_PAGE 0x1
+
/* Interface between the GEM object and its backing storage.
* get_pages() is called once prior to the use of the associated set
* of pages before to binding them into the GTT, and put_pages() is
@@ -2003,6 +2007,7 @@
*/
int (*get_pages)(struct drm_i915_gem_object *);
void (*put_pages)(struct drm_i915_gem_object *);
+
int (*dmabuf_export)(struct drm_i915_gem_object *);
void (*release)(struct drm_i915_gem_object *);
};
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ddc21d4..bb44bad 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4425,6 +4425,7 @@
}
static const struct drm_i915_gem_object_ops i915_gem_object_ops = {
+ .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE,
.get_pages = i915_gem_object_get_pages_gtt,
.put_pages = i915_gem_object_put_pages_gtt,
};
@@ -5261,7 +5262,7 @@
struct page *page;
/* Only default objects have per-page dirty tracking */
- if (WARN_ON(obj->ops != &i915_gem_object_ops))
+ if (WARN_ON((obj->ops->flags & I915_GEM_OBJECT_HAS_STRUCT_PAGE) == 0))
return NULL;
page = i915_gem_object_get_page(obj, n);
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 19fb0bdd..59e45b3 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -789,9 +789,10 @@
}
static const struct drm_i915_gem_object_ops i915_gem_userptr_ops = {
- .dmabuf_export = i915_gem_userptr_dmabuf_export,
+ .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE,
.get_pages = i915_gem_userptr_get_pages,
.put_pages = i915_gem_userptr_put_pages,
+ .dmabuf_export = i915_gem_userptr_dmabuf_export,
.release = i915_gem_userptr_release,
};
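
With the flags field in place, "does this object have struct pages" becomes a capability test instead of an identity comparison against one ops table, so userptr objects now pass the per-page dirty-tracking check too:

	static bool has_struct_page(const struct drm_i915_gem_object *obj)
	{
		/* any backend that sets the flag qualifies, not just the
		 * default shmem-backed ops
		 */
		return obj->ops->flags & I915_GEM_OBJECT_HAS_STRUCT_PAGE;
	}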
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 007ae83..4897728 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -3287,19 +3287,20 @@
#define PORT_HOTPLUG_STAT _MMIO(dev_priv->info.display_mmio_offset + 0x61114)
/*
- * HDMI/DP bits are gen4+
+ * HDMI/DP bits are g4x+
*
* WARNING: Bspec for hpd status bits on gen4 seems to be completely confused.
* Please check the detailed lore in the commit message for experimental
* evidence.
*/
-#define PORTD_HOTPLUG_LIVE_STATUS_G4X (1 << 29)
+/* Bspec says GM45 should match G4X/VLV/CHV, but reality disagrees */
+#define PORTD_HOTPLUG_LIVE_STATUS_GM45 (1 << 29)
+#define PORTC_HOTPLUG_LIVE_STATUS_GM45 (1 << 28)
+#define PORTB_HOTPLUG_LIVE_STATUS_GM45 (1 << 27)
+/* G4X/VLV/CHV DP/HDMI bits again match Bspec */
+#define PORTD_HOTPLUG_LIVE_STATUS_G4X (1 << 27)
#define PORTC_HOTPLUG_LIVE_STATUS_G4X (1 << 28)
-#define PORTB_HOTPLUG_LIVE_STATUS_G4X (1 << 27)
-/* VLV DP/HDMI bits again match Bspec */
-#define PORTD_HOTPLUG_LIVE_STATUS_VLV (1 << 27)
-#define PORTC_HOTPLUG_LIVE_STATUS_VLV (1 << 28)
-#define PORTB_HOTPLUG_LIVE_STATUS_VLV (1 << 29)
+#define PORTB_HOTPLUG_LIVE_STATUS_G4X (1 << 29)
#define PORTD_HOTPLUG_INT_STATUS (3 << 21)
#define PORTD_HOTPLUG_INT_LONG_PULSE (2 << 21)
#define PORTD_HOTPLUG_INT_SHORT_PULSE (1 << 21)
@@ -7514,7 +7515,7 @@
#define DPLL_CFGCR2_PDIV_7 (4<<2)
#define DPLL_CFGCR2_CENTRAL_FREQ_MASK (3)
-#define DPLL_CFGCR1(id) _MMIO_PIPE((id) - SKL_DPLL1, _DPLL1_CFGCR1, _DPLL2_CFGCR2)
+#define DPLL_CFGCR1(id) _MMIO_PIPE((id) - SKL_DPLL1, _DPLL1_CFGCR1, _DPLL2_CFGCR1)
#define DPLL_CFGCR2(id) _MMIO_PIPE((id) - SKL_DPLL1, _DPLL1_CFGCR2, _DPLL2_CFGCR2)
/* BXT display engine PLL */
diff --git a/drivers/gpu/drm/i915/i915_suspend.c b/drivers/gpu/drm/i915/i915_suspend.c
index a2aa09c..a8af594 100644
--- a/drivers/gpu/drm/i915/i915_suspend.c
+++ b/drivers/gpu/drm/i915/i915_suspend.c
@@ -49,7 +49,7 @@
dev_priv->regfile.savePP_ON_DELAYS = I915_READ(PCH_PP_ON_DELAYS);
dev_priv->regfile.savePP_OFF_DELAYS = I915_READ(PCH_PP_OFF_DELAYS);
dev_priv->regfile.savePP_DIVISOR = I915_READ(PCH_PP_DIVISOR);
- } else if (!IS_VALLEYVIEW(dev) && !IS_CHERRYVIEW(dev)) {
+ } else if (INTEL_INFO(dev)->gen <= 4) {
dev_priv->regfile.savePP_CONTROL = I915_READ(PP_CONTROL);
dev_priv->regfile.savePP_ON_DELAYS = I915_READ(PP_ON_DELAYS);
dev_priv->regfile.savePP_OFF_DELAYS = I915_READ(PP_OFF_DELAYS);
@@ -84,7 +84,7 @@
I915_WRITE(PCH_PP_OFF_DELAYS, dev_priv->regfile.savePP_OFF_DELAYS);
I915_WRITE(PCH_PP_DIVISOR, dev_priv->regfile.savePP_DIVISOR);
I915_WRITE(PCH_PP_CONTROL, dev_priv->regfile.savePP_CONTROL);
- } else if (!IS_VALLEYVIEW(dev) && !IS_CHERRYVIEW(dev)) {
+ } else if (INTEL_INFO(dev)->gen <= 4) {
I915_WRITE(PP_ON_DELAYS, dev_priv->regfile.savePP_ON_DELAYS);
I915_WRITE(PP_OFF_DELAYS, dev_priv->regfile.savePP_OFF_DELAYS);
I915_WRITE(PP_DIVISOR, dev_priv->regfile.savePP_DIVISOR);
diff --git a/drivers/gpu/drm/i915/intel_crt.c b/drivers/gpu/drm/i915/intel_crt.c
index 9c89df1..a7b4a524 100644
--- a/drivers/gpu/drm/i915/intel_crt.c
+++ b/drivers/gpu/drm/i915/intel_crt.c
@@ -71,22 +71,29 @@
struct intel_crt *crt = intel_encoder_to_crt(encoder);
enum intel_display_power_domain power_domain;
u32 tmp;
+ bool ret;
power_domain = intel_display_port_power_domain(encoder);
- if (!intel_display_power_is_enabled(dev_priv, power_domain))
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
+ ret = false;
+
tmp = I915_READ(crt->adpa_reg);
if (!(tmp & ADPA_DAC_ENABLE))
- return false;
+ goto out;
if (HAS_PCH_CPT(dev))
*pipe = PORT_TO_PIPE_CPT(tmp);
else
*pipe = PORT_TO_PIPE(tmp);
- return true;
+ ret = true;
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
static unsigned int intel_crt_get_flags(struct intel_encoder *encoder)
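
intel_crt_get_hw_state() above is the template for the rest of this series: replace the racy intel_display_power_is_enabled() check with intel_display_power_get_if_enabled(), which takes a wakeref only when the domain is already up, and funnel every exit through a single `out:` label that drops it. Condensed, with read_port_reg() and PORT_ENABLE_BIT as hypothetical stand-ins for the per-encoder register access:

	static bool encoder_get_hw_state_sketch(struct drm_i915_private *dev_priv,
						struct intel_encoder *encoder,
						enum pipe *pipe)
	{
		enum intel_display_power_domain power_domain;
		bool ret = false;
		u32 tmp;

		power_domain = intel_display_port_power_domain(encoder);
		/* refcount is bumped only if the domain is already enabled,
		 * so the well cannot be yanked while we read registers
		 */
		if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
			return false;

		tmp = read_port_reg(encoder);		/* hypothetical */
		if (!(tmp & PORT_ENABLE_BIT))		/* hypothetical */
			goto out;

		*pipe = PORT_TO_PIPE(tmp);
		ret = true;
	out:
		/* every path that took the reference releases it */
		intel_display_power_put(dev_priv, power_domain);
		return ret;
	}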
diff --git a/drivers/gpu/drm/i915/intel_csr.c b/drivers/gpu/drm/i915/intel_csr.c
index 9bb63a8..647d85e 100644
--- a/drivers/gpu/drm/i915/intel_csr.c
+++ b/drivers/gpu/drm/i915/intel_csr.c
@@ -240,6 +240,8 @@
I915_WRITE(dev_priv->csr.mmioaddr[i],
dev_priv->csr.mmiodata[i]);
}
+
+ dev_priv->csr.dc_state = 0;
}
static uint32_t *parse_csr_fw(struct drm_i915_private *dev_priv,
diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index e6408e5..0f3df2c 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1589,7 +1589,8 @@
DPLL_CFGCR2_KDIV(wrpll_params.kdiv) |
DPLL_CFGCR2_PDIV(wrpll_params.pdiv) |
wrpll_params.central_freq;
- } else if (intel_encoder->type == INTEL_OUTPUT_DISPLAYPORT) {
+ } else if (intel_encoder->type == INTEL_OUTPUT_DISPLAYPORT ||
+ intel_encoder->type == INTEL_OUTPUT_DP_MST) {
switch (crtc_state->port_clock / 2) {
case 81000:
ctrl1 |= DPLL_CTRL1_LINK_RATE(DPLL_CTRL1_LINK_RATE_810, 0);
@@ -1968,13 +1969,16 @@
enum transcoder cpu_transcoder;
enum intel_display_power_domain power_domain;
uint32_t tmp;
+ bool ret;
power_domain = intel_display_port_power_domain(intel_encoder);
- if (!intel_display_power_is_enabled(dev_priv, power_domain))
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
- if (!intel_encoder->get_hw_state(intel_encoder, &pipe))
- return false;
+ if (!intel_encoder->get_hw_state(intel_encoder, &pipe)) {
+ ret = false;
+ goto out;
+ }
if (port == PORT_A)
cpu_transcoder = TRANSCODER_EDP;
@@ -1986,23 +1990,33 @@
switch (tmp & TRANS_DDI_MODE_SELECT_MASK) {
case TRANS_DDI_MODE_SELECT_HDMI:
case TRANS_DDI_MODE_SELECT_DVI:
- return (type == DRM_MODE_CONNECTOR_HDMIA);
+ ret = type == DRM_MODE_CONNECTOR_HDMIA;
+ break;
case TRANS_DDI_MODE_SELECT_DP_SST:
- if (type == DRM_MODE_CONNECTOR_eDP)
- return true;
- return (type == DRM_MODE_CONNECTOR_DisplayPort);
+ ret = type == DRM_MODE_CONNECTOR_eDP ||
+ type == DRM_MODE_CONNECTOR_DisplayPort;
+ break;
+
case TRANS_DDI_MODE_SELECT_DP_MST:
/* if the transcoder is in MST state then
* the connector isn't connected */
- return false;
+ ret = false;
+ break;
case TRANS_DDI_MODE_SELECT_FDI:
- return (type == DRM_MODE_CONNECTOR_VGA);
+ ret = type == DRM_MODE_CONNECTOR_VGA;
+ break;
default:
- return false;
+ ret = false;
+ break;
}
+
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
bool intel_ddi_get_hw_state(struct intel_encoder *encoder,
@@ -2014,15 +2028,18 @@
enum intel_display_power_domain power_domain;
u32 tmp;
int i;
+ bool ret;
power_domain = intel_display_port_power_domain(encoder);
- if (!intel_display_power_is_enabled(dev_priv, power_domain))
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
+ ret = false;
+
tmp = I915_READ(DDI_BUF_CTL(port));
if (!(tmp & DDI_BUF_CTL_ENABLE))
- return false;
+ goto out;
if (port == PORT_A) {
tmp = I915_READ(TRANS_DDI_FUNC_CTL(TRANSCODER_EDP));
@@ -2040,25 +2057,32 @@
break;
}
- return true;
- } else {
- for (i = TRANSCODER_A; i <= TRANSCODER_C; i++) {
- tmp = I915_READ(TRANS_DDI_FUNC_CTL(i));
+ ret = true;
- if ((tmp & TRANS_DDI_PORT_MASK)
- == TRANS_DDI_SELECT_PORT(port)) {
- if ((tmp & TRANS_DDI_MODE_SELECT_MASK) == TRANS_DDI_MODE_SELECT_DP_MST)
- return false;
+ goto out;
+ }
- *pipe = i;
- return true;
- }
+ for (i = TRANSCODER_A; i <= TRANSCODER_C; i++) {
+ tmp = I915_READ(TRANS_DDI_FUNC_CTL(i));
+
+ if ((tmp & TRANS_DDI_PORT_MASK) == TRANS_DDI_SELECT_PORT(port)) {
+ if ((tmp & TRANS_DDI_MODE_SELECT_MASK) ==
+ TRANS_DDI_MODE_SELECT_DP_MST)
+ goto out;
+
+ *pipe = i;
+ ret = true;
+
+ goto out;
}
}
DRM_DEBUG_KMS("No pipe for ddi port %c found\n", port_name(port));
- return false;
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
void intel_ddi_enable_pipe_clock(struct intel_crtc *intel_crtc)
@@ -2507,12 +2531,14 @@
{
uint32_t val;
- if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_PLLS))
+ if (!intel_display_power_get_if_enabled(dev_priv, POWER_DOMAIN_PLLS))
return false;
val = I915_READ(WRPLL_CTL(pll->id));
hw_state->wrpll = val;
+ intel_display_power_put(dev_priv, POWER_DOMAIN_PLLS);
+
return val & WRPLL_PLL_ENABLE;
}
@@ -2522,12 +2548,14 @@
{
uint32_t val;
- if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_PLLS))
+ if (!intel_display_power_get_if_enabled(dev_priv, POWER_DOMAIN_PLLS))
return false;
val = I915_READ(SPLL_CTL);
hw_state->spll = val;
+ intel_display_power_put(dev_priv, POWER_DOMAIN_PLLS);
+
return val & SPLL_PLL_ENABLE;
}
@@ -2644,16 +2672,19 @@
uint32_t val;
unsigned int dpll;
const struct skl_dpll_regs *regs = skl_dpll_regs;
+ bool ret;
- if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_PLLS))
+ if (!intel_display_power_get_if_enabled(dev_priv, POWER_DOMAIN_PLLS))
return false;
+ ret = false;
+
/* DPLL0 is not part of the shared DPLLs, so pll->id is 0 for DPLL1 */
dpll = pll->id + 1;
val = I915_READ(regs[pll->id].ctl);
if (!(val & LCPLL_PLL_ENABLE))
- return false;
+ goto out;
val = I915_READ(DPLL_CTRL1);
hw_state->ctrl1 = (val >> (dpll * 6)) & 0x3f;
@@ -2663,8 +2694,12 @@
hw_state->cfgcr1 = I915_READ(regs[pll->id].cfgcr1);
hw_state->cfgcr2 = I915_READ(regs[pll->id].cfgcr2);
}
+ ret = true;
- return true;
+out:
+ intel_display_power_put(dev_priv, POWER_DOMAIN_PLLS);
+
+ return ret;
}
static void skl_shared_dplls_init(struct drm_i915_private *dev_priv)
@@ -2931,13 +2966,16 @@
{
enum port port = (enum port)pll->id; /* 1:1 port->PLL mapping */
uint32_t val;
+ bool ret;
- if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_PLLS))
+ if (!intel_display_power_get_if_enabled(dev_priv, POWER_DOMAIN_PLLS))
return false;
+ ret = false;
+
val = I915_READ(BXT_PORT_PLL_ENABLE(port));
if (!(val & PORT_PLL_ENABLE))
- return false;
+ goto out;
hw_state->ebb0 = I915_READ(BXT_PORT_PLL_EBB_0(port));
hw_state->ebb0 &= PORT_PLL_P1_MASK | PORT_PLL_P2_MASK;
@@ -2984,7 +3022,12 @@
I915_READ(BXT_PORT_PCS_DW12_LN23(port)));
hw_state->pcsdw12 &= LANE_STAGGER_MASK | LANESTAGGER_STRAP_OVRD;
- return true;
+ ret = true;
+
+out:
+ intel_display_power_put(dev_priv, POWER_DOMAIN_PLLS);
+
+ return ret;
}
static void bxt_shared_dplls_init(struct drm_i915_private *dev_priv)
@@ -3119,11 +3162,15 @@
{
u32 temp;
- if (intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_AUDIO)) {
+ if (intel_display_power_get_if_enabled(dev_priv, POWER_DOMAIN_AUDIO)) {
temp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
+
+ intel_display_power_put(dev_priv, POWER_DOMAIN_AUDIO);
+
if (temp & AUDIO_OUTPUT_ENABLE(intel_crtc->pipe))
return true;
}
+
return false;
}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 5feb657..46947ff 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -1351,18 +1351,21 @@
bool cur_state;
enum transcoder cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv,
pipe);
+ enum intel_display_power_domain power_domain;
/* if we need the pipe quirk it must be always on */
if ((pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE) ||
(pipe == PIPE_B && dev_priv->quirks & QUIRK_PIPEB_FORCE))
state = true;
- if (!intel_display_power_is_enabled(dev_priv,
- POWER_DOMAIN_TRANSCODER(cpu_transcoder))) {
- cur_state = false;
- } else {
+ power_domain = POWER_DOMAIN_TRANSCODER(cpu_transcoder);
+ if (intel_display_power_get_if_enabled(dev_priv, power_domain)) {
u32 val = I915_READ(PIPECONF(cpu_transcoder));
cur_state = !!(val & PIPECONF_ENABLE);
+
+ intel_display_power_put(dev_priv, power_domain);
+ } else {
+ cur_state = false;
}
I915_STATE_WARN(cur_state != state,
@@ -8171,18 +8174,22 @@
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ enum intel_display_power_domain power_domain;
uint32_t tmp;
+ bool ret;
- if (!intel_display_power_is_enabled(dev_priv,
- POWER_DOMAIN_PIPE(crtc->pipe)))
+ power_domain = POWER_DOMAIN_PIPE(crtc->pipe);
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
pipe_config->cpu_transcoder = (enum transcoder) crtc->pipe;
pipe_config->shared_dpll = DPLL_ID_PRIVATE;
+ ret = false;
+
tmp = I915_READ(PIPECONF(crtc->pipe));
if (!(tmp & PIPECONF_ENABLE))
- return false;
+ goto out;
if (IS_G4X(dev) || IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev)) {
switch (tmp & PIPECONF_BPC_MASK) {
@@ -8262,7 +8269,12 @@
pipe_config->base.adjusted_mode.crtc_clock =
pipe_config->port_clock / pipe_config->pixel_multiplier;
- return true;
+ ret = true;
+
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
static void ironlake_init_pch_refclk(struct drm_device *dev)
@@ -9366,18 +9378,21 @@
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ enum intel_display_power_domain power_domain;
uint32_t tmp;
+ bool ret;
- if (!intel_display_power_is_enabled(dev_priv,
- POWER_DOMAIN_PIPE(crtc->pipe)))
+ power_domain = POWER_DOMAIN_PIPE(crtc->pipe);
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
pipe_config->cpu_transcoder = (enum transcoder) crtc->pipe;
pipe_config->shared_dpll = DPLL_ID_PRIVATE;
+ ret = false;
tmp = I915_READ(PIPECONF(crtc->pipe));
if (!(tmp & PIPECONF_ENABLE))
- return false;
+ goto out;
switch (tmp & PIPECONF_BPC_MASK) {
case PIPECONF_6BPC:
@@ -9440,7 +9455,12 @@
ironlake_get_pfit_config(crtc, pipe_config);
- return true;
+ ret = true;
+
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
static void assert_can_disable_lcpll(struct drm_i915_private *dev_priv)
@@ -9950,12 +9970,17 @@
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- enum intel_display_power_domain pfit_domain;
+ enum intel_display_power_domain power_domain;
+ unsigned long power_domain_mask;
uint32_t tmp;
+ bool ret;
- if (!intel_display_power_is_enabled(dev_priv,
- POWER_DOMAIN_PIPE(crtc->pipe)))
+ power_domain = POWER_DOMAIN_PIPE(crtc->pipe);
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
+ power_domain_mask = BIT(power_domain);
+
+ ret = false;
pipe_config->cpu_transcoder = (enum transcoder) crtc->pipe;
pipe_config->shared_dpll = DPLL_ID_PRIVATE;
@@ -9982,13 +10007,14 @@
pipe_config->cpu_transcoder = TRANSCODER_EDP;
}
- if (!intel_display_power_is_enabled(dev_priv,
- POWER_DOMAIN_TRANSCODER(pipe_config->cpu_transcoder)))
- return false;
+ power_domain = POWER_DOMAIN_TRANSCODER(pipe_config->cpu_transcoder);
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
+ goto out;
+ power_domain_mask |= BIT(power_domain);
tmp = I915_READ(PIPECONF(pipe_config->cpu_transcoder));
if (!(tmp & PIPECONF_ENABLE))
- return false;
+ goto out;
haswell_get_ddi_port_state(crtc, pipe_config);
@@ -9998,14 +10024,14 @@
skl_init_scalers(dev, crtc, pipe_config);
}
- pfit_domain = POWER_DOMAIN_PIPE_PANEL_FITTER(crtc->pipe);
-
if (INTEL_INFO(dev)->gen >= 9) {
pipe_config->scaler_state.scaler_id = -1;
pipe_config->scaler_state.scaler_users &= ~(1 << SKL_CRTC_INDEX);
}
- if (intel_display_power_is_enabled(dev_priv, pfit_domain)) {
+ power_domain = POWER_DOMAIN_PIPE_PANEL_FITTER(crtc->pipe);
+ if (intel_display_power_get_if_enabled(dev_priv, power_domain)) {
+ power_domain_mask |= BIT(power_domain);
if (INTEL_INFO(dev)->gen >= 9)
skylake_get_pfit_config(crtc, pipe_config);
else
@@ -10023,7 +10049,13 @@
pipe_config->pixel_multiplier = 1;
}
- return true;
+ ret = true;
+
+out:
+ for_each_power_domain(power_domain, power_domain_mask)
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
static void i845_update_cursor(struct drm_crtc *crtc, u32 base, bool on)
@@ -13630,7 +13662,7 @@
{
uint32_t val;
- if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_PLLS))
+ if (!intel_display_power_get_if_enabled(dev_priv, POWER_DOMAIN_PLLS))
return false;
val = I915_READ(PCH_DPLL(pll->id));
@@ -13638,6 +13670,8 @@
hw_state->fp0 = I915_READ(PCH_FP0(pll->id));
hw_state->fp1 = I915_READ(PCH_FP1(pll->id));
+ intel_display_power_put(dev_priv, POWER_DOMAIN_PLLS);
+
return val & DPLL_VCO_ENABLE;
}
@@ -15568,10 +15602,12 @@
* level, just check if the power well is enabled instead of trying to
* follow the "don't touch the power well if we don't need it" policy
* the rest of the driver uses. */
- if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_VGA))
+ if (!intel_display_power_get_if_enabled(dev_priv, POWER_DOMAIN_VGA))
return;
i915_redisable_vga_power_on(dev);
+
+ intel_display_power_put(dev_priv, POWER_DOMAIN_VGA);
}
static bool primary_get_hw_state(struct intel_plane *plane)
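
haswell_get_pipe_config() needs several domains (pipe, transcoder, panel fitter), so it tracks which _get_if_enabled() calls succeeded in a bitmask and releases exactly those on the way out. The shape of that, stripped of the readout itself:

	static void multi_domain_readout_sketch(struct drm_i915_private *dev_priv,
						enum intel_display_power_domain d1,
						enum intel_display_power_domain d2)
	{
		unsigned long power_domain_mask = 0;
		enum intel_display_power_domain domain;

		if (intel_display_power_get_if_enabled(dev_priv, d1))
			power_domain_mask |= BIT(d1);
		if (intel_display_power_get_if_enabled(dev_priv, d2))
			power_domain_mask |= BIT(d2);

		/* ... read hw state for whichever domains came up ... */

		/* put exactly the references that were taken */
		for_each_power_domain(domain, power_domain_mask)
			intel_display_power_put(dev_priv, domain);
	}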
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 796e3d3..1d8de43 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -2362,15 +2362,18 @@
struct drm_i915_private *dev_priv = dev->dev_private;
enum intel_display_power_domain power_domain;
u32 tmp;
+ bool ret;
power_domain = intel_display_port_power_domain(encoder);
- if (!intel_display_power_is_enabled(dev_priv, power_domain))
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
+ ret = false;
+
tmp = I915_READ(intel_dp->output_reg);
if (!(tmp & DP_PORT_EN))
- return false;
+ goto out;
if (IS_GEN7(dev) && port == PORT_A) {
*pipe = PORT_TO_PIPE_CPT(tmp);
@@ -2381,7 +2384,9 @@
u32 trans_dp = I915_READ(TRANS_DP_CTL(p));
if (TRANS_DP_PIPE_TO_PORT(trans_dp) == port) {
*pipe = p;
- return true;
+ ret = true;
+
+ goto out;
}
}
@@ -2393,7 +2398,12 @@
*pipe = PORT_TO_PIPE(tmp);
}
- return true;
+ ret = true;
+
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
static void intel_dp_get_config(struct intel_encoder *encoder,
@@ -4493,20 +4503,20 @@
return I915_READ(PORT_HOTPLUG_STAT) & bit;
}
-static bool vlv_digital_port_connected(struct drm_i915_private *dev_priv,
- struct intel_digital_port *port)
+static bool gm45_digital_port_connected(struct drm_i915_private *dev_priv,
+ struct intel_digital_port *port)
{
u32 bit;
switch (port->port) {
case PORT_B:
- bit = PORTB_HOTPLUG_LIVE_STATUS_VLV;
+ bit = PORTB_HOTPLUG_LIVE_STATUS_GM45;
break;
case PORT_C:
- bit = PORTC_HOTPLUG_LIVE_STATUS_VLV;
+ bit = PORTC_HOTPLUG_LIVE_STATUS_GM45;
break;
case PORT_D:
- bit = PORTD_HOTPLUG_LIVE_STATUS_VLV;
+ bit = PORTD_HOTPLUG_LIVE_STATUS_GM45;
break;
default:
MISSING_CASE(port->port);
@@ -4558,8 +4568,8 @@
return cpt_digital_port_connected(dev_priv, port);
else if (IS_BROXTON(dev_priv))
return bxt_digital_port_connected(dev_priv, port);
- else if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
- return vlv_digital_port_connected(dev_priv, port);
+ else if (IS_GM45(dev_priv))
+ return gm45_digital_port_connected(dev_priv, port);
else
return g4x_digital_port_connected(dev_priv, port);
}
diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
index 8888793..0b8eefc 100644
--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
@@ -215,27 +215,46 @@
}
}
+/*
+ * Pick training pattern for channel equalization. Training Pattern 3 for HBR2
+ * or 1.2 devices that support it, Training Pattern 2 otherwise.
+ */
+static u32 intel_dp_training_pattern(struct intel_dp *intel_dp)
+{
+ u32 training_pattern = DP_TRAINING_PATTERN_2;
+ bool source_tps3, sink_tps3;
+
+ /*
+ * Intel platforms that support HBR2 also support TPS3. TPS3 support is
+ * also mandatory for downstream devices that support HBR2. However, not
+ * all sinks follow the spec.
+ *
+ * Due to WaDisableHBR2, SKL < B0 is the only exception: there TPS3 is
+ * supported by the source but still not enabled.
+ */
+ source_tps3 = intel_dp_source_supports_hbr2(intel_dp);
+ sink_tps3 = drm_dp_tps3_supported(intel_dp->dpcd);
+
+ if (source_tps3 && sink_tps3) {
+ training_pattern = DP_TRAINING_PATTERN_3;
+ } else if (intel_dp->link_rate == 540000) {
+ if (!source_tps3)
+ DRM_DEBUG_KMS("5.4 Gbps link rate without source HBR2/TPS3 support\n");
+ if (!sink_tps3)
+ DRM_DEBUG_KMS("5.4 Gbps link rate without sink TPS3 support\n");
+ }
+
+ return training_pattern;
+}
+
static void
intel_dp_link_training_channel_equalization(struct intel_dp *intel_dp)
{
bool channel_eq = false;
int tries, cr_tries;
- uint32_t training_pattern = DP_TRAINING_PATTERN_2;
+ u32 training_pattern;
- /*
- * Training Pattern 3 for HBR2 or 1.2 devices that support it.
- *
- * Intel platforms that support HBR2 also support TPS3. TPS3 support is
- * also mandatory for downstream devices that support HBR2.
- *
- * Due to WaDisableHBR2 SKL < B0 is the only exception where TPS3 is
- * supported but still not enabled.
- */
- if (intel_dp_source_supports_hbr2(intel_dp) &&
- drm_dp_tps3_supported(intel_dp->dpcd))
- training_pattern = DP_TRAINING_PATTERN_3;
- else if (intel_dp->link_rate == 540000)
- DRM_ERROR("5.4 Gbps link rate without HBR2/TPS3 support\n");
+ training_pattern = intel_dp_training_pattern(intel_dp);
/* channel equalization */
if (!intel_dp_set_link_train(intel_dp,
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index ea54158..df7f3cb 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -1428,6 +1428,8 @@
enum intel_display_power_domain domain);
void intel_display_power_get(struct drm_i915_private *dev_priv,
enum intel_display_power_domain domain);
+bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
+ enum intel_display_power_domain domain);
void intel_display_power_put(struct drm_i915_private *dev_priv,
enum intel_display_power_domain domain);
@@ -1514,6 +1516,7 @@
enable_rpm_wakeref_asserts(dev_priv)
void intel_runtime_pm_get(struct drm_i915_private *dev_priv);
+bool intel_runtime_pm_get_if_in_use(struct drm_i915_private *dev_priv);
void intel_runtime_pm_get_noresume(struct drm_i915_private *dev_priv);
void intel_runtime_pm_put(struct drm_i915_private *dev_priv);
diff --git a/drivers/gpu/drm/i915/intel_dsi.c b/drivers/gpu/drm/i915/intel_dsi.c
index 44742fa..0193c62a 100644
--- a/drivers/gpu/drm/i915/intel_dsi.c
+++ b/drivers/gpu/drm/i915/intel_dsi.c
@@ -664,13 +664,16 @@
struct drm_device *dev = encoder->base.dev;
enum intel_display_power_domain power_domain;
enum port port;
+ bool ret;
DRM_DEBUG_KMS("\n");
power_domain = intel_display_port_power_domain(encoder);
- if (!intel_display_power_is_enabled(dev_priv, power_domain))
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
+ ret = false;
+
/* XXX: this only works for one DSI output */
for_each_dsi_port(port, intel_dsi->ports) {
i915_reg_t ctrl_reg = IS_BROXTON(dev) ?
@@ -691,12 +694,16 @@
if (dpi_enabled || (func & CMD_MODE_DATA_WIDTH_MASK)) {
if (I915_READ(MIPI_DEVICE_READY(port)) & DEVICE_READY) {
*pipe = port == PORT_A ? PIPE_A : PIPE_B;
- return true;
+ ret = true;
+
+ goto out;
}
}
}
+out:
+ intel_display_power_put(dev_priv, power_domain);
- return false;
+ return ret;
}
static void intel_dsi_get_config(struct intel_encoder *encoder,
diff --git a/drivers/gpu/drm/i915/intel_dsi_panel_vbt.c b/drivers/gpu/drm/i915/intel_dsi_panel_vbt.c
index a5e99ac..e8113ad 100644
--- a/drivers/gpu/drm/i915/intel_dsi_panel_vbt.c
+++ b/drivers/gpu/drm/i915/intel_dsi_panel_vbt.c
@@ -204,10 +204,28 @@
struct drm_device *dev = intel_dsi->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ if (dev_priv->vbt.dsi.seq_version >= 3)
+ data++;
+
gpio = *data++;
/* pull up/down */
- action = *data++;
+ action = *data++ & 1;
+
+ if (gpio >= ARRAY_SIZE(gtable)) {
+ DRM_DEBUG_KMS("unknown gpio %u\n", gpio);
+ goto out;
+ }
+
+ if (!IS_VALLEYVIEW(dev_priv)) {
+ DRM_DEBUG_KMS("GPIO element not supported on this platform\n");
+ goto out;
+ }
+
+ if (dev_priv->vbt.dsi.seq_version >= 3) {
+ DRM_DEBUG_KMS("GPIO element v3 not supported\n");
+ goto out;
+ }
function = gtable[gpio].function_reg;
pad = gtable[gpio].pad_reg;
@@ -226,6 +244,7 @@
vlv_gpio_nc_write(dev_priv, pad, val);
mutex_unlock(&dev_priv->sb_lock);
+out:
return data;
}
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index 4a77639..cb5d1b1 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -880,15 +880,18 @@
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
enum intel_display_power_domain power_domain;
u32 tmp;
+ bool ret;
power_domain = intel_display_port_power_domain(encoder);
- if (!intel_display_power_is_enabled(dev_priv, power_domain))
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
+ ret = false;
+
tmp = I915_READ(intel_hdmi->hdmi_reg);
if (!(tmp & SDVO_ENABLE))
- return false;
+ goto out;
if (HAS_PCH_CPT(dev))
*pipe = PORT_TO_PIPE_CPT(tmp);
@@ -897,7 +900,12 @@
else
*pipe = PORT_TO_PIPE(tmp);
- return true;
+ ret = true;
+
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
static void intel_hdmi_get_config(struct intel_encoder *encoder,
diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c
index 25254b5..deb8282 100644
--- a/drivers/gpu/drm/i915/intel_i2c.c
+++ b/drivers/gpu/drm/i915/intel_i2c.c
@@ -683,7 +683,7 @@
return 0;
err:
- while (--pin) {
+ while (pin--) {
if (!intel_gmbus_is_valid_pin(dev_priv, pin))
continue;
diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
index 0da0240..bc04d8d 100644
--- a/drivers/gpu/drm/i915/intel_lvds.c
+++ b/drivers/gpu/drm/i915/intel_lvds.c
@@ -75,22 +75,30 @@
struct intel_lvds_encoder *lvds_encoder = to_lvds_encoder(&encoder->base);
enum intel_display_power_domain power_domain;
u32 tmp;
+ bool ret;
power_domain = intel_display_port_power_domain(encoder);
- if (!intel_display_power_is_enabled(dev_priv, power_domain))
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
return false;
+ ret = false;
+
tmp = I915_READ(lvds_encoder->reg);
if (!(tmp & LVDS_PORT_EN))
- return false;
+ goto out;
if (HAS_PCH_CPT(dev))
*pipe = PORT_TO_PIPE_CPT(tmp);
else
*pipe = PORT_TO_PIPE(tmp);
- return true;
+ ret = true;
+
+out:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
static void intel_lvds_get_config(struct intel_encoder *encoder,
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index eb5fa05..b28c29f 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -1783,16 +1783,20 @@
const struct intel_plane_state *pstate,
uint32_t mem_value)
{
- int bpp = pstate->base.fb ? pstate->base.fb->bits_per_pixel / 8 : 0;
+ /*
+ * We treat the cursor plane as always-on for the purposes of watermark
+ * calculation. Until we have two-stage watermark programming merged,
+ * this is necessary to avoid flickering.
+ */
+ int cpp = 4;
+ int width = pstate->visible ? pstate->base.crtc_w : 64;
- if (!cstate->base.active || !pstate->visible)
+ if (!cstate->base.active)
return 0;
return ilk_wm_method2(ilk_pipe_pixel_rate(cstate),
cstate->base.adjusted_mode.crtc_htotal,
- drm_rect_width(&pstate->dst),
- bpp,
- mem_value);
+ width, cpp, mem_value);
}
/* Only for WM_LP. */
@@ -2825,7 +2829,10 @@
memset(ddb, 0, sizeof(*ddb));
for_each_pipe(dev_priv, pipe) {
- if (!intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_PIPE(pipe)))
+ enum intel_display_power_domain power_domain;
+
+ power_domain = POWER_DOMAIN_PIPE(pipe);
+ if (!intel_display_power_get_if_enabled(dev_priv, power_domain))
continue;
for_each_plane(dev_priv, pipe, plane) {
@@ -2837,6 +2844,8 @@
val = I915_READ(CUR_BUF_CFG(pipe));
skl_ddb_entry_init_from_hw(&ddb->plane[pipe][PLANE_CURSOR],
val);
+
+ intel_display_power_put(dev_priv, power_domain);
}
}
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index ddbdbff..678ed34 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -470,6 +470,43 @@
}
}
+static void gen9_write_dc_state(struct drm_i915_private *dev_priv,
+ u32 state)
+{
+ int rewrites = 0;
+ int rereads = 0;
+ u32 v;
+
+ I915_WRITE(DC_STATE_EN, state);
+
+ /* It has been observed that disabling the dc6 state sometimes
+ * doesn't stick and the DMC keeps returning the old value. Make sure
+ * the write really sticks often enough, and force a rewrite, until
+ * we are confident that the state is exactly what we want.
+ */
+ do {
+ v = I915_READ(DC_STATE_EN);
+
+ if (v != state) {
+ I915_WRITE(DC_STATE_EN, state);
+ rewrites++;
+ rereads = 0;
+ } else if (rereads++ > 5) {
+ break;
+ }
+
+ } while (rewrites < 100);
+
+ if (v != state)
+ DRM_ERROR("Writing dc state to 0x%x failed, now 0x%x\n",
+ state, v);
+
+ /* Most of the time a single retry is enough; avoid spamming the log */
+ if (rewrites > 1)
+ DRM_DEBUG_KMS("Rewrote dc state to 0x%x %d times\n",
+ state, rewrites);
+}
+
static void gen9_set_dc_state(struct drm_i915_private *dev_priv, uint32_t state)
{
uint32_t val;
@@ -494,10 +531,18 @@
val = I915_READ(DC_STATE_EN);
DRM_DEBUG_KMS("Setting DC state from %02x to %02x\n",
val & mask, state);
+
+ /* Check if DMC is ignoring our DC state requests */
+ if ((val & mask) != dev_priv->csr.dc_state)
+ DRM_ERROR("DC state mismatch (0x%x -> 0x%x)\n",
+ dev_priv->csr.dc_state, val & mask);
+
val &= ~mask;
val |= state;
- I915_WRITE(DC_STATE_EN, val);
- POSTING_READ(DC_STATE_EN);
+
+ gen9_write_dc_state(dev_priv, val);
+
+ dev_priv->csr.dc_state = val & mask;
}
void bxt_enable_dc9(struct drm_i915_private *dev_priv)
@@ -1442,6 +1487,22 @@
chv_set_pipe_power_well(dev_priv, power_well, false);
}
+static void
+__intel_display_power_get_domain(struct drm_i915_private *dev_priv,
+ enum intel_display_power_domain domain)
+{
+ struct i915_power_domains *power_domains = &dev_priv->power_domains;
+ struct i915_power_well *power_well;
+ int i;
+
+ for_each_power_well(i, power_well, BIT(domain), power_domains) {
+ if (!power_well->count++)
+ intel_power_well_enable(dev_priv, power_well);
+ }
+
+ power_domains->domain_use_count[domain]++;
+}
+
/**
* intel_display_power_get - grab a power domain reference
* @dev_priv: i915 device instance
@@ -1457,24 +1518,53 @@
void intel_display_power_get(struct drm_i915_private *dev_priv,
enum intel_display_power_domain domain)
{
- struct i915_power_domains *power_domains;
- struct i915_power_well *power_well;
- int i;
+ struct i915_power_domains *power_domains = &dev_priv->power_domains;
intel_runtime_pm_get(dev_priv);
- power_domains = &dev_priv->power_domains;
+ mutex_lock(&power_domains->lock);
+
+ __intel_display_power_get_domain(dev_priv, domain);
+
+ mutex_unlock(&power_domains->lock);
+}
+
+/**
+ * intel_display_power_get_if_enabled - grab a reference for an enabled display power domain
+ * @dev_priv: i915 device instance
+ * @domain: power domain to reference
+ *
+ * This function grabs a power domain reference for @domain if, and only if,
+ * the power domain is currently enabled, and it ensures that the domain and
+ * all its parents stay powered up. Users should only grab a reference to the
+ * innermost power domain they need.
+ *
+ * Any power domain reference obtained by this function must have a symmetric
+ * call to intel_display_power_put() to release the reference again. Returns
+ * true if the reference was taken, false otherwise.
+ */
+bool intel_display_power_get_if_enabled(struct drm_i915_private *dev_priv,
+ enum intel_display_power_domain domain)
+{
+ struct i915_power_domains *power_domains = &dev_priv->power_domains;
+ bool is_enabled;
+
+ if (!intel_runtime_pm_get_if_in_use(dev_priv))
+ return false;
mutex_lock(&power_domains->lock);
- for_each_power_well(i, power_well, BIT(domain), power_domains) {
- if (!power_well->count++)
- intel_power_well_enable(dev_priv, power_well);
+ if (__intel_display_power_is_enabled(dev_priv, domain)) {
+ __intel_display_power_get_domain(dev_priv, domain);
+ is_enabled = true;
+ } else {
+ is_enabled = false;
}
- power_domains->domain_use_count[domain]++;
-
mutex_unlock(&power_domains->lock);
+
+ if (!is_enabled)
+ intel_runtime_pm_put(dev_priv);
+
+ return is_enabled;
}
/**
@@ -2246,6 +2336,43 @@
}
/**
+ * intel_runtime_pm_get_if_in_use - grab a runtime pm reference if device in use
+ * @dev_priv: i915 device instance
+ *
+ * This function grabs a device-level runtime pm reference if the device is
+ * already in use and ensures that it is powered up. It returns false, and
+ * takes no reference, if the device is idle.
+ *
+ * Any runtime pm reference obtained by this function must have a symmetric
+ * call to intel_runtime_pm_put() to release the reference again.
+ */
+bool intel_runtime_pm_get_if_in_use(struct drm_i915_private *dev_priv)
+{
+ struct drm_device *dev = dev_priv->dev;
+ struct device *device = &dev->pdev->dev;
+ int ret;
+
+ if (!IS_ENABLED(CONFIG_PM))
+ return true;
+
+ ret = pm_runtime_get_if_in_use(device);
+
+ /*
+ * In cases where runtime PM is disabled by the RPM core and we get an
+ * -EINVAL return value, we are not supposed to call this function,
+ * since the power state is undefined. At the moment this applies to the
+ * late/early system suspend/resume handlers.
+ */
+ WARN_ON_ONCE(ret < 0);
+ if (ret <= 0)
+ return false;
+
+ atomic_inc(&dev_priv->pm.wakeref_count);
+ assert_rpm_wakelock_held(dev_priv);
+
+ return true;
+}
+
+/**
* intel_runtime_pm_get_noresume - grab a runtime pm reference
* @dev_priv: i915 device instance
*
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 78f520d..e3acc35 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1520,7 +1520,7 @@
DMA_BIDIRECTIONAL);
if (dma_mapping_error(pdev, addr)) {
- while (--i) {
+ while (i--) {
dma_unmap_page(pdev, ttm_dma->dma_address[i],
PAGE_SIZE, DMA_BIDIRECTIONAL);
ttm_dma->dma_address[i] = 0;
diff --git a/drivers/gpu/drm/nouveau/nouveau_display.c b/drivers/gpu/drm/nouveau/nouveau_display.c
index 24be27d..20935eb 100644
--- a/drivers/gpu/drm/nouveau/nouveau_display.c
+++ b/drivers/gpu/drm/nouveau/nouveau_display.c
@@ -635,10 +635,6 @@
nv_crtc->lut.depth = 0;
}
- /* Make sure that drm and hw vblank irqs get resumed if needed. */
- for (head = 0; head < dev->mode_config.num_crtc; head++)
- drm_vblank_on(dev, head);
-
/* This should ensure we don't hit a locking problem when someone
* wakes us up via a connector. We should never go into suspend
* while the display is on anyways.
@@ -648,6 +644,10 @@
drm_helper_resume_force_mode(dev);
+ /* Make sure that drm and hw vblank irqs get resumed if needed. */
+ for (head = 0; head < dev->mode_config.num_crtc; head++)
+ drm_vblank_on(dev, head);
+
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
diff --git a/drivers/gpu/drm/nouveau/nouveau_platform.c b/drivers/gpu/drm/nouveau/nouveau_platform.c
index 8a70cec..2dfe58a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_platform.c
+++ b/drivers/gpu/drm/nouveau/nouveau_platform.c
@@ -24,7 +24,7 @@
static int nouveau_platform_probe(struct platform_device *pdev)
{
const struct nvkm_device_tegra_func *func;
- struct nvkm_device *device;
+ struct nvkm_device *device = NULL;
struct drm_device *drm;
int ret;
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
index 7f8a427..e7e581d 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
@@ -252,32 +252,40 @@
if (!(tdev = kzalloc(sizeof(*tdev), GFP_KERNEL)))
return -ENOMEM;
- *pdevice = &tdev->device;
+
tdev->func = func;
tdev->pdev = pdev;
tdev->irq = -1;
tdev->vdd = devm_regulator_get(&pdev->dev, "vdd");
- if (IS_ERR(tdev->vdd))
- return PTR_ERR(tdev->vdd);
+ if (IS_ERR(tdev->vdd)) {
+ ret = PTR_ERR(tdev->vdd);
+ goto free;
+ }
tdev->rst = devm_reset_control_get(&pdev->dev, "gpu");
- if (IS_ERR(tdev->rst))
- return PTR_ERR(tdev->rst);
+ if (IS_ERR(tdev->rst)) {
+ ret = PTR_ERR(tdev->rst);
+ goto free;
+ }
tdev->clk = devm_clk_get(&pdev->dev, "gpu");
- if (IS_ERR(tdev->clk))
- return PTR_ERR(tdev->clk);
+ if (IS_ERR(tdev->clk)) {
+ ret = PTR_ERR(tdev->clk);
+ goto free;
+ }
tdev->clk_pwr = devm_clk_get(&pdev->dev, "pwr");
- if (IS_ERR(tdev->clk_pwr))
- return PTR_ERR(tdev->clk_pwr);
+ if (IS_ERR(tdev->clk_pwr)) {
+ ret = PTR_ERR(tdev->clk_pwr);
+ goto free;
+ }
nvkm_device_tegra_probe_iommu(tdev);
ret = nvkm_device_tegra_power_up(tdev);
if (ret)
- return ret;
+ goto remove;
tdev->gpu_speedo = tegra_sku_info.gpu_speedo_value;
ret = nvkm_device_ctor(&nvkm_device_tegra_func, NULL, &pdev->dev,
@@ -285,9 +293,19 @@
cfg, dbg, detect, mmio, subdev_mask,
&tdev->device);
if (ret)
- return ret;
+ goto powerdown;
+
+ *pdevice = &tdev->device;
return 0;
+
+powerdown:
+ nvkm_device_tegra_power_down(tdev);
+remove:
+ nvkm_device_tegra_remove_iommu(tdev);
+free:
+ kfree(tdev);
+ return ret;
}
#else
int
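
The tegra probe rework above is the canonical goto-ladder: each failure point jumps to a label that undoes everything set up so far, in reverse order, and *pdevice is only published once construction has fully succeeded. The general shape, where struct thing and the setup_a/setup_b/teardown_a helpers are hypothetical stand-ins:

    static int probe(struct thing **out)
    {
            struct thing *t;
            int ret;

            t = kzalloc(sizeof(*t), GFP_KERNEL);
            if (!t)
                    return -ENOMEM;

            ret = setup_a(t);
            if (ret)
                    goto free;
            ret = setup_b(t);
            if (ret)
                    goto undo_a;

            *out = t;               /* publish only on full success */
            return 0;

    undo_a:
            teardown_a(t);
    free:
            kfree(t);
            return ret;
    }
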
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dport.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dport.c
index 74e2f7c..9688970 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dport.c
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dport.c
@@ -328,6 +328,7 @@
.outp = outp,
}, *dp = &_dp;
u32 datarate = 0;
+ u8 pwr;
int ret;
if (!outp->base.info.location && disp->func->sor.magic)
@@ -355,6 +356,15 @@
/* disable link interrupt handling during link training */
nvkm_notify_put(&outp->irq);
+ /* ensure sink is not in a low-power state */
+ if (!nvkm_rdaux(outp->aux, DPCD_SC00, &pwr, 1)) {
+ if ((pwr & DPCD_SC00_SET_POWER) != DPCD_SC00_SET_POWER_D0) {
+ pwr &= ~DPCD_SC00_SET_POWER;
+ pwr |= DPCD_SC00_SET_POWER_D0;
+ nvkm_wraux(outp->aux, DPCD_SC00, &pwr, 1);
+ }
+ }
+
/* enable down-spreading and execute pre-train script from vbios */
dp_link_train_init(dp, outp->dpcd[3] & 0x01);
diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dport.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dport.h
index 9596290..6e10c5e 100644
--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dport.h
+++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dport.h
@@ -71,5 +71,11 @@
#define DPCD_LS0C_LANE1_POST_CURSOR2 0x0c
#define DPCD_LS0C_LANE0_POST_CURSOR2 0x03
+/* DPCD Sink Control */
+#define DPCD_SC00 0x00600
+#define DPCD_SC00_SET_POWER 0x03
+#define DPCD_SC00_SET_POWER_D0 0x01
+#define DPCD_SC00_SET_POWER_D3 0x03
+
void nvkm_dp_train(struct work_struct *);
#endif
diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c
index 2ae8577..7c2e782 100644
--- a/drivers/gpu/drm/qxl/qxl_ioctl.c
+++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
@@ -168,7 +168,8 @@
cmd->command_size))
return -EFAULT;
- reloc_info = kmalloc(sizeof(struct qxl_reloc_info) * cmd->relocs_num, GFP_KERNEL);
+ reloc_info = kmalloc_array(cmd->relocs_num,
+ sizeof(struct qxl_reloc_info), GFP_KERNEL);
if (!reloc_info)
return -ENOMEM;
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 3d031b5..9f029dd 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -68,5 +68,5 @@
struct vm_area_struct *area)
{
WARN_ONCE(1, "not implemented");
- return ENOSYS;
+ return -ENOSYS;
}
diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
index 298ea1c..2b9ba03 100644
--- a/drivers/gpu/drm/radeon/radeon_display.c
+++ b/drivers/gpu/drm/radeon/radeon_display.c
@@ -403,7 +403,8 @@
struct drm_crtc *crtc = &radeon_crtc->base;
unsigned long flags;
int r;
- int vpos, hpos, stat, min_udelay;
+ int vpos, hpos, stat, min_udelay = 0;
+ unsigned repcnt = 4;
struct drm_vblank_crtc *vblank = &crtc->dev->vblank[work->crtc_id];
down_read(&rdev->exclusive_lock);
@@ -454,7 +455,7 @@
* In practice this won't execute very often unless on very fast
* machines because the time window for this to happen is very small.
*/
- for (;;) {
+ while (radeon_crtc->enabled && repcnt--) {
/* GET_DISTANCE_TO_VBLANKSTART returns distance to real vblank
* start in hpos, and to the "fudged earlier" vblank start in
* vpos.
@@ -472,10 +473,22 @@
/* Sleep at least until estimated real start of hw vblank */
spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
min_udelay = (-hpos + 1) * max(vblank->linedur_ns / 1000, 5);
+ if (min_udelay > vblank->framedur_ns / 2000) {
+ /* Don't wait ridiculously long - something is wrong */
+ repcnt = 0;
+ break;
+ }
usleep_range(min_udelay, 2 * min_udelay);
spin_lock_irqsave(&crtc->dev->event_lock, flags);
};
+ if (!repcnt)
+ DRM_DEBUG_DRIVER("Delay problem on crtc %d: min_udelay %d, "
+ "framedur %d, linedur %d, stat %d, vpos %d, "
+ "hpos %d\n", work->crtc_id, min_udelay,
+ vblank->framedur_ns / 1000,
+ vblank->linedur_ns / 1000, stat, vpos, hpos);
+
/* do the flip (mmio) */
radeon_page_flip(rdev, radeon_crtc->crtc_id, work->base);
diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
index 460c8f2..ca3be90 100644
--- a/drivers/gpu/drm/radeon/radeon_pm.c
+++ b/drivers/gpu/drm/radeon/radeon_pm.c
@@ -276,8 +276,12 @@
if (rdev->irq.installed) {
for (i = 0; i < rdev->num_crtc; i++) {
if (rdev->pm.active_crtcs & (1 << i)) {
- rdev->pm.req_vblank |= (1 << i);
- drm_vblank_get(rdev->ddev, i);
+ /* This can fail if a modeset is in progress */
+ if (drm_vblank_get(rdev->ddev, i) == 0)
+ rdev->pm.req_vblank |= (1 << i);
+ else
+ DRM_DEBUG_DRIVER("crtc %d no vblank, can glitch\n",
+ i);
}
}
}
@@ -1075,8 +1079,6 @@
/* update display watermarks based on new power state */
radeon_bandwidth_update(rdev);
- /* update displays */
- radeon_dpm_display_configuration_changed(rdev);
rdev->pm.dpm.current_active_crtcs = rdev->pm.dpm.new_active_crtcs;
rdev->pm.dpm.current_active_crtc_count = rdev->pm.dpm.new_active_crtc_count;
@@ -1097,6 +1099,9 @@
radeon_dpm_post_set_power_state(rdev);
+ /* update displays */
+ radeon_dpm_display_configuration_changed(rdev);
+
if (rdev->asic->dpm.force_performance_level) {
if (rdev->pm.dpm.thermal_active) {
enum radeon_dpm_forced_level level = rdev->pm.dpm.forced_level;
diff --git a/drivers/gpu/drm/radeon/radeon_sa.c b/drivers/gpu/drm/radeon/radeon_sa.c
index c507896..197b157 100644
--- a/drivers/gpu/drm/radeon/radeon_sa.c
+++ b/drivers/gpu/drm/radeon/radeon_sa.c
@@ -349,8 +349,13 @@
/* see if we can skip over some allocations */
} while (radeon_sa_bo_next_hole(sa_manager, fences, tries));
+ for (i = 0; i < RADEON_NUM_RINGS; ++i)
+ radeon_fence_ref(fences[i]);
+
spin_unlock(&sa_manager->wq.lock);
r = radeon_fence_wait_any(rdev, fences, false);
+ for (i = 0; i < RADEON_NUM_RINGS; ++i)
+ radeon_fence_unref(&fences[i]);
spin_lock(&sa_manager->wq.lock);
/* if we have nothing to wait for block */
if (r == -ENOENT) {
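
The radeon_sa hunk pins the fences before the spinlock is dropped, so radeon_fence_wait_any() cannot race against the fences being freed while it sleeps. The generic pattern, with a hypothetical kref-carrying struct obj and wait_on()/obj_release() helpers:

    static void wait_pinned(struct mgr *mgr, struct obj **objs, int n)
    {
            int i;

            spin_lock(&mgr->lock);
            for (i = 0; i < n; i++)
                    if (objs[i])
                            kref_get(&objs[i]->ref);        /* pin */
            spin_unlock(&mgr->lock);

            wait_on(objs, n);       /* may sleep; objects cannot vanish */

            for (i = 0; i < n; i++)
                    if (objs[i])
                            kref_put(&objs[i]->ref, obj_release);
    }
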
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index e343074..e06ac54 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -758,7 +758,7 @@
0, PAGE_SIZE,
PCI_DMA_BIDIRECTIONAL);
if (pci_dma_mapping_error(rdev->pdev, gtt->ttm.dma_address[i])) {
- while (--i) {
+ while (i--) {
pci_unmap_page(rdev->pdev, gtt->ttm.dma_address[i],
PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
gtt->ttm.dma_address[i] = 0;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 18dfe3e..22278bc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -215,7 +215,7 @@
struct drm_gem_cma_object *cma_obj;
if (size == 0)
- return NULL;
+ return ERR_PTR(-EINVAL);
/* First, try to get a vc4_bo from the kernel BO cache. */
if (from_cache) {
@@ -237,7 +237,7 @@
if (IS_ERR(cma_obj)) {
DRM_ERROR("Failed to allocate from CMA:\n");
vc4_bo_stats_dump(vc4);
- return NULL;
+ return ERR_PTR(-ENOMEM);
}
}
@@ -259,8 +259,8 @@
args->size = args->pitch * args->height;
bo = vc4_bo_create(dev, args->size, false);
- if (!bo)
- return -ENOMEM;
+ if (IS_ERR(bo))
+ return PTR_ERR(bo);
ret = drm_gem_handle_create(file_priv, &bo->base.base, &args->handle);
drm_gem_object_unreference_unlocked(&bo->base.base);
@@ -443,8 +443,8 @@
* get zeroed, and that might leak data between users.
*/
bo = vc4_bo_create(dev, args->size, false);
- if (!bo)
- return -ENOMEM;
+ if (IS_ERR(bo))
+ return PTR_ERR(bo);
ret = drm_gem_handle_create(file_priv, &bo->base.base, &args->handle);
drm_gem_object_unreference_unlocked(&bo->base.base);
@@ -496,8 +496,8 @@
}
bo = vc4_bo_create(dev, args->size, true);
- if (!bo)
- return -ENOMEM;
+ if (IS_ERR(bo))
+ return PTR_ERR(bo);
ret = copy_from_user(bo->base.vaddr,
(void __user *)(uintptr_t)args->data,
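
The vc4_bo conversions above move from NULL-returning allocators to the ERR_PTR idiom so callers can forward the real errno instead of a blanket -ENOMEM. A minimal sketch (struct widget and widget_create are illustrative):

    struct widget { int payload; };

    static struct widget *widget_create(size_t size)
    {
            struct widget *w;

            if (size == 0)
                    return ERR_PTR(-EINVAL);
            w = kzalloc(sizeof(*w), GFP_KERNEL);
            if (!w)
                    return ERR_PTR(-ENOMEM);
            return w;
    }

    /* Caller side: */
    w = widget_create(len);
    if (IS_ERR(w))
            return PTR_ERR(w);      /* propagates -EINVAL or -ENOMEM */
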
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index 080865e..51a6333 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -91,8 +91,12 @@
struct vc4_bo *overflow_mem;
struct work_struct overflow_mem_work;
+ int power_refcount;
+
+ /* Mutex controlling the power refcount. */
+ struct mutex power_lock;
+
struct {
- uint32_t last_ct0ca, last_ct1ca;
struct timer_list timer;
struct work_struct reset_work;
} hangcheck;
@@ -142,6 +146,7 @@
};
struct vc4_v3d {
+ struct vc4_dev *vc4;
struct platform_device *pdev;
void __iomem *regs;
};
@@ -192,6 +197,11 @@
/* Sequence number for this bin/render job. */
uint64_t seqno;
+ /* Last current addresses the hardware was processing when the
+ * hangcheck timer checked on us.
+ */
+ uint32_t last_ct0ca, last_ct1ca;
+
/* Kernel-space copy of the ioctl arguments */
struct drm_vc4_submit_cl *args;
@@ -434,7 +444,6 @@
extern struct platform_driver vc4_v3d_driver;
int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused);
int vc4_v3d_debugfs_regs(struct seq_file *m, void *unused);
-int vc4_v3d_set_power(struct vc4_dev *vc4, bool on);
/* vc4_validate.c */
int
diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c
index 48ce30a..202aa15 100644
--- a/drivers/gpu/drm/vc4/vc4_gem.c
+++ b/drivers/gpu/drm/vc4/vc4_gem.c
@@ -23,6 +23,7 @@
#include <linux/module.h>
#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
#include <linux/device.h>
#include <linux/io.h>
@@ -228,8 +229,16 @@
struct vc4_dev *vc4 = to_vc4_dev(dev);
DRM_INFO("Resetting GPU.\n");
- vc4_v3d_set_power(vc4, false);
- vc4_v3d_set_power(vc4, true);
+
+ mutex_lock(&vc4->power_lock);
+ if (vc4->power_refcount) {
+ /* Power the device off and back on by dropping and re-taking
+ * the runtime PM reference.
+ */
+ pm_runtime_put_sync_suspend(&vc4->v3d->pdev->dev);
+ pm_runtime_get_sync(&vc4->v3d->pdev->dev);
+ }
+ mutex_unlock(&vc4->power_lock);
vc4_irq_reset(dev);
@@ -257,10 +266,17 @@
struct drm_device *dev = (struct drm_device *)data;
struct vc4_dev *vc4 = to_vc4_dev(dev);
uint32_t ct0ca, ct1ca;
+ unsigned long irqflags;
+ struct vc4_exec_info *exec;
+
+ spin_lock_irqsave(&vc4->job_lock, irqflags);
+ exec = vc4_first_job(vc4);
/* If idle, we can stop watching for hangs. */
- if (list_empty(&vc4->job_list))
+ if (!exec) {
+ spin_unlock_irqrestore(&vc4->job_lock, irqflags);
return;
+ }
ct0ca = V3D_READ(V3D_CTNCA(0));
ct1ca = V3D_READ(V3D_CTNCA(1));
@@ -268,14 +284,16 @@
/* If we've made any progress in execution, rearm the timer
* and wait.
*/
- if (ct0ca != vc4->hangcheck.last_ct0ca ||
- ct1ca != vc4->hangcheck.last_ct1ca) {
- vc4->hangcheck.last_ct0ca = ct0ca;
- vc4->hangcheck.last_ct1ca = ct1ca;
+ if (ct0ca != exec->last_ct0ca || ct1ca != exec->last_ct1ca) {
+ exec->last_ct0ca = ct0ca;
+ exec->last_ct1ca = ct1ca;
+ spin_unlock_irqrestore(&vc4->job_lock, irqflags);
vc4_queue_hangcheck(dev);
return;
}
+ spin_unlock_irqrestore(&vc4->job_lock, irqflags);
+
/* We've gone too long with no progress, reset. This has to
* be done from a work struct, since resetting can sleep and
* this timer hook isn't allowed to.
@@ -340,12 +358,7 @@
finish_wait(&vc4->job_wait_queue, &wait);
trace_vc4_wait_for_seqno_end(dev, seqno);
- if (ret && ret != -ERESTARTSYS) {
- DRM_ERROR("timeout waiting for render thread idle\n");
- return ret;
- }
-
- return 0;
+ return ret;
}
static void
@@ -578,9 +591,9 @@
}
bo = vc4_bo_create(dev, exec_size, true);
- if (!bo) {
+ if (IS_ERR(bo)) {
DRM_ERROR("Couldn't allocate BO for binning\n");
- ret = -ENOMEM;
+ ret = PTR_ERR(bo);
goto fail;
}
exec->exec_bo = &bo->base;
@@ -617,6 +630,7 @@
static void
vc4_complete_exec(struct drm_device *dev, struct vc4_exec_info *exec)
{
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
unsigned i;
/* Need the struct lock for drm_gem_object_unreference(). */
@@ -635,6 +649,11 @@
}
mutex_unlock(&dev->struct_mutex);
+ mutex_lock(&vc4->power_lock);
+ if (--vc4->power_refcount == 0)
+ pm_runtime_put(&vc4->v3d->pdev->dev);
+ mutex_unlock(&vc4->power_lock);
+
kfree(exec);
}
@@ -746,6 +765,9 @@
struct drm_gem_object *gem_obj;
struct vc4_bo *bo;
+ if (args->pad != 0)
+ return -EINVAL;
+
gem_obj = drm_gem_object_lookup(dev, file_priv, args->handle);
if (!gem_obj) {
DRM_ERROR("Failed to look up GEM BO %d\n", args->handle);
@@ -772,7 +794,7 @@
struct vc4_dev *vc4 = to_vc4_dev(dev);
struct drm_vc4_submit_cl *args = data;
struct vc4_exec_info *exec;
- int ret;
+ int ret = 0;
if ((args->flags & ~VC4_SUBMIT_CL_USE_CLEAR_COLOR) != 0) {
DRM_ERROR("Unknown flags: 0x%02x\n", args->flags);
@@ -785,6 +807,15 @@
return -ENOMEM;
}
+ mutex_lock(&vc4->power_lock);
+ if (vc4->power_refcount++ == 0)
+ ret = pm_runtime_get_sync(&vc4->v3d->pdev->dev);
+ mutex_unlock(&vc4->power_lock);
+ if (ret < 0) {
+ kfree(exec);
+ return ret;
+ }
+
exec->args = args;
INIT_LIST_HEAD(&exec->unref_list);
@@ -839,6 +870,8 @@
(unsigned long)dev);
INIT_WORK(&vc4->job_done_work, vc4_job_done_work);
+
+ mutex_init(&vc4->power_lock);
}
void
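
The power_refcount/power_lock pair added above implements a first-on, last-off power policy: the first submitted job takes the runtime PM reference, the last completed one drops it. In isolation the pattern looks like this (struct gpu is a stand-in for the vc4 fields):

    struct gpu {
            struct mutex power_lock;
            int power_refcount;     /* protected by power_lock */
            struct device *dev;
    };

    static int gpu_power_get(struct gpu *g)
    {
            int ret = 0;

            mutex_lock(&g->power_lock);
            if (g->power_refcount++ == 0)
                    ret = pm_runtime_get_sync(g->dev); /* first user: power up */
            mutex_unlock(&g->power_lock);
            return ret;
    }

    static void gpu_power_put(struct gpu *g)
    {
            mutex_lock(&g->power_lock);
            if (--g->power_refcount == 0)
                    pm_runtime_put(g->dev);     /* last user: allow suspend */
            mutex_unlock(&g->power_lock);
    }
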
diff --git a/drivers/gpu/drm/vc4/vc4_irq.c b/drivers/gpu/drm/vc4/vc4_irq.c
index b68060e..78a2135 100644
--- a/drivers/gpu/drm/vc4/vc4_irq.c
+++ b/drivers/gpu/drm/vc4/vc4_irq.c
@@ -57,7 +57,7 @@
struct vc4_bo *bo;
bo = vc4_bo_create(dev, 256 * 1024, true);
- if (!bo) {
+ if (IS_ERR(bo)) {
DRM_ERROR("Couldn't allocate binner overflow mem\n");
return;
}
diff --git a/drivers/gpu/drm/vc4/vc4_render_cl.c b/drivers/gpu/drm/vc4/vc4_render_cl.c
index 8a2a312..0f12418 100644
--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
+++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
@@ -316,20 +316,11 @@
size += xtiles * ytiles * loop_body_size;
setup->rcl = &vc4_bo_create(dev, size, true)->base;
- if (!setup->rcl)
- return -ENOMEM;
+ if (IS_ERR(setup->rcl))
+ return PTR_ERR(setup->rcl);
list_add_tail(&to_vc4_bo(&setup->rcl->base)->unref_head,
&exec->unref_list);
- rcl_u8(setup, VC4_PACKET_TILE_RENDERING_MODE_CONFIG);
- rcl_u32(setup,
- (setup->color_write ? (setup->color_write->paddr +
- args->color_write.offset) :
- 0));
- rcl_u16(setup, args->width);
- rcl_u16(setup, args->height);
- rcl_u16(setup, args->color_write.bits);
-
/* The tile buffer gets cleared when the previous tile is stored. If
* the clear values changed between frames, then the tile buffer has
* stale clear values in it, so we have to do a store in None mode (no
@@ -349,6 +340,15 @@
rcl_u32(setup, 0); /* no address, since we're in None mode */
}
+ rcl_u8(setup, VC4_PACKET_TILE_RENDERING_MODE_CONFIG);
+ rcl_u32(setup,
+ (setup->color_write ? (setup->color_write->paddr +
+ args->color_write.offset) :
+ 0));
+ rcl_u16(setup, args->width);
+ rcl_u16(setup, args->height);
+ rcl_u16(setup, args->color_write.bits);
+
for (y = min_y_tile; y <= max_y_tile; y++) {
for (x = min_x_tile; x <= max_x_tile; x++) {
bool first = (x == min_x_tile && y == min_y_tile);
diff --git a/drivers/gpu/drm/vc4/vc4_v3d.c b/drivers/gpu/drm/vc4/vc4_v3d.c
index 314ff71..31de5d1 100644
--- a/drivers/gpu/drm/vc4/vc4_v3d.c
+++ b/drivers/gpu/drm/vc4/vc4_v3d.c
@@ -17,6 +17,7 @@
*/
#include "linux/component.h"
+#include "linux/pm_runtime.h"
#include "vc4_drv.h"
#include "vc4_regs.h"
@@ -144,18 +145,6 @@
}
#endif /* CONFIG_DEBUG_FS */
-int
-vc4_v3d_set_power(struct vc4_dev *vc4, bool on)
-{
- /* XXX: This interface is needed for GPU reset, and the way to
- * do it is to turn our power domain off and back on. We
- * can't just reset from within the driver, because the reset
- * bits are in the power domain's register area, and get set
- * during the poweron process.
- */
- return 0;
-}
-
static void vc4_v3d_init_hw(struct drm_device *dev)
{
struct vc4_dev *vc4 = to_vc4_dev(dev);
@@ -167,6 +156,29 @@
V3D_WRITE(V3D_VPMBASE, 0);
}
+#ifdef CONFIG_PM
+static int vc4_v3d_runtime_suspend(struct device *dev)
+{
+ struct vc4_v3d *v3d = dev_get_drvdata(dev);
+ struct vc4_dev *vc4 = v3d->vc4;
+
+ vc4_irq_uninstall(vc4->dev);
+
+ return 0;
+}
+
+static int vc4_v3d_runtime_resume(struct device *dev)
+{
+ struct vc4_v3d *v3d = dev_get_drvdata(dev);
+ struct vc4_dev *vc4 = v3d->vc4;
+
+ vc4_v3d_init_hw(vc4->dev);
+ vc4_irq_postinstall(vc4->dev);
+
+ return 0;
+}
+#endif
+
static int vc4_v3d_bind(struct device *dev, struct device *master, void *data)
{
struct platform_device *pdev = to_platform_device(dev);
@@ -179,6 +191,8 @@
if (!v3d)
return -ENOMEM;
+ dev_set_drvdata(dev, v3d);
+
v3d->pdev = pdev;
v3d->regs = vc4_ioremap_regs(pdev, 0);
@@ -186,6 +200,7 @@
return PTR_ERR(v3d->regs);
vc4->v3d = v3d;
+ v3d->vc4 = vc4;
if (V3D_READ(V3D_IDENT0) != V3D_EXPECTED_IDENT0) {
DRM_ERROR("V3D_IDENT0 read 0x%08x instead of 0x%08x\n",
@@ -207,6 +222,8 @@
return ret;
}
+ pm_runtime_enable(dev);
+
return 0;
}
@@ -216,6 +233,8 @@
struct drm_device *drm = dev_get_drvdata(master);
struct vc4_dev *vc4 = to_vc4_dev(drm);
+ pm_runtime_disable(dev);
+
drm_irq_uninstall(drm);
/* Disable the binner's overflow memory address, so the next
@@ -228,6 +247,10 @@
vc4->v3d = NULL;
}
+static const struct dev_pm_ops vc4_v3d_pm_ops = {
+ SET_RUNTIME_PM_OPS(vc4_v3d_runtime_suspend, vc4_v3d_runtime_resume, NULL)
+};
+
static const struct component_ops vc4_v3d_ops = {
.bind = vc4_v3d_bind,
.unbind = vc4_v3d_unbind,
@@ -255,5 +278,6 @@
.driver = {
.name = "vc4_v3d",
.of_match_table = vc4_v3d_dt_match,
+ .pm = &vc4_v3d_pm_ops,
},
};
diff --git a/drivers/gpu/drm/vc4/vc4_validate.c b/drivers/gpu/drm/vc4/vc4_validate.c
index e26d9f6..24c2c74 100644
--- a/drivers/gpu/drm/vc4/vc4_validate.c
+++ b/drivers/gpu/drm/vc4/vc4_validate.c
@@ -401,8 +401,8 @@
tile_bo = vc4_bo_create(dev, exec->tile_alloc_offset + tile_alloc_size,
true);
exec->tile_bo = &tile_bo->base;
- if (!exec->tile_bo)
- return -ENOMEM;
+ if (IS_ERR(exec->tile_bo))
+ return PTR_ERR(exec->tile_bo);
list_add_tail(&tile_bo->unref_head, &exec->unref_list);
/* tile alloc address. */
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 1161d68..56dd261 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -219,6 +219,21 @@
}
EXPORT_SYMBOL_GPL(vmbus_open);
+/* Used for Hyper-V Socket: a guest client's connect() to the host */
+int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id,
+ const uuid_le *shv_host_servie_id)
+{
+ struct vmbus_channel_tl_connect_request conn_msg;
+
+ memset(&conn_msg, 0, sizeof(conn_msg));
+ conn_msg.header.msgtype = CHANNELMSG_TL_CONNECT_REQUEST;
+ conn_msg.guest_endpoint_id = *shv_guest_servie_id;
+ conn_msg.host_service_id = *shv_host_servie_id;
+
+ return vmbus_post_msg(&conn_msg, sizeof(conn_msg));
+}
+EXPORT_SYMBOL_GPL(vmbus_send_tl_connect_request);
+
/*
* create_gpadl_header - Creates a gpadl for the specified buffer
*/
@@ -624,6 +639,7 @@
u64 aligned_data = 0;
int ret;
bool signal = false;
+ bool lock = channel->acquire_ring_lock;
int num_vecs = ((bufferlen != 0) ? 3 : 1);
@@ -643,7 +659,7 @@
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, num_vecs,
- &signal);
+ &signal, lock);
/*
* Signalling the host is conditional on many factors:
@@ -659,6 +675,9 @@
* If we cannot write to the ring-buffer; signal the host
* even if we may not have written anything. This is a rare
* enough condition that it should not matter.
+ * NOTE: in this case, the hvsock channel is an exception, because
+ * it appears the host side's hvsock implementation has a throttling
+ * mechanism which would otherwise hurt performance.
*/
if (channel->signal_policy)
@@ -666,7 +685,8 @@
else
kick_q = true;
- if (((ret == 0) && kick_q && signal) || (ret))
+ if (((ret == 0) && kick_q && signal) ||
+ (ret && !is_hvsock_channel(channel)))
vmbus_setevent(channel);
return ret;
@@ -719,6 +739,7 @@
struct kvec bufferlist[3];
u64 aligned_data = 0;
bool signal = false;
+ bool lock = channel->acquire_ring_lock;
if (pagecount > MAX_PAGE_BUFFER_COUNT)
return -EINVAL;
@@ -755,7 +776,8 @@
bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
- ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
+ ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
+ &signal, lock);
/*
* Signalling the host is conditional on many factors:
@@ -818,6 +840,7 @@
struct kvec bufferlist[3];
u64 aligned_data = 0;
bool signal = false;
+ bool lock = channel->acquire_ring_lock;
packetlen = desc_size + bufferlen;
packetlen_aligned = ALIGN(packetlen, sizeof(u64));
@@ -837,7 +860,8 @@
bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
- ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
+ ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
+ &signal, lock);
if (ret == 0 && signal)
vmbus_setevent(channel);
@@ -862,6 +886,7 @@
struct kvec bufferlist[3];
u64 aligned_data = 0;
bool signal = false;
+ bool lock = channel->acquire_ring_lock;
u32 pfncount = NUM_PAGES_SPANNED(multi_pagebuffer->offset,
multi_pagebuffer->len);
@@ -900,7 +925,8 @@
bufferlist[2].iov_base = &aligned_data;
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
- ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
+ ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
+ &signal, lock);
if (ret == 0 && signal)
vmbus_setevent(channel);
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 1c1ad47..b40f429 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -28,12 +28,127 @@
#include <linux/list.h>
#include <linux/module.h>
#include <linux/completion.h>
+#include <linux/delay.h>
#include <linux/hyperv.h>
#include "hyperv_vmbus.h"
-static void init_vp_index(struct vmbus_channel *channel,
- const uuid_le *type_guid);
+static void init_vp_index(struct vmbus_channel *channel, u16 dev_type);
+
+static const struct vmbus_device vmbus_devs[] = {
+ /* IDE */
+ { .dev_type = HV_IDE,
+ HV_IDE_GUID,
+ .perf_device = true,
+ },
+
+ /* SCSI */
+ { .dev_type = HV_SCSI,
+ HV_SCSI_GUID,
+ .perf_device = true,
+ },
+
+ /* Fibre Channel */
+ { .dev_type = HV_FC,
+ HV_SYNTHFC_GUID,
+ .perf_device = true,
+ },
+
+ /* Synthetic NIC */
+ { .dev_type = HV_NIC,
+ HV_NIC_GUID,
+ .perf_device = true,
+ },
+
+ /* Network Direct */
+ { .dev_type = HV_ND,
+ HV_ND_GUID,
+ .perf_device = true,
+ },
+
+ /* PCIE */
+ { .dev_type = HV_PCIE,
+ HV_PCIE_GUID,
+ .perf_device = true,
+ },
+
+ /* Synthetic Frame Buffer */
+ { .dev_type = HV_FB,
+ HV_SYNTHVID_GUID,
+ .perf_device = false,
+ },
+
+ /* Synthetic Keyboard */
+ { .dev_type = HV_KBD,
+ HV_KBD_GUID,
+ .perf_device = false,
+ },
+
+ /* Synthetic MOUSE */
+ { .dev_type = HV_MOUSE,
+ HV_MOUSE_GUID,
+ .perf_device = false,
+ },
+
+ /* KVP */
+ { .dev_type = HV_KVP,
+ HV_KVP_GUID,
+ .perf_device = false,
+ },
+
+ /* Time Synch */
+ { .dev_type = HV_TS,
+ HV_TS_GUID,
+ .perf_device = false,
+ },
+
+ /* Heartbeat */
+ { .dev_type = HV_HB,
+ HV_HEART_BEAT_GUID,
+ .perf_device = false,
+ },
+
+ /* Shutdown */
+ { .dev_type = HV_SHUTDOWN,
+ HV_SHUTDOWN_GUID,
+ .perf_device = false,
+ },
+
+ /* File copy */
+ { .dev_type = HV_FCOPY,
+ HV_FCOPY_GUID,
+ .perf_device = false,
+ },
+
+ /* Backup */
+ { .dev_type = HV_BACKUP,
+ HV_VSS_GUID,
+ .perf_device = false,
+ },
+
+ /* Dynamic Memory */
+ { .dev_type = HV_DM,
+ HV_DM_GUID,
+ .perf_device = false,
+ },
+
+ /* Unknown GUID */
+ { .dev_type = HV_UNKOWN,
+ .perf_device = false,
+ },
+};
+
+static u16 hv_get_dev_type(const uuid_le *guid)
+{
+ u16 i;
+
+ for (i = HV_IDE; i < HV_UNKOWN; i++) {
+ if (!uuid_le_cmp(*guid, vmbus_devs[i].guid))
+ return i;
+ }
+ pr_info("Unknown GUID: %pUl\n", guid);
+ return i;
+}
/**
* vmbus_prep_negotiate_resp() - Create default response for Hyper-V Negotiate message
@@ -144,6 +259,7 @@
return NULL;
channel->id = atomic_inc_return(&chan_num);
+ channel->acquire_ring_lock = true;
spin_lock_init(&channel->inbound_lock);
spin_lock_init(&channel->lock);
@@ -195,6 +311,7 @@
vmbus_release_relid(relid);
BUG_ON(!channel->rescind);
+ BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex));
if (channel->target_cpu != get_cpu()) {
put_cpu();
@@ -206,9 +323,7 @@
}
if (channel->primary_channel == NULL) {
- mutex_lock(&vmbus_connection.channel_mutex);
list_del(&channel->listentry);
- mutex_unlock(&vmbus_connection.channel_mutex);
primary_channel = channel;
} else {
@@ -251,6 +366,8 @@
struct vmbus_channel *channel;
bool fnew = true;
unsigned long flags;
+ u16 dev_type;
+ int ret;
/* Make sure this is a new offer */
mutex_lock(&vmbus_connection.channel_mutex);
@@ -288,7 +405,9 @@
goto err_free_chan;
}
- init_vp_index(newchannel, &newchannel->offermsg.offer.if_type);
+ dev_type = hv_get_dev_type(&newchannel->offermsg.offer.if_type);
+
+ init_vp_index(newchannel, dev_type);
if (newchannel->target_cpu != get_cpu()) {
put_cpu();
@@ -325,12 +444,17 @@
if (!newchannel->device_obj)
goto err_deq_chan;
+ newchannel->device_obj->device_id = dev_type;
/*
* Add the new device to the bus. This will kick off device-driver
* binding which eventually invokes the device driver's AddDevice()
* method.
*/
- if (vmbus_device_register(newchannel->device_obj) != 0) {
+ mutex_lock(&vmbus_connection.channel_mutex);
+ ret = vmbus_device_register(newchannel->device_obj);
+ mutex_unlock(&vmbus_connection.channel_mutex);
+
+ if (ret != 0) {
pr_err("unable to add child device object (relid %d)\n",
newchannel->offermsg.child_relid);
kfree(newchannel->device_obj);
@@ -358,37 +482,6 @@
free_channel(newchannel);
}
-enum {
- IDE = 0,
- SCSI,
- FC,
- NIC,
- ND_NIC,
- PCIE,
- MAX_PERF_CHN,
-};
-
-/*
- * This is an array of device_ids (device types) that are performance critical.
- * We attempt to distribute the interrupt load for these devices across
- * all available CPUs.
- */
-static const struct hv_vmbus_device_id hp_devs[] = {
- /* IDE */
- { HV_IDE_GUID, },
- /* Storage - SCSI */
- { HV_SCSI_GUID, },
- /* Storage - FC */
- { HV_SYNTHFC_GUID, },
- /* Network */
- { HV_NIC_GUID, },
- /* NetworkDirect Guest RDMA */
- { HV_ND_GUID, },
- /* PCI Express Pass Through */
- { HV_PCIE_GUID, },
-};
-
-
/*
* We use this state to statically distribute the channel interrupt load.
*/
@@ -405,22 +498,15 @@
* For pre-win8 hosts or non-performance critical channels we assign the
* first CPU in the first NUMA node.
*/
-static void init_vp_index(struct vmbus_channel *channel, const uuid_le *type_guid)
+static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
{
u32 cur_cpu;
- int i;
- bool perf_chn = false;
+ bool perf_chn = vmbus_devs[dev_type].perf_device;
struct vmbus_channel *primary = channel->primary_channel;
int next_node;
struct cpumask available_mask;
struct cpumask *alloced_mask;
- for (i = IDE; i < MAX_PERF_CHN; i++) {
- if (!uuid_le_cmp(*type_guid, hp_devs[i].guid)) {
- perf_chn = true;
- break;
- }
- }
if ((vmbus_proto_version == VERSION_WS2008) ||
(vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
/*
@@ -469,6 +555,17 @@
cpumask_of_node(primary->numa_node));
cur_cpu = -1;
+
+ /*
+ * Normally the Hyper-V host doesn't create more subchannels than there
+ * are VCPUs on the node, but it is possible when not all present VCPUs
+ * on the node are initialized by the guest. Clear alloced_cpus_in_node
+ * to start over.
+ */
+ if (cpumask_equal(&primary->alloced_cpus_in_node,
+ cpumask_of_node(primary->numa_node)))
+ cpumask_clear(&primary->alloced_cpus_in_node);
+
while (true) {
cur_cpu = cpumask_next(cur_cpu, &available_mask);
if (cur_cpu >= nr_cpu_ids) {
@@ -498,6 +595,40 @@
channel->target_vp = hv_context.vp_index[cur_cpu];
}
+static void vmbus_wait_for_unload(void)
+{
+ int cpu = smp_processor_id();
+ void *page_addr = hv_context.synic_message_page[cpu];
+ struct hv_message *msg = (struct hv_message *)page_addr +
+ VMBUS_MESSAGE_SINT;
+ struct vmbus_channel_message_header *hdr;
+ bool unloaded = false;
+
+ while (1) {
+ if (msg->header.message_type == HVMSG_NONE) {
+ mdelay(10);
+ continue;
+ }
+
+ hdr = (struct vmbus_channel_message_header *)msg->u.payload;
+ if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE)
+ unloaded = true;
+
+ msg->header.message_type = HVMSG_NONE;
+ /*
+ * header.message_type needs to be written before we do
+ * wrmsrl() below.
+ */
+ mb();
+
+ if (msg->header.message_flags.msg_pending)
+ wrmsrl(HV_X64_MSR_EOM, 0);
+
+ if (unloaded)
+ break;
+ }
+}
+
/*
* vmbus_unload_response - Handler for the unload response.
*/
@@ -523,7 +654,14 @@
hdr.msgtype = CHANNELMSG_UNLOAD;
vmbus_post_msg(&hdr, sizeof(struct vmbus_channel_message_header));
- wait_for_completion(&vmbus_connection.unload_event);
+ /*
+ * vmbus_initiate_unload() is also called on crash, and the crash can
+ * happen in interrupt context, where scheduling is impossible.
+ */
+ if (!in_interrupt())
+ wait_for_completion(&vmbus_connection.unload_event);
+ else
+ vmbus_wait_for_unload();
}
/*
@@ -592,6 +730,8 @@
struct device *dev;
rescind = (struct vmbus_channel_rescind_offer *)hdr;
+
+ mutex_lock(&vmbus_connection.channel_mutex);
channel = relid2channel(rescind->child_relid);
if (channel == NULL) {
@@ -600,7 +740,7 @@
* vmbus_process_offer(), we have already invoked
* vmbus_release_relid() on error.
*/
- return;
+ goto out;
}
spin_lock_irqsave(&channel->lock, flags);
@@ -608,6 +748,10 @@
spin_unlock_irqrestore(&channel->lock, flags);
if (channel->device_obj) {
+ if (channel->chn_rescind_callback) {
+ channel->chn_rescind_callback(channel);
+ goto out;
+ }
/*
* We will have to unregister this device from the
* driver core.
@@ -621,8 +765,25 @@
hv_process_channel_removal(channel,
channel->offermsg.child_relid);
}
+
+out:
+ mutex_unlock(&vmbus_connection.channel_mutex);
}
+void vmbus_hvsock_device_unregister(struct vmbus_channel *channel)
+{
+ mutex_lock(&vmbus_connection.channel_mutex);
+
+ BUG_ON(!is_hvsock_channel(channel));
+
+ channel->rescind = true;
+ vmbus_device_unregister(channel->device_obj);
+
+ mutex_unlock(&vmbus_connection.channel_mutex);
+}
+EXPORT_SYMBOL_GPL(vmbus_hvsock_device_unregister);
+
+
/*
* vmbus_onoffers_delivered -
* This is invoked when all offers have been delivered.
@@ -825,6 +986,10 @@
{CHANNELMSG_VERSION_RESPONSE, 1, vmbus_onversion_response},
{CHANNELMSG_UNLOAD, 0, NULL},
{CHANNELMSG_UNLOAD_RESPONSE, 1, vmbus_unload_response},
+ {CHANNELMSG_18, 0, NULL},
+ {CHANNELMSG_19, 0, NULL},
+ {CHANNELMSG_20, 0, NULL},
+ {CHANNELMSG_TL_CONNECT_REQUEST, 0, NULL},
};
/*
@@ -973,3 +1138,10 @@
return ret;
}
EXPORT_SYMBOL_GPL(vmbus_are_subchannels_present);
+
+void vmbus_set_chn_rescind_callback(struct vmbus_channel *channel,
+ void (*chn_rescind_cb)(struct vmbus_channel *))
+{
+ channel->chn_rescind_callback = chn_rescind_cb;
+}
+EXPORT_SYMBOL_GPL(vmbus_set_chn_rescind_callback);
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index 3dc5a9c..fa86b2c 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -288,7 +288,8 @@
struct list_head *cur, *tmp;
struct vmbus_channel *cur_sc;
- mutex_lock(&vmbus_connection.channel_mutex);
+ BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex));
+
list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
if (channel->offermsg.child_relid == relid) {
found_channel = channel;
@@ -307,7 +308,6 @@
}
}
}
- mutex_unlock(&vmbus_connection.channel_mutex);
return found_channel;
}
@@ -474,7 +474,7 @@
/*
* vmbus_set_event - Send an event notification to the parent
*/
-int vmbus_set_event(struct vmbus_channel *channel)
+void vmbus_set_event(struct vmbus_channel *channel)
{
u32 child_relid = channel->offermsg.child_relid;
@@ -485,5 +485,5 @@
(child_relid >> 5));
}
- return hv_signal_event(channel->sig_event);
+ hv_do_hypercall(HVCALL_SIGNAL_EVENT, channel->sig_event, NULL);
}
diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index 11bca51..ccb335f 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -295,8 +295,14 @@
* Cleanup the TSC page based CS.
*/
if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) {
- clocksource_change_rating(&hyperv_cs_tsc, 10);
- clocksource_unregister(&hyperv_cs_tsc);
+ /*
+ * A crash can happen in interrupt context, where unregistering
+ * a clocksource is impossible and redundant anyway.
+ */
+ if (!oops_in_progress) {
+ clocksource_change_rating(&hyperv_cs_tsc, 10);
+ clocksource_unregister(&hyperv_cs_tsc);
+ }
hypercall_msr.as_uint64 = 0;
wrmsrl(HV_X64_MSR_REFERENCE_TSC, hypercall_msr.as_uint64);
@@ -337,22 +343,6 @@
return status & 0xFFFF;
}
-
-/*
- * hv_signal_event -
- * Signal an event on the specified connection using the hypervisor event IPC.
- *
- * This involves a hypercall.
- */
-int hv_signal_event(void *con_id)
-{
- u64 status;
-
- status = hv_do_hypercall(HVCALL_SIGNAL_EVENT, con_id, NULL);
-
- return status & 0xFFFF;
-}
-
static int hv_ce_set_next_event(unsigned long delta,
struct clock_event_device *evt)
{
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 4ebc796..b9ea7f5 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -501,8 +501,6 @@
enum hv_message_type message_type,
void *payload, size_t payload_size);
-extern int hv_signal_event(void *con_id);
-
extern int hv_synic_alloc(void);
extern void hv_synic_free(void);
@@ -531,7 +529,7 @@
int hv_ringbuffer_write(struct hv_ring_buffer_info *ring_info,
struct kvec *kv_list,
- u32 kv_count, bool *signal);
+ u32 kv_count, bool *signal, bool lock);
int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
void *buffer, u32 buflen, u32 *buffer_actual_len,
@@ -650,7 +648,7 @@
int vmbus_post_msg(void *buffer, size_t buflen);
-int vmbus_set_event(struct vmbus_channel *channel);
+void vmbus_set_event(struct vmbus_channel *channel);
void vmbus_on_event(unsigned long data);
diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index b53702c..5613e2b 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -314,7 +314,7 @@
/* Write to the ring buffer. */
int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
- struct kvec *kv_list, u32 kv_count, bool *signal)
+ struct kvec *kv_list, u32 kv_count, bool *signal, bool lock)
{
int i = 0;
u32 bytes_avail_towrite;
@@ -324,14 +324,15 @@
u32 next_write_location;
u32 old_write;
u64 prev_indices = 0;
- unsigned long flags;
+ unsigned long flags = 0;
for (i = 0; i < kv_count; i++)
totalbytes_towrite += kv_list[i].iov_len;
totalbytes_towrite += sizeof(u64);
- spin_lock_irqsave(&outring_info->ring_lock, flags);
+ if (lock)
+ spin_lock_irqsave(&outring_info->ring_lock, flags);
hv_get_ringbuffer_availbytes(outring_info,
&bytes_avail_toread,
@@ -343,7 +344,8 @@
* is empty since the read index == write index.
*/
if (bytes_avail_towrite <= totalbytes_towrite) {
- spin_unlock_irqrestore(&outring_info->ring_lock, flags);
+ if (lock)
+ spin_unlock_irqrestore(&outring_info->ring_lock, flags);
return -EAGAIN;
}
@@ -374,7 +376,8 @@
hv_set_next_write_location(outring_info, next_write_location);
- spin_unlock_irqrestore(&outring_info->ring_lock, flags);
+ if (lock)
+ spin_unlock_irqrestore(&outring_info->ring_lock, flags);
*signal = hv_need_to_signal(old_write, outring_info);
return 0;
@@ -388,7 +391,6 @@
u32 bytes_avail_toread;
u32 next_read_location = 0;
u64 prev_indices = 0;
- unsigned long flags;
struct vmpacket_descriptor desc;
u32 offset;
u32 packetlen;
@@ -397,7 +399,6 @@
if (buflen <= 0)
return -EINVAL;
- spin_lock_irqsave(&inring_info->ring_lock, flags);
*buffer_actual_len = 0;
*requestid = 0;
@@ -412,7 +413,7 @@
* No error is set when there is even no header, drivers are
* supposed to analyze buffer_actual_len.
*/
- goto out_unlock;
+ return ret;
}
next_read_location = hv_get_next_read_location(inring_info);
@@ -425,15 +426,11 @@
*buffer_actual_len = packetlen;
*requestid = desc.trans_id;
- if (bytes_avail_toread < packetlen + offset) {
- ret = -EAGAIN;
- goto out_unlock;
- }
+ if (bytes_avail_toread < packetlen + offset)
+ return -EAGAIN;
- if (packetlen > buflen) {
- ret = -ENOBUFS;
- goto out_unlock;
- }
+ if (packetlen > buflen)
+ return -ENOBUFS;
next_read_location =
hv_get_next_readlocation_withoffset(inring_info, offset);
@@ -460,7 +457,5 @@
*signal = hv_need_to_signal_on_read(bytes_avail_towrite, inring_info);
-out_unlock:
- spin_unlock_irqrestore(&inring_info->ring_lock, flags);
return ret;
}
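
The new 'lock' parameter makes the write-side spinlock optional for channels that have a single writer (hvsock), while the read side drops its lock entirely because there is only ever one consumer. Note that flags must be initialised, since spin_unlock_irqrestore() is skipped on the lockless path. In outline (struct ring is a hypothetical stand-in):

    struct ring {
            spinlock_t lock;
            /* ... ring state ... */
    };

    static int ring_write(struct ring *r, const void *data, size_t len,
                          bool lock)
    {
            unsigned long flags = 0;

            if (lock)
                    spin_lock_irqsave(&r->lock, flags);
            /* ... copy into the ring buffer ... */
            if (lock)
                    spin_unlock_irqrestore(&r->lock, flags);
            return 0;
    }
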
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 328e4c3..063e5f5 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -477,6 +477,24 @@
}
static DEVICE_ATTR_RO(channel_vp_mapping);
+static ssize_t vendor_show(struct device *dev,
+ struct device_attribute *dev_attr,
+ char *buf)
+{
+ struct hv_device *hv_dev = device_to_hv_device(dev);
+ return sprintf(buf, "0x%x\n", hv_dev->vendor_id);
+}
+static DEVICE_ATTR_RO(vendor);
+
+static ssize_t device_show(struct device *dev,
+ struct device_attribute *dev_attr,
+ char *buf)
+{
+ struct hv_device *hv_dev = device_to_hv_device(dev);
+ return sprintf(buf, "0x%x\n", hv_dev->device_id);
+}
+static DEVICE_ATTR_RO(device);
+
/* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */
static struct attribute *vmbus_attrs[] = {
&dev_attr_id.attr,
@@ -502,6 +520,8 @@
&dev_attr_in_read_bytes_avail.attr,
&dev_attr_in_write_bytes_avail.attr,
&dev_attr_channel_vp_mapping.attr,
+ &dev_attr_vendor.attr,
+ &dev_attr_device.attr,
NULL,
};
ATTRIBUTE_GROUPS(vmbus);
@@ -562,6 +582,10 @@
struct hv_driver *drv = drv_to_hv_drv(driver);
struct hv_device *hv_dev = device_to_hv_device(device);
+ /* The hv_sock driver handles all hv_sock offers. */
+ if (is_hvsock_channel(hv_dev->channel))
+ return drv->hvsock;
+
if (hv_vmbus_get_id(drv->id_table, &hv_dev->dev_type))
return 1;
@@ -957,6 +981,7 @@
memcpy(&child_device_obj->dev_type, type, sizeof(uuid_le));
memcpy(&child_device_obj->dev_instance, instance,
sizeof(uuid_le));
+ child_device_obj->vendor_id = 0x1414; /* MSFT vendor ID */
return child_device_obj;
diff --git a/drivers/hwmon/ads1015.c b/drivers/hwmon/ads1015.c
index f155b83..2b3105c 100644
--- a/drivers/hwmon/ads1015.c
+++ b/drivers/hwmon/ads1015.c
@@ -126,7 +126,7 @@
struct ads1015_data *data = i2c_get_clientdata(client);
unsigned int pga = data->channel_data[channel].pga;
int fullscale = fullscale_table[pga];
- const unsigned mask = data->id == ads1115 ? 0x7fff : 0x7ff0;
+ const int mask = data->id == ads1115 ? 0x7fff : 0x7ff0;
return DIV_ROUND_CLOSEST(reg * fullscale, mask);
}
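
The one-character ads1015 change above is about C's usual arithmetic conversions: with an unsigned divisor, a negative dividend is converted to unsigned before the division, so negative ADC codes produced wildly wrong results. A standalone demonstration:

    #include <stdio.h>

    int main(void)
    {
            int reg = -100;                 /* negative ADC reading */
            const unsigned umask = 0x7fff;  /* old divisor type */
            const int smask = 0x7fff;       /* new divisor type */

            printf("unsigned: %u\n", reg / umask); /* huge bogus value */
            printf("signed:   %d\n", reg / smask); /* 0, as expected */
            return 0;
    }
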
diff --git a/drivers/hwmon/gpio-fan.c b/drivers/hwmon/gpio-fan.c
index 82de3de..685568b 100644
--- a/drivers/hwmon/gpio-fan.c
+++ b/drivers/hwmon/gpio-fan.c
@@ -406,16 +406,11 @@
unsigned long *state)
{
struct gpio_fan_data *fan_data = cdev->devdata;
- int r;
if (!fan_data)
return -EINVAL;
- r = get_fan_speed_index(fan_data);
- if (r < 0)
- return r;
-
- *state = r;
+ *state = fan_data->speed_index;
return 0;
}
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
index c85935f..db05410 100644
--- a/drivers/hwtracing/coresight/Kconfig
+++ b/drivers/hwtracing/coresight/Kconfig
@@ -4,6 +4,7 @@
menuconfig CORESIGHT
bool "CoreSight Tracing Support"
select ARM_AMBA
+ select PERF_EVENTS
help
This framework provides a kernel interface for the CoreSight debug
and trace drivers to register themselves with. It's intended to build
diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
index 99f8e5f6..cf8c6d6 100644
--- a/drivers/hwtracing/coresight/Makefile
+++ b/drivers/hwtracing/coresight/Makefile
@@ -8,6 +8,8 @@
obj-$(CONFIG_CORESIGHT_SINK_ETBV10) += coresight-etb10.o
obj-$(CONFIG_CORESIGHT_LINKS_AND_SINKS) += coresight-funnel.o \
coresight-replicator.o
-obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o
+obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
+ coresight-etm3x-sysfs.o \
+ coresight-etm-perf.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o
obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o
diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
index 77d0f9c..acbce79 100644
--- a/drivers/hwtracing/coresight/coresight-etb10.c
+++ b/drivers/hwtracing/coresight/coresight-etb10.c
@@ -1,5 +1,7 @@
/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
*
+ * Description: CoreSight Embedded Trace Buffer driver
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
@@ -10,8 +12,8 @@
* GNU General Public License for more details.
*/
+#include <asm/local.h>
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/device.h>
@@ -27,6 +29,11 @@
#include <linux/coresight.h>
#include <linux/amba/bus.h>
#include <linux/clk.h>
+#include <linux/circ_buf.h>
+#include <linux/mm.h>
+#include <linux/perf_event.h>
+
+#include <asm/local.h>
#include "coresight-priv.h"
@@ -64,6 +71,26 @@
#define ETB_FRAME_SIZE_WORDS 4
/**
+ * struct cs_buffers - keep track of a recording session's specifics
+ * @cur: index of the current buffer
+ * @nr_pages: max number of pages granted to us
+ * @offset: offset within the current buffer
+ * @data_size: how much we collected in this run
+ * @lost: non-zero if the HW buffer wrapped around
+ * @snapshot: is this run in snapshot mode
+ * @data_pages: a handle to the ring buffer pages
+ */
+struct cs_buffers {
+ unsigned int cur;
+ unsigned int nr_pages;
+ unsigned long offset;
+ local_t data_size;
+ local_t lost;
+ bool snapshot;
+ void **data_pages;
+};
+
+/**
* struct etb_drvdata - specifics associated to an ETB component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
@@ -71,10 +98,10 @@
* @csdev: component vitals needed by the framework.
* @miscdev: specifics to handle "/dev/xyz.etb" entry.
* @spinlock: only one at a time pls.
- * @in_use: synchronise user space access to etb buffer.
+ * @reading: synchronise user space access to etb buffer.
+ * @mode: how this ETB is being used (disabled, sysFS or perf).
* @buf: area of memory where ETB buffer content gets sent.
* @buffer_depth: size of @buf.
- * @enable: this ETB is being used.
* @trigger_cntr: amount of words to store after a trigger.
*/
struct etb_drvdata {
@@ -84,10 +111,10 @@
struct coresight_device *csdev;
struct miscdevice miscdev;
spinlock_t spinlock;
- atomic_t in_use;
+ local_t reading;
+ local_t mode;
u8 *buf;
u32 buffer_depth;
- bool enable;
u32 trigger_cntr;
};
@@ -132,18 +159,31 @@
CS_LOCK(drvdata->base);
}
-static int etb_enable(struct coresight_device *csdev)
+static int etb_enable(struct coresight_device *csdev, u32 mode)
{
- struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ u32 val;
unsigned long flags;
+ struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- pm_runtime_get_sync(drvdata->dev);
+ val = local_cmpxchg(&drvdata->mode,
+ CS_MODE_DISABLED, mode);
+ /*
+ * When accessed from Perf, a HW buffer can only be handled
+ * by a single trace entity. In sysFS mode many tracers
+ * can be logging to the same HW buffer.
+ */
+ if (val == CS_MODE_PERF)
+ return -EBUSY;
+
+ /* Nothing to do, the tracer is already enabled. */
+ if (val == CS_MODE_SYSFS)
+ goto out;
spin_lock_irqsave(&drvdata->spinlock, flags);
etb_enable_hw(drvdata);
- drvdata->enable = true;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
+out:
dev_info(drvdata->dev, "ETB enabled\n");
return 0;
}
@@ -244,17 +284,225 @@
spin_lock_irqsave(&drvdata->spinlock, flags);
etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
- drvdata->enable = false;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(drvdata->dev);
+ local_set(&drvdata->mode, CS_MODE_DISABLED);
dev_info(drvdata->dev, "ETB disabled\n");
}
+static void *etb_alloc_buffer(struct coresight_device *csdev, int cpu,
+ void **pages, int nr_pages, bool overwrite)
+{
+ int node;
+ struct cs_buffers *buf;
+
+ if (cpu == -1)
+ cpu = smp_processor_id();
+ node = cpu_to_node(cpu);
+
+ buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
+ if (!buf)
+ return NULL;
+
+ buf->snapshot = overwrite;
+ buf->nr_pages = nr_pages;
+ buf->data_pages = pages;
+
+ return buf;
+}
+
+static void etb_free_buffer(void *config)
+{
+ struct cs_buffers *buf = config;
+
+ kfree(buf);
+}
+
+static int etb_set_buffer(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config)
+{
+ int ret = 0;
+ unsigned long head;
+ struct cs_buffers *buf = sink_config;
+
+ /* wrap head around to the amount of space we have */
+ head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
+
+ /* find the page to write to */
+ buf->cur = head / PAGE_SIZE;
+
+ /* and offset within that page */
+ buf->offset = head % PAGE_SIZE;
+
+ local_set(&buf->data_size, 0);
+
+ return ret;
+}
+
+static unsigned long etb_reset_buffer(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config, bool *lost)
+{
+ unsigned long size = 0;
+ struct cs_buffers *buf = sink_config;
+
+ if (buf) {
+ /*
+ * In snapshot mode ->data_size holds the new address of the
+ * ring buffer's head. The size itself is the whole address
+ * range since we want the latest information.
+ */
+ if (buf->snapshot)
+ handle->head = local_xchg(&buf->data_size,
+ buf->nr_pages << PAGE_SHIFT);
+
+ /*
+ * Tell the tracer PMU how much we got in this run and if
+ * something went wrong along the way. Nobody else can use
+ * this cs_buffers instance until we are done. As such
+ * resetting parameters here and squaring off with the ring
+ * buffer API in the tracer PMU is fine.
+ */
+ *lost = !!local_xchg(&buf->lost, 0);
+ size = local_xchg(&buf->data_size, 0);
+ }
+
+ return size;
+}
+
+static void etb_update_buffer(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config)
+{
+ int i, cur;
+ u8 *buf_ptr;
+ u32 read_ptr, write_ptr, capacity;
+ u32 status, read_data, to_read;
+ unsigned long offset;
+ struct cs_buffers *buf = sink_config;
+ struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ if (!buf)
+ return;
+
+ capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS;
+
+ CS_UNLOCK(drvdata->base);
+ etb_disable_hw(drvdata);
+
+ /* unit is in words, not bytes */
+ read_ptr = readl_relaxed(drvdata->base + ETB_RAM_READ_POINTER);
+ write_ptr = readl_relaxed(drvdata->base + ETB_RAM_WRITE_POINTER);
+
+ /*
+ * Entries should be aligned to the frame size. If they are not,
+ * go back to the last alignment point to give decoding tools a
+ * chance to fix things.
+ */
+ if (write_ptr % ETB_FRAME_SIZE_WORDS) {
+ dev_err(drvdata->dev,
+ "write_ptr: %lu not aligned to formatter frame size\n",
+ (unsigned long)write_ptr);
+
+ write_ptr &= ~(ETB_FRAME_SIZE_WORDS - 1);
+ local_inc(&buf->lost);
+ }
+
+ /*
+ * Get a hold of the status register and see if a wrap around
+ * has occurred. If so adjust things accordingly. Otherwise
+ * start at the beginning and go until the write pointer has
+ * been reached.
+ */
+ status = readl_relaxed(drvdata->base + ETB_STATUS_REG);
+ if (status & ETB_STATUS_RAM_FULL) {
+ local_inc(&buf->lost);
+ to_read = capacity;
+ read_ptr = write_ptr;
+ } else {
+ to_read = CIRC_CNT(write_ptr, read_ptr, drvdata->buffer_depth);
+ to_read *= ETB_FRAME_SIZE_WORDS;
+ }
+
+ /*
+ * Make sure we don't overwrite data that hasn't been consumed yet.
+ * It is entirely possible that the HW buffer has more data than the
+ * ring buffer can currently handle. If so adjust the start address
+ * to take only the last traces.
+ *
+ * In snapshot mode we only want the latest traces, so it is fine
+ * to overwrite data that user space hasn't processed yet.
+ */
+ if (!buf->snapshot && to_read > handle->size) {
+ u32 mask = ~(ETB_FRAME_SIZE_WORDS - 1);
+
+ /* The new read pointer must be frame size aligned */
+ to_read -= handle->size & mask;
+ /*
+ * Move the RAM read pointer up, keeping in mind that
+ * everything is in frame size units.
+ */
+ read_ptr = (write_ptr + drvdata->buffer_depth) -
+ to_read / ETB_FRAME_SIZE_WORDS;
+ /* Wrap around if need be */
+ read_ptr &= ~(drvdata->buffer_depth - 1);
+ /* let the decoder know we've skipped ahead */
+ local_inc(&buf->lost);
+ }
+
+ /* finally tell HW where we want to start reading from */
+ writel_relaxed(read_ptr, drvdata->base + ETB_RAM_READ_POINTER);
+
+ cur = buf->cur;
+ offset = buf->offset;
+ for (i = 0; i < to_read; i += 4) {
+ buf_ptr = buf->data_pages[cur] + offset;
+ read_data = readl_relaxed(drvdata->base +
+ ETB_RAM_READ_DATA_REG);
+ *buf_ptr++ = read_data >> 0;
+ *buf_ptr++ = read_data >> 8;
+ *buf_ptr++ = read_data >> 16;
+ *buf_ptr++ = read_data >> 24;
+
+ offset += 4;
+ if (offset >= PAGE_SIZE) {
+ offset = 0;
+ cur++;
+ /* wrap around at the end of the buffer */
+ cur &= buf->nr_pages - 1;
+ }
+ }
+
+ /* reset ETB buffer for next run */
+ writel_relaxed(0x0, drvdata->base + ETB_RAM_READ_POINTER);
+ writel_relaxed(0x0, drvdata->base + ETB_RAM_WRITE_POINTER);
+
+ /*
+ * In snapshot mode all we have to do is communicate to
+ * perf_aux_output_end() the address of the current head. In full
+ * trace mode the same function expects a size to move rb->aux_head
+ * forward.
+ */
+ if (buf->snapshot)
+ local_set(&buf->data_size, (cur * PAGE_SIZE) + offset);
+ else
+ local_add(to_read, &buf->data_size);
+
+ etb_enable_hw(drvdata);
+ CS_LOCK(drvdata->base);
+}
+
static const struct coresight_ops_sink etb_sink_ops = {
.enable = etb_enable,
.disable = etb_disable,
+ .alloc_buffer = etb_alloc_buffer,
+ .free_buffer = etb_free_buffer,
+ .set_buffer = etb_set_buffer,
+ .reset_buffer = etb_reset_buffer,
+ .update_buffer = etb_update_buffer,
};
static const struct coresight_ops etb_cs_ops = {
@@ -266,7 +514,7 @@
unsigned long flags;
spin_lock_irqsave(&drvdata->spinlock, flags);
- if (drvdata->enable) {
+ if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
etb_enable_hw(drvdata);
@@ -281,7 +529,7 @@
struct etb_drvdata *drvdata = container_of(file->private_data,
struct etb_drvdata, miscdev);
- if (atomic_cmpxchg(&drvdata->in_use, 0, 1))
+ if (local_cmpxchg(&drvdata->reading, 0, 1))
return -EBUSY;
dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__);
@@ -317,7 +565,7 @@
{
struct etb_drvdata *drvdata = container_of(file->private_data,
struct etb_drvdata, miscdev);
- atomic_set(&drvdata->in_use, 0);
+ local_set(&drvdata->reading, 0);
dev_dbg(drvdata->dev, "%s: released\n", __func__);
return 0;
@@ -489,15 +737,6 @@
return ret;
}
-static int etb_remove(struct amba_device *adev)
-{
- struct etb_drvdata *drvdata = amba_get_drvdata(adev);
-
- misc_deregister(&drvdata->miscdev);
- coresight_unregister(drvdata->csdev);
- return 0;
-}
-
#ifdef CONFIG_PM
static int etb_runtime_suspend(struct device *dev)
{
@@ -537,14 +776,10 @@
.name = "coresight-etb10",
.owner = THIS_MODULE,
.pm = &etb_dev_pm_ops,
+ .suppress_bind_attrs = true,
},
.probe = etb_probe,
- .remove = etb_remove,
.id_table = etb_ids,
};
-
-module_amba_driver(etb_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("CoreSight Embedded Trace Buffer driver");
+builtin_amba_driver(etb_driver);
diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
new file mode 100644
index 0000000..36153a7
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
@@ -0,0 +1,393 @@
+/*
+ * Copyright(C) 2015 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/coresight.h>
+#include <linux/coresight-pmu.h>
+#include <linux/cpumask.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/perf_event.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#include "coresight-priv.h"
+
+static struct pmu etm_pmu;
+static bool etm_perf_up;
+
+/**
+ * struct etm_event_data - Coresight specifics associated with an event
+ * @work:		Handle to free allocated memory outside IRQ context.
+ * @mask:		Holds the CPU(s) this event was set for.
+ * @snk_config:	The sink configuration.
+ * @path:		An array of paths, one per CPU.
+ */
+struct etm_event_data {
+ struct work_struct work;
+ cpumask_t mask;
+ void *snk_config;
+ struct list_head **path;
+};
+
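+/* AUX ring buffer handle and tracer (source) device for each CPU */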
+static DEFINE_PER_CPU(struct perf_output_handle, ctx_handle);
+static DEFINE_PER_CPU(struct coresight_device *, csdev_src);
+
+/* ETMv3.5/PTM's ETMCR is 'config' */
+PMU_FORMAT_ATTR(cycacc, "config:" __stringify(ETM_OPT_CYCACC));
+PMU_FORMAT_ATTR(timestamp, "config:" __stringify(ETM_OPT_TS));
+
+static struct attribute *etm_config_formats_attr[] = {
+ &format_attr_cycacc.attr,
+ &format_attr_timestamp.attr,
+ NULL,
+};
+
+static struct attribute_group etm_pmu_format_group = {
+ .name = "format",
+ .attrs = etm_config_formats_attr,
+};
+
+static const struct attribute_group *etm_pmu_attr_groups[] = {
+ &etm_pmu_format_group,
+ NULL,
+};
+
+static void etm_event_read(struct perf_event *event) {}
+
+static int etm_event_init(struct perf_event *event)
+{
+ if (event->attr.type != etm_pmu.type)
+ return -ENOENT;
+
+ return 0;
+}
+
+static void free_event_data(struct work_struct *work)
+{
+ int cpu;
+ cpumask_t *mask;
+ struct etm_event_data *event_data;
+ struct coresight_device *sink;
+
+ event_data = container_of(work, struct etm_event_data, work);
+ mask = &event_data->mask;
+ /*
+ * First deal with the sink configuration. See comment in
+ * etm_setup_aux() about why we take the first available path.
+ */
+ if (event_data->snk_config) {
+ cpu = cpumask_first(mask);
+ sink = coresight_get_sink(event_data->path[cpu]);
+ if (sink_ops(sink)->free_buffer)
+ sink_ops(sink)->free_buffer(event_data->snk_config);
+ }
+
+ for_each_cpu(cpu, mask) {
+ if (event_data->path[cpu])
+ coresight_release_path(event_data->path[cpu]);
+ }
+
+ kfree(event_data->path);
+ kfree(event_data);
+}
+
+static void *alloc_event_data(int cpu)
+{
+ int size;
+ cpumask_t *mask;
+ struct etm_event_data *event_data;
+
+ /* First get memory for the session's data */
+ event_data = kzalloc(sizeof(struct etm_event_data), GFP_KERNEL);
+ if (!event_data)
+ return NULL;
+
+ /* Make sure nothing disappears under us */
+ get_online_cpus();
+ size = num_online_cpus();
+
+ mask = &event_data->mask;
+ if (cpu != -1)
+ cpumask_set_cpu(cpu, mask);
+ else
+ cpumask_copy(mask, cpu_online_mask);
+ put_online_cpus();
+
+ /*
+ * Each CPU has a single path between source and destination. As such
+ * allocate an array using CPU numbers as indexes. That way a path
+ * for any CPU can easily be accessed at any given time. We proceed
+ * the same way for sessions involving a single CPU. The cost of
+ * unused memory when dealing with single CPU trace scenarios is small
+ * compared to the cost of searching through an optimized array.
+ */
+ event_data->path = kcalloc(size,
+ sizeof(struct list_head *), GFP_KERNEL);
+ if (!event_data->path) {
+ kfree(event_data);
+ return NULL;
+ }
+
+ return event_data;
+}
+
+static void etm_free_aux(void *data)
+{
+ struct etm_event_data *event_data = data;
+
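+	/* Memory is freed outside of IRQ context, see free_event_data() */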
+ schedule_work(&event_data->work);
+}
+
+static void *etm_setup_aux(int event_cpu, void **pages,
+ int nr_pages, bool overwrite)
+{
+ int cpu;
+ cpumask_t *mask;
+ struct coresight_device *sink;
+ struct etm_event_data *event_data = NULL;
+
+ event_data = alloc_event_data(event_cpu);
+ if (!event_data)
+ return NULL;
+
+ INIT_WORK(&event_data->work, free_event_data);
+
+ mask = &event_data->mask;
+
+ /* Setup the path for each CPU in a trace session */
+ for_each_cpu(cpu, mask) {
+ struct coresight_device *csdev;
+
+ csdev = per_cpu(csdev_src, cpu);
+ if (!csdev)
+ goto err;
+
+ /*
+ * Building a path doesn't enable it, it simply builds a
+ * list of devices from source to sink that can be
+ * referenced later when the path is actually needed.
+ */
+ event_data->path[cpu] = coresight_build_path(csdev);
+ if (!event_data->path[cpu])
+ goto err;
+ }
+
+	/*
+	 * In theory nothing prevents tracers in a trace session from being
+	 * associated with different sinks, nor having a sink per tracer. But
+	 * until we have HW with this kind of topology and a way to convey
+	 * sink assignment from the perf cmd line we need to assume tracers
+	 * in a trace session are using the same sink. Therefore pick the sink
+	 * found at the end of the first available path.
+	 */
+ cpu = cpumask_first(mask);
+ /* Grab the sink at the end of the path */
+ sink = coresight_get_sink(event_data->path[cpu]);
+ if (!sink)
+ goto err;
+
+ if (!sink_ops(sink)->alloc_buffer)
+ goto err;
+
+ /* Get the AUX specific data from the sink buffer */
+ event_data->snk_config =
+ sink_ops(sink)->alloc_buffer(sink, cpu, pages,
+ nr_pages, overwrite);
+ if (!event_data->snk_config)
+ goto err;
+
+out:
+ return event_data;
+
+err:
+ etm_free_aux(event_data);
+ event_data = NULL;
+ goto out;
+}
+
+static void etm_event_start(struct perf_event *event, int flags)
+{
+ int cpu = smp_processor_id();
+ struct etm_event_data *event_data;
+ struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle);
+ struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu);
+
+ if (!csdev)
+ goto fail;
+
+ /*
+ * Deal with the ring buffer API and get a handle on the
+ * session's information.
+ */
+ event_data = perf_aux_output_begin(handle, event);
+ if (!event_data)
+ goto fail;
+
+ /* We need a sink, no need to continue without one */
+ sink = coresight_get_sink(event_data->path[cpu]);
+ if (WARN_ON_ONCE(!sink || !sink_ops(sink)->set_buffer))
+ goto fail_end_stop;
+
+ /* Configure the sink */
+ if (sink_ops(sink)->set_buffer(sink, handle,
+ event_data->snk_config))
+ goto fail_end_stop;
+
+ /* Nothing will happen without a path */
+ if (coresight_enable_path(event_data->path[cpu], CS_MODE_PERF))
+ goto fail_end_stop;
+
+ /* Tell the perf core the event is alive */
+ event->hw.state = 0;
+
+ /* Finally enable the tracer */
+ if (source_ops(csdev)->enable(csdev, &event->attr, CS_MODE_PERF))
+ goto fail_end_stop;
+
+out:
+ return;
+
+fail_end_stop:
+ perf_aux_output_end(handle, 0, true);
+fail:
+ event->hw.state = PERF_HES_STOPPED;
+ goto out;
+}
+
+static void etm_event_stop(struct perf_event *event, int mode)
+{
+ bool lost;
+ int cpu = smp_processor_id();
+ unsigned long size;
+ struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu);
+ struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle);
+ struct etm_event_data *event_data = perf_get_aux(handle);
+
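+	/* Nothing to do if the event is already stopped */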
+ if (event->hw.state == PERF_HES_STOPPED)
+ return;
+
+ if (!csdev)
+ return;
+
+ sink = coresight_get_sink(event_data->path[cpu]);
+ if (!sink)
+ return;
+
+ /* stop tracer */
+ source_ops(csdev)->disable(csdev);
+
+ /* tell the core */
+ event->hw.state = PERF_HES_STOPPED;
+
+ if (mode & PERF_EF_UPDATE) {
+ if (WARN_ON_ONCE(handle->event != event))
+ return;
+
+ /* update trace information */
+ if (!sink_ops(sink)->update_buffer)
+ return;
+
+ sink_ops(sink)->update_buffer(sink, handle,
+ event_data->snk_config);
+
+ if (!sink_ops(sink)->reset_buffer)
+ return;
+
+ size = sink_ops(sink)->reset_buffer(sink, handle,
+ event_data->snk_config,
+ &lost);
+
+ perf_aux_output_end(handle, size, lost);
+ }
+
+	/* Disabling the path makes its elements available to other sessions */
+ coresight_disable_path(event_data->path[cpu]);
+}
+
+static int etm_event_add(struct perf_event *event, int mode)
+{
+ int ret = 0;
+ struct hw_perf_event *hwc = &event->hw;
+
+ if (mode & PERF_EF_START) {
+ etm_event_start(event, 0);
+ if (hwc->state & PERF_HES_STOPPED)
+ ret = -EINVAL;
+ } else {
+ hwc->state = PERF_HES_STOPPED;
+ }
+
+ return ret;
+}
+
+static void etm_event_del(struct perf_event *event, int mode)
+{
+ etm_event_stop(event, PERF_EF_UPDATE);
+}
+
+int etm_perf_symlink(struct coresight_device *csdev, bool link)
+{
+ char entry[sizeof("cpu9999999")];
+ int ret = 0, cpu = source_ops(csdev)->cpu_id(csdev);
+ struct device *pmu_dev = etm_pmu.dev;
+ struct device *cs_dev = &csdev->dev;
+
+ sprintf(entry, "cpu%d", cpu);
+
+ if (!etm_perf_up)
+ return -EPROBE_DEFER;
+
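+	/*
+	 * Expose the tracer as a "cpuN" symlink under the PMU's sysfs
+	 * directory and record it in the per-cpu source table used by
+	 * etm_setup_aux().
+	 */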
+ if (link) {
+ ret = sysfs_create_link(&pmu_dev->kobj, &cs_dev->kobj, entry);
+ if (ret)
+ return ret;
+ per_cpu(csdev_src, cpu) = csdev;
+ } else {
+ sysfs_remove_link(&pmu_dev->kobj, entry);
+ per_cpu(csdev_src, cpu) = NULL;
+ }
+
+ return 0;
+}
+
+static int __init etm_perf_init(void)
+{
+ int ret;
+
+ etm_pmu.capabilities = PERF_PMU_CAP_EXCLUSIVE;
+
+ etm_pmu.attr_groups = etm_pmu_attr_groups;
+ etm_pmu.task_ctx_nr = perf_sw_context;
+ etm_pmu.read = etm_event_read;
+ etm_pmu.event_init = etm_event_init;
+ etm_pmu.setup_aux = etm_setup_aux;
+ etm_pmu.free_aux = etm_free_aux;
+ etm_pmu.start = etm_event_start;
+ etm_pmu.stop = etm_event_stop;
+ etm_pmu.add = etm_event_add;
+ etm_pmu.del = etm_event_del;
+
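+	/*
+	 * Registering with type -1 asks the core for a dynamic PMU type.
+	 * Once up, the tracers can be driven from user space with, for
+	 * example, "perf record -e cs_etm// --per-thread <cmd>", assuming
+	 * a sink has been selected beforehand.
+	 */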
+ ret = perf_pmu_register(&etm_pmu, CORESIGHT_ETM_PMU_NAME, -1);
+ if (ret == 0)
+ etm_perf_up = true;
+
+ return ret;
+}
+module_init(etm_perf_init);
diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.h b/drivers/hwtracing/coresight/coresight-etm-perf.h
new file mode 100644
index 0000000..87f5a13
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-etm-perf.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright(C) 2015 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _CORESIGHT_ETM_PERF_H
+#define _CORESIGHT_ETM_PERF_H
+
+struct coresight_device;
+
+#ifdef CONFIG_CORESIGHT
+int etm_perf_symlink(struct coresight_device *csdev, bool link);
+
+#else
+static inline int etm_perf_symlink(struct coresight_device *csdev, bool link)
+{ return -EINVAL; }
+
+#endif /* CONFIG_CORESIGHT */
+
+#endif
diff --git a/drivers/hwtracing/coresight/coresight-etm.h b/drivers/hwtracing/coresight/coresight-etm.h
index b4481eb..51597cb 100644
--- a/drivers/hwtracing/coresight/coresight-etm.h
+++ b/drivers/hwtracing/coresight/coresight-etm.h
@@ -13,6 +13,7 @@
#ifndef _CORESIGHT_CORESIGHT_ETM_H
#define _CORESIGHT_CORESIGHT_ETM_H
+#include <asm/local.h>
#include <linux/spinlock.h>
#include "coresight-priv.h"
@@ -109,7 +110,10 @@
#define ETM_MODE_STALL BIT(2)
#define ETM_MODE_TIMESTAMP BIT(3)
#define ETM_MODE_CTXID BIT(4)
-#define ETM_MODE_ALL 0x1f
+#define ETM_MODE_ALL (ETM_MODE_EXCLUDE | ETM_MODE_CYCACC | \
+ ETM_MODE_STALL | ETM_MODE_TIMESTAMP | \
+ ETM_MODE_CTXID | ETM_MODE_EXCL_KERN | \
+ ETM_MODE_EXCL_USER)
#define ETM_SQR_MASK 0x3
#define ETM_TRACEID_MASK 0x3f
@@ -136,35 +140,16 @@
#define ETM_DEFAULT_EVENT_VAL (ETM_HARD_WIRE_RES_A | \
ETM_ADD_COMP_0 | \
ETM_EVENT_NOT_A)
+
/**
- * struct etm_drvdata - specifics associated to an ETM component
- * @base: memory mapped base address for this component.
- * @dev: the device entity associated to this component.
- * @atclk: optional clock for the core parts of the ETM.
- * @csdev: component vitals needed by the framework.
- * @spinlock: only one at a time pls.
- * @cpu: the cpu this component is affined to.
- * @port_size: port size as reported by ETMCR bit 4-6 and 21.
- * @arch: ETM/PTM version number.
- * @use_cpu14: true if management registers need to be accessed via CP14.
- * @enable: is this ETM/PTM currently tracing.
- * @sticky_enable: true if ETM base configuration has been done.
- * @boot_enable:true if we should start tracing at boot time.
- * @os_unlock: true if access to management registers is allowed.
- * @nr_addr_cmp:Number of pairs of address comparators as found in ETMCCR.
- * @nr_cntr: Number of counters as found in ETMCCR bit 13-15.
- * @nr_ext_inp: Number of external input as found in ETMCCR bit 17-19.
- * @nr_ext_out: Number of external output as found in ETMCCR bit 20-22.
- * @nr_ctxid_cmp: Number of contextID comparators as found in ETMCCR bit 24-25.
- * @etmccr: value of register ETMCCR.
- * @etmccer: value of register ETMCCER.
- * @traceid: value of the current ID for this component.
+ * struct etm_config - configuration information related to an ETM
* @mode: controls various modes supported by this ETM/PTM.
* @ctrl: used in conjunction with @mode.
* @trigger_event: setting for register ETMTRIGGER.
* @startstop_ctrl: setting for register ETMTSSCR.
* @enable_event: setting for register ETMTEEVR.
* @enable_ctrl1: setting for register ETMTECR1.
+ * @enable_ctrl2: setting for register ETMTECR2.
* @fifofull_level: setting for register ETMFFLR.
* @addr_idx: index for the address comparator selection.
* @addr_val: value for address comparator register.
@@ -189,36 +174,16 @@
* @ctxid_mask: mask applicable to all the context IDs.
* @sync_freq: Synchronisation frequency.
* @timestamp_event: Defines an event that requests the insertion
- of a timestamp into the trace stream.
+ * of a timestamp into the trace stream.
*/
-struct etm_drvdata {
- void __iomem *base;
- struct device *dev;
- struct clk *atclk;
- struct coresight_device *csdev;
- spinlock_t spinlock;
- int cpu;
- int port_size;
- u8 arch;
- bool use_cp14;
- bool enable;
- bool sticky_enable;
- bool boot_enable;
- bool os_unlock;
- u8 nr_addr_cmp;
- u8 nr_cntr;
- u8 nr_ext_inp;
- u8 nr_ext_out;
- u8 nr_ctxid_cmp;
- u32 etmccr;
- u32 etmccer;
- u32 traceid;
+struct etm_config {
u32 mode;
u32 ctrl;
u32 trigger_event;
u32 startstop_ctrl;
u32 enable_event;
u32 enable_ctrl1;
+ u32 enable_ctrl2;
u32 fifofull_level;
u8 addr_idx;
u32 addr_val[ETM_MAX_ADDR_CMP];
@@ -244,6 +209,56 @@
u32 timestamp_event;
};
+/**
+ * struct etm_drvdata - specifics associated with an ETM component
+ * @base:	memory mapped base address for this component.
+ * @dev:	the device entity associated with this component.
+ * @atclk:	optional clock for the core parts of the ETM.
+ * @csdev:	component vitals needed by the framework.
+ * @spinlock:	only one at a time pls.
+ * @cpu:	the cpu this component is affined to.
+ * @port_size:	port size as reported by ETMCR bit 4-6 and 21.
+ * @arch:	ETM/PTM version number.
+ * @use_cp14:	true if management registers need to be accessed via CP14.
+ * @mode:	this tracer's mode, i.e. sysFS, Perf or disabled.
+ * @sticky_enable: true if ETM base configuration has been done.
+ * @boot_enable: true if we should start tracing at boot time.
+ * @os_unlock:	true if access to management registers is allowed.
+ * @nr_addr_cmp: Number of pairs of address comparators as found in ETMCCR.
+ * @nr_cntr: Number of counters as found in ETMCCR bit 13-15.
+ * @nr_ext_inp: Number of external input as found in ETMCCR bit 17-19.
+ * @nr_ext_out: Number of external output as found in ETMCCR bit 20-22.
+ * @nr_ctxid_cmp: Number of contextID comparators as found in ETMCCR bit 24-25.
+ * @etmccr: value of register ETMCCR.
+ * @etmccer: value of register ETMCCER.
+ * @traceid: value of the current ID for this component.
+ * @config: structure holding configuration parameters.
+ */
+struct etm_drvdata {
+ void __iomem *base;
+ struct device *dev;
+ struct clk *atclk;
+ struct coresight_device *csdev;
+ spinlock_t spinlock;
+ int cpu;
+ int port_size;
+ u8 arch;
+ bool use_cp14;
+ local_t mode;
+ bool sticky_enable;
+ bool boot_enable;
+ bool os_unlock;
+ u8 nr_addr_cmp;
+ u8 nr_cntr;
+ u8 nr_ext_inp;
+ u8 nr_ext_out;
+ u8 nr_ctxid_cmp;
+ u32 etmccr;
+ u32 etmccer;
+ u32 traceid;
+ struct etm_config config;
+};
+
enum etm_addr_type {
ETM_ADDR_TYPE_NONE,
ETM_ADDR_TYPE_SINGLE,
@@ -251,4 +266,39 @@
ETM_ADDR_TYPE_START,
ETM_ADDR_TYPE_STOP,
};
+
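+/* Accessors choosing between CP14 and memory-mapped ETM register access */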
+static inline void etm_writel(struct etm_drvdata *drvdata,
+ u32 val, u32 off)
+{
+ if (drvdata->use_cp14) {
+ if (etm_writel_cp14(off, val)) {
+ dev_err(drvdata->dev,
+ "invalid CP14 access to ETM reg: %#x", off);
+ }
+ } else {
+ writel_relaxed(val, drvdata->base + off);
+ }
+}
+
+static inline unsigned int etm_readl(struct etm_drvdata *drvdata, u32 off)
+{
+ u32 val;
+
+ if (drvdata->use_cp14) {
+ if (etm_readl_cp14(off, &val)) {
+ dev_err(drvdata->dev,
+ "invalid CP14 access to ETM reg: %#x", off);
+ }
+ } else {
+ val = readl_relaxed(drvdata->base + off);
+ }
+
+ return val;
+}
+
+extern const struct attribute_group *coresight_etm_groups[];
+int etm_get_trace_id(struct etm_drvdata *drvdata);
+void etm_set_default(struct etm_config *config);
+void etm_config_trace_mode(struct etm_config *config);
+struct etm_config *get_etm_config(struct etm_drvdata *drvdata);
#endif
diff --git a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
new file mode 100644
index 0000000..cbb4046
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
@@ -0,0 +1,1272 @@
+/*
+ * Copyright(C) 2015 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/pm_runtime.h>
+#include <linux/sysfs.h>
+#include "coresight-etm.h"
+
+static ssize_t nr_addr_cmp_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_addr_cmp;
+ return sprintf(buf, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_addr_cmp);
+
+static ssize_t nr_cntr_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+	unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_cntr;
+ return sprintf(buf, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_cntr);
+
+static ssize_t nr_ctxid_cmp_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_ctxid_cmp;
+ return sprintf(buf, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_ctxid_cmp);
+
+static ssize_t etmsr_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long flags, val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ pm_runtime_get_sync(drvdata->dev);
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ CS_UNLOCK(drvdata->base);
+
+ val = etm_readl(drvdata, ETMSR);
+
+ CS_LOCK(drvdata->base);
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ pm_runtime_put(drvdata->dev);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(etmsr);
+
+static ssize_t reset_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int i, ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
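+	/* Writing any non-zero value restores the default configuration */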
+ if (val) {
+ spin_lock(&drvdata->spinlock);
+ memset(config, 0, sizeof(struct etm_config));
+ config->mode = ETM_MODE_EXCLUDE;
+ config->trigger_event = ETM_DEFAULT_EVENT_VAL;
+		for (i = 0; i < drvdata->nr_addr_cmp; i++)
+			config->addr_type[i] = ETM_ADDR_TYPE_NONE;
+
+ etm_set_default(config);
+ spin_unlock(&drvdata->spinlock);
+ }
+
+ return size;
+}
+static DEVICE_ATTR_WO(reset);
+
+static ssize_t mode_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->mode;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t mode_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ config->mode = val & ETM_MODE_ALL;
+
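+	/* Map each generic mode flag to its hardware control bit */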
+ if (config->mode & ETM_MODE_EXCLUDE)
+ config->enable_ctrl1 |= ETMTECR1_INC_EXC;
+ else
+ config->enable_ctrl1 &= ~ETMTECR1_INC_EXC;
+
+ if (config->mode & ETM_MODE_CYCACC)
+ config->ctrl |= ETMCR_CYC_ACC;
+ else
+ config->ctrl &= ~ETMCR_CYC_ACC;
+
+ if (config->mode & ETM_MODE_STALL) {
+ if (!(drvdata->etmccr & ETMCCR_FIFOFULL)) {
+ dev_warn(drvdata->dev, "stall mode not supported\n");
+ ret = -EINVAL;
+ goto err_unlock;
+ }
+ config->ctrl |= ETMCR_STALL_MODE;
+	} else {
+		config->ctrl &= ~ETMCR_STALL_MODE;
+	}
+
+ if (config->mode & ETM_MODE_TIMESTAMP) {
+ if (!(drvdata->etmccer & ETMCCER_TIMESTAMP)) {
+ dev_warn(drvdata->dev, "timestamp not supported\n");
+ ret = -EINVAL;
+ goto err_unlock;
+ }
+ config->ctrl |= ETMCR_TIMESTAMP_EN;
+	} else {
+		config->ctrl &= ~ETMCR_TIMESTAMP_EN;
+	}
+
+ if (config->mode & ETM_MODE_CTXID)
+ config->ctrl |= ETMCR_CTXID_SIZE;
+ else
+ config->ctrl &= ~ETMCR_CTXID_SIZE;
+
+ if (config->mode & (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER))
+ etm_config_trace_mode(config);
+
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+
+err_unlock:
+ spin_unlock(&drvdata->spinlock);
+ return ret;
+}
+static DEVICE_ATTR_RW(mode);
+
+static ssize_t trigger_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->trigger_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t trigger_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->trigger_event = val & ETM_EVENT_MASK;
+
+ return size;
+}
+static DEVICE_ATTR_RW(trigger_event);
+
+static ssize_t enable_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->enable_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t enable_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->enable_event = val & ETM_EVENT_MASK;
+
+ return size;
+}
+static DEVICE_ATTR_RW(enable_event);
+
+static ssize_t fifofull_level_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->fifofull_level;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t fifofull_level_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->fifofull_level = val;
+
+ return size;
+}
+static DEVICE_ATTR_RW(fifofull_level);
+
+static ssize_t addr_idx_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->addr_idx;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t addr_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ if (val >= drvdata->nr_addr_cmp)
+ return -EINVAL;
+
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->addr_idx = val;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(addr_idx);
+
+static ssize_t addr_single_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EINVAL;
+ }
+
+ val = config->addr_val[idx];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t addr_single_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EINVAL;
+ }
+
+ config->addr_val[idx] = val;
+ config->addr_type[idx] = ETM_ADDR_TYPE_SINGLE;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(addr_single);
+
+static ssize_t addr_range_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ u8 idx;
+ unsigned long val1, val2;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (idx % 2 != 0) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+ if (!((config->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
+ config->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
+ (config->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
+ config->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ val1 = config->addr_val[idx];
+ val2 = config->addr_val[idx + 1];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx %#lx\n", val1, val2);
+}
+
+static ssize_t addr_range_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val1, val2;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
+ return -EINVAL;
+ /* Lower address comparator cannot have a higher address value */
+ if (val1 > val2)
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
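+	/* Address range comparators are implemented as even/odd pairs */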
+ if (idx % 2 != 0) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+ if (!((config->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
+ config->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
+ (config->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
+ config->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ config->addr_val[idx] = val1;
+ config->addr_type[idx] = ETM_ADDR_TYPE_RANGE;
+ config->addr_val[idx + 1] = val2;
+ config->addr_type[idx + 1] = ETM_ADDR_TYPE_RANGE;
+	config->enable_ctrl1 |= (1 << (idx / 2));
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(addr_range);
+
+static ssize_t addr_start_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_START)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ val = config->addr_val[idx];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t addr_start_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_START)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ config->addr_val[idx] = val;
+ config->addr_type[idx] = ETM_ADDR_TYPE_START;
+ config->startstop_ctrl |= (1 << idx);
+	config->enable_ctrl1 |= ETMTECR1_START_STOP;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(addr_start);
+
+static ssize_t addr_stop_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ val = config->addr_val[idx];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t addr_stop_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ config->addr_val[idx] = val;
+ config->addr_type[idx] = ETM_ADDR_TYPE_STOP;
+ config->startstop_ctrl |= (1 << (idx + 16));
+ config->enable_ctrl1 |= ETMTECR1_START_STOP;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(addr_stop);
+
+static ssize_t addr_acctype_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ val = config->addr_acctype[config->addr_idx];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t addr_acctype_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ config->addr_acctype[config->addr_idx] = val;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(addr_acctype);
+
+static ssize_t cntr_idx_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->cntr_idx;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t cntr_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ if (val >= drvdata->nr_cntr)
+ return -EINVAL;
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->cntr_idx = val;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(cntr_idx);
+
+static ssize_t cntr_rld_val_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ val = config->cntr_rld_val[config->cntr_idx];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t cntr_rld_val_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ config->cntr_rld_val[config->cntr_idx] = val;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(cntr_rld_val);
+
+static ssize_t cntr_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ val = config->cntr_event[config->cntr_idx];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t cntr_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ config->cntr_event[config->cntr_idx] = val & ETM_EVENT_MASK;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(cntr_event);
+
+static ssize_t cntr_rld_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ val = config->cntr_rld_event[config->cntr_idx];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t cntr_rld_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ config->cntr_rld_event[config->cntr_idx] = val & ETM_EVENT_MASK;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(cntr_rld_event);
+
+static ssize_t cntr_val_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int i, ret = 0;
+ u32 val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ if (!local_read(&drvdata->mode)) {
+ spin_lock(&drvdata->spinlock);
+ for (i = 0; i < drvdata->nr_cntr; i++)
+			ret += sprintf(buf + ret, "counter %d: %x\n",
+				       i, config->cntr_val[i]);
+ spin_unlock(&drvdata->spinlock);
+ return ret;
+ }
+
+ for (i = 0; i < drvdata->nr_cntr; i++) {
+ val = etm_readl(drvdata, ETMCNTVRn(i));
+		ret += sprintf(buf + ret, "counter %d: %x\n", i, val);
+ }
+
+ return ret;
+}
+
+static ssize_t cntr_val_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ config->cntr_val[config->cntr_idx] = val;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(cntr_val);
+
+static ssize_t seq_12_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->seq_12_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t seq_12_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->seq_12_event = val & ETM_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(seq_12_event);
+
+static ssize_t seq_21_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->seq_21_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t seq_21_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->seq_21_event = val & ETM_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(seq_21_event);
+
+static ssize_t seq_23_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->seq_23_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t seq_23_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->seq_23_event = val & ETM_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(seq_23_event);
+
+static ssize_t seq_31_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->seq_31_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t seq_31_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->seq_31_event = val & ETM_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(seq_31_event);
+
+static ssize_t seq_32_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->seq_32_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t seq_32_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->seq_32_event = val & ETM_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(seq_32_event);
+
+static ssize_t seq_13_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->seq_13_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t seq_13_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->seq_13_event = val & ETM_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(seq_13_event);
+
+static ssize_t seq_curr_state_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val, flags;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ if (!local_read(&drvdata->mode)) {
+ val = config->seq_curr_state;
+ goto out;
+ }
+
+ pm_runtime_get_sync(drvdata->dev);
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+
+ CS_UNLOCK(drvdata->base);
+ val = (etm_readl(drvdata, ETMSQR) & ETM_SQR_MASK);
+ CS_LOCK(drvdata->base);
+
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ pm_runtime_put(drvdata->dev);
+out:
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t seq_curr_state_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ if (val > ETM_SEQ_STATE_MAX_VAL)
+ return -EINVAL;
+
+ config->seq_curr_state = val;
+
+ return size;
+}
+static DEVICE_ATTR_RW(seq_curr_state);
+
+static ssize_t ctxid_idx_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->ctxid_idx;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t ctxid_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ if (val >= drvdata->nr_ctxid_cmp)
+ return -EINVAL;
+
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->ctxid_idx = val;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(ctxid_idx);
+
+static ssize_t ctxid_pid_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ val = config->ctxid_vpid[config->ctxid_idx];
+ spin_unlock(&drvdata->spinlock);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t ctxid_pid_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long vpid, pid;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &vpid);
+ if (ret)
+ return ret;
+
+ pid = coresight_vpid_to_pid(vpid);
+
+ spin_lock(&drvdata->spinlock);
+ config->ctxid_pid[config->ctxid_idx] = pid;
+ config->ctxid_vpid[config->ctxid_idx] = vpid;
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(ctxid_pid);
+
+static ssize_t ctxid_mask_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->ctxid_mask;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t ctxid_mask_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->ctxid_mask = val;
+ return size;
+}
+static DEVICE_ATTR_RW(ctxid_mask);
+
+static ssize_t sync_freq_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->sync_freq;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t sync_freq_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->sync_freq = val & ETM_SYNC_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(sync_freq);
+
+static ssize_t timestamp_event_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ val = config->timestamp_event;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t timestamp_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etm_config *config = &drvdata->config;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ config->timestamp_event = val & ETM_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(timestamp_event);
+
+static ssize_t cpu_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->cpu;
+ return scnprintf(buf, PAGE_SIZE, "%d\n", val);
+}
+static DEVICE_ATTR_RO(cpu);
+
+static ssize_t traceid_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = etm_get_trace_id(drvdata);
+
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t traceid_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ drvdata->traceid = val & ETM_TRACEID_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(traceid);
+
+static struct attribute *coresight_etm_attrs[] = {
+ &dev_attr_nr_addr_cmp.attr,
+ &dev_attr_nr_cntr.attr,
+ &dev_attr_nr_ctxid_cmp.attr,
+ &dev_attr_etmsr.attr,
+ &dev_attr_reset.attr,
+ &dev_attr_mode.attr,
+ &dev_attr_trigger_event.attr,
+ &dev_attr_enable_event.attr,
+ &dev_attr_fifofull_level.attr,
+ &dev_attr_addr_idx.attr,
+ &dev_attr_addr_single.attr,
+ &dev_attr_addr_range.attr,
+ &dev_attr_addr_start.attr,
+ &dev_attr_addr_stop.attr,
+ &dev_attr_addr_acctype.attr,
+ &dev_attr_cntr_idx.attr,
+ &dev_attr_cntr_rld_val.attr,
+ &dev_attr_cntr_event.attr,
+ &dev_attr_cntr_rld_event.attr,
+ &dev_attr_cntr_val.attr,
+ &dev_attr_seq_12_event.attr,
+ &dev_attr_seq_21_event.attr,
+ &dev_attr_seq_23_event.attr,
+ &dev_attr_seq_31_event.attr,
+ &dev_attr_seq_32_event.attr,
+ &dev_attr_seq_13_event.attr,
+ &dev_attr_seq_curr_state.attr,
+ &dev_attr_ctxid_idx.attr,
+ &dev_attr_ctxid_pid.attr,
+ &dev_attr_ctxid_mask.attr,
+ &dev_attr_sync_freq.attr,
+ &dev_attr_timestamp_event.attr,
+ &dev_attr_traceid.attr,
+ &dev_attr_cpu.attr,
+ NULL,
+};
+
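+/*
+ * Generate a read-only sysfs attribute that dumps the raw value of the
+ * management register found at @offset.
+ */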
+#define coresight_simple_func(name, offset) \
+static ssize_t name##_show(struct device *_dev, \
+ struct device_attribute *attr, char *buf) \
+{ \
+ struct etm_drvdata *drvdata = dev_get_drvdata(_dev->parent); \
+ return scnprintf(buf, PAGE_SIZE, "0x%x\n", \
+ readl_relaxed(drvdata->base + offset)); \
+} \
+static DEVICE_ATTR_RO(name)
+
+coresight_simple_func(etmccr, ETMCCR);
+coresight_simple_func(etmccer, ETMCCER);
+coresight_simple_func(etmscr, ETMSCR);
+coresight_simple_func(etmidr, ETMIDR);
+coresight_simple_func(etmcr, ETMCR);
+coresight_simple_func(etmtraceidr, ETMTRACEIDR);
+coresight_simple_func(etmteevr, ETMTEEVR);
+coresight_simple_func(etmtssvr, ETMTSSCR);
+coresight_simple_func(etmtecr1, ETMTECR1);
+coresight_simple_func(etmtecr2, ETMTECR2);
+
+static struct attribute *coresight_etm_mgmt_attrs[] = {
+ &dev_attr_etmccr.attr,
+ &dev_attr_etmccer.attr,
+ &dev_attr_etmscr.attr,
+ &dev_attr_etmidr.attr,
+ &dev_attr_etmcr.attr,
+ &dev_attr_etmtraceidr.attr,
+ &dev_attr_etmteevr.attr,
+ &dev_attr_etmtssvr.attr,
+ &dev_attr_etmtecr1.attr,
+ &dev_attr_etmtecr2.attr,
+ NULL,
+};
+
+static const struct attribute_group coresight_etm_group = {
+ .attrs = coresight_etm_attrs,
+};
+
+static const struct attribute_group coresight_etm_mgmt_group = {
+ .attrs = coresight_etm_mgmt_attrs,
+ .name = "mgmt",
+};
+
+const struct attribute_group *coresight_etm_groups[] = {
+ &coresight_etm_group,
+ &coresight_etm_mgmt_group,
+ NULL,
+};
diff --git a/drivers/hwtracing/coresight/coresight-etm3x.c b/drivers/hwtracing/coresight/coresight-etm3x.c
index d630b7e..d83ab82 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x.c
@@ -1,5 +1,7 @@
/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
*
+ * Description: CoreSight Program Flow Trace driver
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
@@ -11,7 +13,7 @@
*/
#include <linux/kernel.h>
-#include <linux/module.h>
+#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/device.h>
@@ -27,14 +29,21 @@
#include <linux/cpu.h>
#include <linux/of.h>
#include <linux/coresight.h>
+#include <linux/coresight-pmu.h>
#include <linux/amba/bus.h>
#include <linux/seq_file.h>
#include <linux/uaccess.h>
#include <linux/clk.h>
+#include <linux/perf_event.h>
#include <asm/sections.h>
#include "coresight-etm.h"
+#include "coresight-etm-perf.h"
+/*
+ * Not really modular but using module_param is the easiest way to
+ * remain consistent with existing use cases for now.
+ */
static int boot_enable;
module_param_named(boot_enable, boot_enable, int, S_IRUGO);
@@ -42,45 +51,16 @@
static int etm_count;
static struct etm_drvdata *etmdrvdata[NR_CPUS];
-static inline void etm_writel(struct etm_drvdata *drvdata,
- u32 val, u32 off)
-{
- if (drvdata->use_cp14) {
- if (etm_writel_cp14(off, val)) {
- dev_err(drvdata->dev,
- "invalid CP14 access to ETM reg: %#x", off);
- }
- } else {
- writel_relaxed(val, drvdata->base + off);
- }
-}
-
-static inline unsigned int etm_readl(struct etm_drvdata *drvdata, u32 off)
-{
- u32 val;
-
- if (drvdata->use_cp14) {
- if (etm_readl_cp14(off, &val)) {
- dev_err(drvdata->dev,
- "invalid CP14 access to ETM reg: %#x", off);
- }
- } else {
- val = readl_relaxed(drvdata->base + off);
- }
-
- return val;
-}
-
/*
* Memory mapped writes to clear os lock are not supported on some processors
* and OS lock must be unlocked before any memory mapped access on such
* processors, otherwise memory mapped reads/writes will be invalid.
*/
-static void etm_os_unlock(void *info)
+static void etm_os_unlock(struct etm_drvdata *drvdata)
{
- struct etm_drvdata *drvdata = (struct etm_drvdata *)info;
/* Writing any value to ETMOSLAR unlocks the trace registers */
etm_writel(drvdata, 0x0, ETMOSLAR);
+ drvdata->os_unlock = true;
isb();
}
@@ -215,36 +195,156 @@
}
}
-static void etm_set_default(struct etm_drvdata *drvdata)
+void etm_set_default(struct etm_config *config)
{
int i;
- drvdata->trigger_event = ETM_DEFAULT_EVENT_VAL;
- drvdata->enable_event = ETM_HARD_WIRE_RES_A;
+ if (WARN_ON_ONCE(!config))
+ return;
- drvdata->seq_12_event = ETM_DEFAULT_EVENT_VAL;
- drvdata->seq_21_event = ETM_DEFAULT_EVENT_VAL;
- drvdata->seq_23_event = ETM_DEFAULT_EVENT_VAL;
- drvdata->seq_31_event = ETM_DEFAULT_EVENT_VAL;
- drvdata->seq_32_event = ETM_DEFAULT_EVENT_VAL;
- drvdata->seq_13_event = ETM_DEFAULT_EVENT_VAL;
- drvdata->timestamp_event = ETM_DEFAULT_EVENT_VAL;
+ /*
+ * Taken verbatim from the TRM:
+ *
+ * To trace all memory:
+ * set bit [24] in register 0x009, the ETMTECR1, to 1
+ * set all other bits in register 0x009, the ETMTECR1, to 0
+ * set all bits in register 0x007, the ETMTECR2, to 0
+ * set register 0x008, the ETMTEEVR, to 0x6F (TRUE).
+ */
+ config->enable_ctrl1 = BIT(24);
+ config->enable_ctrl2 = 0x0;
+ config->enable_event = ETM_HARD_WIRE_RES_A;
- for (i = 0; i < drvdata->nr_cntr; i++) {
- drvdata->cntr_rld_val[i] = 0x0;
- drvdata->cntr_event[i] = ETM_DEFAULT_EVENT_VAL;
- drvdata->cntr_rld_event[i] = ETM_DEFAULT_EVENT_VAL;
- drvdata->cntr_val[i] = 0x0;
+ config->trigger_event = ETM_DEFAULT_EVENT_VAL;
+
+ config->seq_12_event = ETM_DEFAULT_EVENT_VAL;
+ config->seq_21_event = ETM_DEFAULT_EVENT_VAL;
+ config->seq_23_event = ETM_DEFAULT_EVENT_VAL;
+ config->seq_31_event = ETM_DEFAULT_EVENT_VAL;
+ config->seq_32_event = ETM_DEFAULT_EVENT_VAL;
+ config->seq_13_event = ETM_DEFAULT_EVENT_VAL;
+ config->timestamp_event = ETM_DEFAULT_EVENT_VAL;
+
+ for (i = 0; i < ETM_MAX_CNTR; i++) {
+ config->cntr_rld_val[i] = 0x0;
+ config->cntr_event[i] = ETM_DEFAULT_EVENT_VAL;
+ config->cntr_rld_event[i] = ETM_DEFAULT_EVENT_VAL;
+ config->cntr_val[i] = 0x0;
}
- drvdata->seq_curr_state = 0x0;
- drvdata->ctxid_idx = 0x0;
- for (i = 0; i < drvdata->nr_ctxid_cmp; i++) {
- drvdata->ctxid_pid[i] = 0x0;
- drvdata->ctxid_vpid[i] = 0x0;
+ config->seq_curr_state = 0x0;
+ config->ctxid_idx = 0x0;
+ for (i = 0; i < ETM_MAX_CTXID_CMP; i++) {
+ config->ctxid_pid[i] = 0x0;
+ config->ctxid_vpid[i] = 0x0;
}
- drvdata->ctxid_mask = 0x0;
+ config->ctxid_mask = 0x0;
+}
+
+void etm_config_trace_mode(struct etm_config *config)
+{
+ u32 flags, mode;
+
+ mode = config->mode;
+
+ mode &= (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER);
+
+ /* excluding kernel AND user space doesn't make sense */
+ if (mode == (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER))
+ return;
+
+ /* nothing to do if neither flag is set */
+ if (!(mode & ETM_MODE_EXCL_KERN) && !(mode & ETM_MODE_EXCL_USER))
+ return;
+
+ flags = (1 << 0 | /* instruction execute */
+ 3 << 3 | /* ARM instruction */
+ 0 << 5 | /* No data value comparison */
+ 0 << 7 | /* No exact match */
+ 0 << 8); /* Ignore context ID */
+
+ /* No need to worry about single address comparators. */
+ config->enable_ctrl2 = 0x0;
+
+ /* Bit 0 is address range comparator 1 */
+ config->enable_ctrl1 = ETMTECR1_ADDR_COMP_1;
+
+ /*
+ * On ETMv3.5:
+ * ETMACTRn[13,11] == Non-secure state comparison control
+ * ETMACTRn[12,10] == Secure state comparison control
+ *
+ * b00 == Match in all modes in this state
+ * b01 == Do not match in any mode in this state
+ * b10 == Match in all modes except user mode in this state
+ * b11 == Match only in user mode in this state
+ */
+
+ /* Tracing in secure mode is not supported at this time */
+ flags |= (0 << 12 | 1 << 10);
+
+ if (mode & ETM_MODE_EXCL_USER) {
+ /* exclude user, match all modes except user mode */
+ flags |= (1 << 13 | 0 << 11);
+ } else {
+ /* exclude kernel, match only in user mode */
+ flags |= (1 << 13 | 1 << 11);
+ }
+
+ /*
+ * The ETMTEEVR register is already set to "hard wire A". As such
+ * all there is to do is set up an address comparator that spans
+ * the entire address range and configure the state and mode bits.
+ */
+ config->addr_val[0] = (u32) 0x0;
+ config->addr_val[1] = (u32) ~0x0;
+ config->addr_acctype[0] = flags;
+ config->addr_acctype[1] = flags;
+ config->addr_type[0] = ETM_ADDR_TYPE_RANGE;
+ config->addr_type[1] = ETM_ADDR_TYPE_RANGE;
+}
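For concreteness, a worked example of the access-type value built above for an exclude-kernel session (a sketch; the constants simply follow the shifts in the code):

    /* mode = ETM_MODE_EXCL_KERN:
     *   base       = 1 << 0  | 3 << 3          = 0x0019 (instr execute, ARM ISA)
     *   secure     = 0 << 12 | 1 << 10         = 0x0400 (never match in secure state)
     *   non-secure = 1 << 13 | 1 << 11         = 0x2800 (match only in user mode)
     *   flags      = 0x0019 | 0x0400 | 0x2800  = 0x2c19 -> ETMACTR0/ETMACTR1
     */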
+
+#define ETM3X_SUPPORTED_OPTIONS (ETMCR_CYC_ACC | ETMCR_TIMESTAMP_EN)
+
+static int etm_parse_event_config(struct etm_drvdata *drvdata,
+ struct perf_event_attr *attr)
+{
+ struct etm_config *config = &drvdata->config;
+
+ if (!attr)
+ return -EINVAL;
+
+ /* Clear configuration from previous run */
+ memset(config, 0, sizeof(struct etm_config));
+
+ if (attr->exclude_kernel)
+ config->mode = ETM_MODE_EXCL_KERN;
+
+ if (attr->exclude_user)
+ config->mode = ETM_MODE_EXCL_USER;
+
+ /* Always start from the default config */
+ etm_set_default(config);
+
+ /*
+ * By default the tracers are configured to trace the whole address
+ * range. Narrow the field only if requested by user space.
+ */
+ if (config->mode)
+ etm_config_trace_mode(config);
+
+ /*
+ * At this time only cycle accurate and timestamp options are
+ * available.
+ */
+ if (attr->config & ~ETM3X_SUPPORTED_OPTIONS)
+ return -EINVAL;
+
+ config->ctrl = attr->config;
+
+ return 0;
}
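A hypothetical caller, to illustrate the attr-to-config mapping (sketch only, assuming a valid drvdata in scope; in practice the perf core reaches this through etm_enable_perf()):

    struct perf_event_attr attr = {};

    attr.exclude_kernel = 1;     /* -> config->mode = ETM_MODE_EXCL_KERN */
    attr.config = ETMCR_CYC_ACC; /* OK: within ETM3X_SUPPORTED_OPTIONS */
    if (etm_parse_event_config(drvdata, &attr))
            ; /* -EINVAL: an option outside cycle-accurate/timestamp was set */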
static void etm_enable_hw(void *info)
@@ -252,6 +352,7 @@
int i;
u32 etmcr;
struct etm_drvdata *drvdata = info;
+ struct etm_config *config = &drvdata->config;
CS_UNLOCK(drvdata->base);
@@ -265,65 +366,74 @@
etm_set_prog(drvdata);
etmcr = etm_readl(drvdata, ETMCR);
- etmcr &= (ETMCR_PWD_DWN | ETMCR_ETM_PRG);
+ /* Clear settings from a previous run if need be */
+ etmcr &= ~ETM3X_SUPPORTED_OPTIONS;
etmcr |= drvdata->port_size;
- etm_writel(drvdata, drvdata->ctrl | etmcr, ETMCR);
- etm_writel(drvdata, drvdata->trigger_event, ETMTRIGGER);
- etm_writel(drvdata, drvdata->startstop_ctrl, ETMTSSCR);
- etm_writel(drvdata, drvdata->enable_event, ETMTEEVR);
- etm_writel(drvdata, drvdata->enable_ctrl1, ETMTECR1);
- etm_writel(drvdata, drvdata->fifofull_level, ETMFFLR);
+ etmcr |= ETMCR_ETM_EN;
+ etm_writel(drvdata, config->ctrl | etmcr, ETMCR);
+ etm_writel(drvdata, config->trigger_event, ETMTRIGGER);
+ etm_writel(drvdata, config->startstop_ctrl, ETMTSSCR);
+ etm_writel(drvdata, config->enable_event, ETMTEEVR);
+ etm_writel(drvdata, config->enable_ctrl1, ETMTECR1);
+ etm_writel(drvdata, config->fifofull_level, ETMFFLR);
for (i = 0; i < drvdata->nr_addr_cmp; i++) {
- etm_writel(drvdata, drvdata->addr_val[i], ETMACVRn(i));
- etm_writel(drvdata, drvdata->addr_acctype[i], ETMACTRn(i));
+ etm_writel(drvdata, config->addr_val[i], ETMACVRn(i));
+ etm_writel(drvdata, config->addr_acctype[i], ETMACTRn(i));
}
for (i = 0; i < drvdata->nr_cntr; i++) {
- etm_writel(drvdata, drvdata->cntr_rld_val[i], ETMCNTRLDVRn(i));
- etm_writel(drvdata, drvdata->cntr_event[i], ETMCNTENRn(i));
- etm_writel(drvdata, drvdata->cntr_rld_event[i],
+ etm_writel(drvdata, config->cntr_rld_val[i], ETMCNTRLDVRn(i));
+ etm_writel(drvdata, config->cntr_event[i], ETMCNTENRn(i));
+ etm_writel(drvdata, config->cntr_rld_event[i],
ETMCNTRLDEVRn(i));
- etm_writel(drvdata, drvdata->cntr_val[i], ETMCNTVRn(i));
+ etm_writel(drvdata, config->cntr_val[i], ETMCNTVRn(i));
}
- etm_writel(drvdata, drvdata->seq_12_event, ETMSQ12EVR);
- etm_writel(drvdata, drvdata->seq_21_event, ETMSQ21EVR);
- etm_writel(drvdata, drvdata->seq_23_event, ETMSQ23EVR);
- etm_writel(drvdata, drvdata->seq_31_event, ETMSQ31EVR);
- etm_writel(drvdata, drvdata->seq_32_event, ETMSQ32EVR);
- etm_writel(drvdata, drvdata->seq_13_event, ETMSQ13EVR);
- etm_writel(drvdata, drvdata->seq_curr_state, ETMSQR);
+ etm_writel(drvdata, config->seq_12_event, ETMSQ12EVR);
+ etm_writel(drvdata, config->seq_21_event, ETMSQ21EVR);
+ etm_writel(drvdata, config->seq_23_event, ETMSQ23EVR);
+ etm_writel(drvdata, config->seq_31_event, ETMSQ31EVR);
+ etm_writel(drvdata, config->seq_32_event, ETMSQ32EVR);
+ etm_writel(drvdata, config->seq_13_event, ETMSQ13EVR);
+ etm_writel(drvdata, config->seq_curr_state, ETMSQR);
for (i = 0; i < drvdata->nr_ext_out; i++)
etm_writel(drvdata, ETM_DEFAULT_EVENT_VAL, ETMEXTOUTEVRn(i));
for (i = 0; i < drvdata->nr_ctxid_cmp; i++)
- etm_writel(drvdata, drvdata->ctxid_pid[i], ETMCIDCVRn(i));
- etm_writel(drvdata, drvdata->ctxid_mask, ETMCIDCMR);
- etm_writel(drvdata, drvdata->sync_freq, ETMSYNCFR);
+ etm_writel(drvdata, config->ctxid_pid[i], ETMCIDCVRn(i));
+ etm_writel(drvdata, config->ctxid_mask, ETMCIDCMR);
+ etm_writel(drvdata, config->sync_freq, ETMSYNCFR);
/* No external input selected */
etm_writel(drvdata, 0x0, ETMEXTINSELR);
- etm_writel(drvdata, drvdata->timestamp_event, ETMTSEVR);
+ etm_writel(drvdata, config->timestamp_event, ETMTSEVR);
/* No auxiliary control selected */
etm_writel(drvdata, 0x0, ETMAUXCR);
etm_writel(drvdata, drvdata->traceid, ETMTRACEIDR);
/* No VMID comparator value selected */
etm_writel(drvdata, 0x0, ETMVMIDCVR);
- /* Ensures trace output is enabled from this ETM */
- etm_writel(drvdata, drvdata->ctrl | ETMCR_ETM_EN | etmcr, ETMCR);
-
etm_clr_prog(drvdata);
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu);
}
-static int etm_trace_id(struct coresight_device *csdev)
+static int etm_cpu_id(struct coresight_device *csdev)
{
struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ return drvdata->cpu;
+}
+
+int etm_get_trace_id(struct etm_drvdata *drvdata)
+{
unsigned long flags;
int trace_id = -1;
- if (!drvdata->enable)
+ if (!drvdata)
+ goto out;
+
+ if (!local_read(&drvdata->mode))
return drvdata->traceid;
- pm_runtime_get_sync(csdev->dev.parent);
+
+ pm_runtime_get_sync(drvdata->dev);
spin_lock_irqsave(&drvdata->spinlock, flags);
@@ -332,17 +442,41 @@
CS_LOCK(drvdata->base);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(csdev->dev.parent);
+ pm_runtime_put(drvdata->dev);
+out:
return trace_id;
}
-static int etm_enable(struct coresight_device *csdev)
+static int etm_trace_id(struct coresight_device *csdev)
+{
+ struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ return etm_get_trace_id(drvdata);
+}
+
+static int etm_enable_perf(struct coresight_device *csdev,
+ struct perf_event_attr *attr)
+{
+ struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
+ return -EINVAL;
+
+ /* Configure the tracer based on the session's specifics */
+ etm_parse_event_config(drvdata, attr);
+ /* And enable it */
+ etm_enable_hw(drvdata);
+
+ return 0;
+}
+
+static int etm_enable_sysfs(struct coresight_device *csdev)
{
struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
int ret;
- pm_runtime_get_sync(csdev->dev.parent);
spin_lock(&drvdata->spinlock);
/*
@@ -357,16 +491,45 @@
goto err;
}
- drvdata->enable = true;
drvdata->sticky_enable = true;
-
spin_unlock(&drvdata->spinlock);
dev_info(drvdata->dev, "ETM tracing enabled\n");
return 0;
+
err:
spin_unlock(&drvdata->spinlock);
- pm_runtime_put(csdev->dev.parent);
+ return ret;
+}
+
+static int etm_enable(struct coresight_device *csdev,
+ struct perf_event_attr *attr, u32 mode)
+{
+ int ret;
+ u32 val;
+ struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ val = local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, mode);
+
+ /* Someone is already using the tracer */
+ if (val)
+ return -EBUSY;
+
+ switch (mode) {
+ case CS_MODE_SYSFS:
+ ret = etm_enable_sysfs(csdev);
+ break;
+ case CS_MODE_PERF:
+ ret = etm_enable_perf(csdev, attr);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ /* The tracer didn't start */
+ if (ret)
+ local_set(&drvdata->mode, CS_MODE_DISABLED);
+
return ret;
}
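The local_cmpxchg() is what keeps the two interfaces mutually exclusive: whoever moves mode away from CS_MODE_DISABLED owns the tracer until it is put back. A self-contained userspace analogue using C11 atomics, for illustration only:

    #include <stdatomic.h>

    enum { DISABLED, SYSFS, PERF };
    static _Atomic int mode = DISABLED;

    static int claim(int new_mode)
    {
            int expected = DISABLED;

            /* only one caller wins the transition away from DISABLED */
            if (!atomic_compare_exchange_strong(&mode, &expected, new_mode))
                    return -1; /* -EBUSY: tracer already owned */
            return 0;
    }

    static void release(void)
    {
            atomic_store(&mode, DISABLED); /* mirrors local_set() on error/disable */
    }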
@@ -374,18 +537,16 @@
{
int i;
struct etm_drvdata *drvdata = info;
+ struct etm_config *config = &drvdata->config;
CS_UNLOCK(drvdata->base);
etm_set_prog(drvdata);
- /* Program trace enable to low by using always false event */
- etm_writel(drvdata, ETM_HARD_WIRE_RES_A | ETM_EVENT_NOT_A, ETMTEEVR);
-
/* Read back sequencer and counters for post trace analysis */
- drvdata->seq_curr_state = (etm_readl(drvdata, ETMSQR) & ETM_SQR_MASK);
+ config->seq_curr_state = (etm_readl(drvdata, ETMSQR) & ETM_SQR_MASK);
for (i = 0; i < drvdata->nr_cntr; i++)
- drvdata->cntr_val[i] = etm_readl(drvdata, ETMCNTVRn(i));
+ config->cntr_val[i] = etm_readl(drvdata, ETMCNTVRn(i));
etm_set_pwrdwn(drvdata);
CS_LOCK(drvdata->base);
@@ -393,7 +554,28 @@
dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
}
-static void etm_disable(struct coresight_device *csdev)
+static void etm_disable_perf(struct coresight_device *csdev)
+{
+ struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
+ return;
+
+ CS_UNLOCK(drvdata->base);
+
+ /* Setting the prog bit disables tracing immediately */
+ etm_set_prog(drvdata);
+
+ /*
+ * There is no way to know when the tracer will be used again, so
+ * power it down.
+ */
+ etm_set_pwrdwn(drvdata);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void etm_disable_sysfs(struct coresight_device *csdev)
{
struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
@@ -411,16 +593,45 @@
* ensures that register writes occur when cpu is powered.
*/
smp_call_function_single(drvdata->cpu, etm_disable_hw, drvdata, 1);
- drvdata->enable = false;
spin_unlock(&drvdata->spinlock);
put_online_cpus();
- pm_runtime_put(csdev->dev.parent);
dev_info(drvdata->dev, "ETM tracing disabled\n");
}
+static void etm_disable(struct coresight_device *csdev)
+{
+ u32 mode;
+ struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ /*
+ * For as long as the tracer isn't disabled, no other entity can
+ * change its status. As such we can read the status here without
+ * fearing it will change under us.
+ */
+ mode = local_read(&drvdata->mode);
+
+ switch (mode) {
+ case CS_MODE_DISABLED:
+ break;
+ case CS_MODE_SYSFS:
+ etm_disable_sysfs(csdev);
+ break;
+ case CS_MODE_PERF:
+ etm_disable_perf(csdev);
+ break;
+ default:
+ WARN_ON_ONCE(mode);
+ return;
+ }
+
+ if (mode)
+ local_set(&drvdata->mode, CS_MODE_DISABLED);
+}
+
static const struct coresight_ops_source etm_source_ops = {
+ .cpu_id = etm_cpu_id,
.trace_id = etm_trace_id,
.enable = etm_enable,
.disable = etm_disable,
@@ -430,1218 +641,6 @@
.source_ops = &etm_source_ops,
};
-static ssize_t nr_addr_cmp_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_addr_cmp;
- return sprintf(buf, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_addr_cmp);
-
-static ssize_t nr_cntr_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_cntr;
- return sprintf(buf, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_cntr);
-
-static ssize_t nr_ctxid_cmp_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_ctxid_cmp;
- return sprintf(buf, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_ctxid_cmp);
-
-static ssize_t etmsr_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long flags, val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- pm_runtime_get_sync(drvdata->dev);
- spin_lock_irqsave(&drvdata->spinlock, flags);
- CS_UNLOCK(drvdata->base);
-
- val = etm_readl(drvdata, ETMSR);
-
- CS_LOCK(drvdata->base);
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(drvdata->dev);
-
- return sprintf(buf, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(etmsr);
-
-static ssize_t reset_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int i, ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- if (val) {
- spin_lock(&drvdata->spinlock);
- drvdata->mode = ETM_MODE_EXCLUDE;
- drvdata->ctrl = 0x0;
- drvdata->trigger_event = ETM_DEFAULT_EVENT_VAL;
- drvdata->startstop_ctrl = 0x0;
- drvdata->addr_idx = 0x0;
- for (i = 0; i < drvdata->nr_addr_cmp; i++) {
- drvdata->addr_val[i] = 0x0;
- drvdata->addr_acctype[i] = 0x0;
- drvdata->addr_type[i] = ETM_ADDR_TYPE_NONE;
- }
- drvdata->cntr_idx = 0x0;
-
- etm_set_default(drvdata);
- spin_unlock(&drvdata->spinlock);
- }
-
- return size;
-}
-static DEVICE_ATTR_WO(reset);
-
-static ssize_t mode_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->mode;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t mode_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- drvdata->mode = val & ETM_MODE_ALL;
-
- if (drvdata->mode & ETM_MODE_EXCLUDE)
- drvdata->enable_ctrl1 |= ETMTECR1_INC_EXC;
- else
- drvdata->enable_ctrl1 &= ~ETMTECR1_INC_EXC;
-
- if (drvdata->mode & ETM_MODE_CYCACC)
- drvdata->ctrl |= ETMCR_CYC_ACC;
- else
- drvdata->ctrl &= ~ETMCR_CYC_ACC;
-
- if (drvdata->mode & ETM_MODE_STALL) {
- if (!(drvdata->etmccr & ETMCCR_FIFOFULL)) {
- dev_warn(drvdata->dev, "stall mode not supported\n");
- ret = -EINVAL;
- goto err_unlock;
- }
- drvdata->ctrl |= ETMCR_STALL_MODE;
- } else
- drvdata->ctrl &= ~ETMCR_STALL_MODE;
-
- if (drvdata->mode & ETM_MODE_TIMESTAMP) {
- if (!(drvdata->etmccer & ETMCCER_TIMESTAMP)) {
- dev_warn(drvdata->dev, "timestamp not supported\n");
- ret = -EINVAL;
- goto err_unlock;
- }
- drvdata->ctrl |= ETMCR_TIMESTAMP_EN;
- } else
- drvdata->ctrl &= ~ETMCR_TIMESTAMP_EN;
-
- if (drvdata->mode & ETM_MODE_CTXID)
- drvdata->ctrl |= ETMCR_CTXID_SIZE;
- else
- drvdata->ctrl &= ~ETMCR_CTXID_SIZE;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-
-err_unlock:
- spin_unlock(&drvdata->spinlock);
- return ret;
-}
-static DEVICE_ATTR_RW(mode);
-
-static ssize_t trigger_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->trigger_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t trigger_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->trigger_event = val & ETM_EVENT_MASK;
-
- return size;
-}
-static DEVICE_ATTR_RW(trigger_event);
-
-static ssize_t enable_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->enable_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t enable_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->enable_event = val & ETM_EVENT_MASK;
-
- return size;
-}
-static DEVICE_ATTR_RW(enable_event);
-
-static ssize_t fifofull_level_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->fifofull_level;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t fifofull_level_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->fifofull_level = val;
-
- return size;
-}
-static DEVICE_ATTR_RW(fifofull_level);
-
-static ssize_t addr_idx_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->addr_idx;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t addr_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- if (val >= drvdata->nr_addr_cmp)
- return -EINVAL;
-
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->addr_idx = val;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(addr_idx);
-
-static ssize_t addr_single_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
- spin_unlock(&drvdata->spinlock);
- return -EINVAL;
- }
-
- val = drvdata->addr_val[idx];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t addr_single_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
- spin_unlock(&drvdata->spinlock);
- return -EINVAL;
- }
-
- drvdata->addr_val[idx] = val;
- drvdata->addr_type[idx] = ETM_ADDR_TYPE_SINGLE;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(addr_single);
-
-static ssize_t addr_range_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- u8 idx;
- unsigned long val1, val2;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (idx % 2 != 0) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
- if (!((drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
- drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
- (drvdata->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
- drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- val1 = drvdata->addr_val[idx];
- val2 = drvdata->addr_val[idx + 1];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx %#lx\n", val1, val2);
-}
-
-static ssize_t addr_range_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val1, val2;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
- return -EINVAL;
- /* Lower address comparator cannot have a higher address value */
- if (val1 > val2)
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (idx % 2 != 0) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
- if (!((drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
- drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
- (drvdata->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
- drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- drvdata->addr_val[idx] = val1;
- drvdata->addr_type[idx] = ETM_ADDR_TYPE_RANGE;
- drvdata->addr_val[idx + 1] = val2;
- drvdata->addr_type[idx + 1] = ETM_ADDR_TYPE_RANGE;
- drvdata->enable_ctrl1 |= (1 << (idx/2));
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(addr_range);
-
-static ssize_t addr_start_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_START)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- val = drvdata->addr_val[idx];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t addr_start_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_START)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- drvdata->addr_val[idx] = val;
- drvdata->addr_type[idx] = ETM_ADDR_TYPE_START;
- drvdata->startstop_ctrl |= (1 << idx);
- drvdata->enable_ctrl1 |= BIT(25);
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(addr_start);
-
-static ssize_t addr_stop_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- val = drvdata->addr_val[idx];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t addr_stop_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- drvdata->addr_val[idx] = val;
- drvdata->addr_type[idx] = ETM_ADDR_TYPE_STOP;
- drvdata->startstop_ctrl |= (1 << (idx + 16));
- drvdata->enable_ctrl1 |= ETMTECR1_START_STOP;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(addr_stop);
-
-static ssize_t addr_acctype_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- val = drvdata->addr_acctype[drvdata->addr_idx];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t addr_acctype_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- drvdata->addr_acctype[drvdata->addr_idx] = val;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(addr_acctype);
-
-static ssize_t cntr_idx_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->cntr_idx;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t cntr_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- if (val >= drvdata->nr_cntr)
- return -EINVAL;
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->cntr_idx = val;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(cntr_idx);
-
-static ssize_t cntr_rld_val_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- val = drvdata->cntr_rld_val[drvdata->cntr_idx];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t cntr_rld_val_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- drvdata->cntr_rld_val[drvdata->cntr_idx] = val;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(cntr_rld_val);
-
-static ssize_t cntr_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- val = drvdata->cntr_event[drvdata->cntr_idx];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t cntr_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- drvdata->cntr_event[drvdata->cntr_idx] = val & ETM_EVENT_MASK;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(cntr_event);
-
-static ssize_t cntr_rld_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- val = drvdata->cntr_rld_event[drvdata->cntr_idx];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t cntr_rld_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- drvdata->cntr_rld_event[drvdata->cntr_idx] = val & ETM_EVENT_MASK;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(cntr_rld_event);
-
-static ssize_t cntr_val_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- int i, ret = 0;
- u32 val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (!drvdata->enable) {
- spin_lock(&drvdata->spinlock);
- for (i = 0; i < drvdata->nr_cntr; i++)
- ret += sprintf(buf, "counter %d: %x\n",
- i, drvdata->cntr_val[i]);
- spin_unlock(&drvdata->spinlock);
- return ret;
- }
-
- for (i = 0; i < drvdata->nr_cntr; i++) {
- val = etm_readl(drvdata, ETMCNTVRn(i));
- ret += sprintf(buf, "counter %d: %x\n", i, val);
- }
-
- return ret;
-}
-
-static ssize_t cntr_val_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- spin_lock(&drvdata->spinlock);
- drvdata->cntr_val[drvdata->cntr_idx] = val;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(cntr_val);
-
-static ssize_t seq_12_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->seq_12_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t seq_12_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->seq_12_event = val & ETM_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(seq_12_event);
-
-static ssize_t seq_21_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->seq_21_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t seq_21_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->seq_21_event = val & ETM_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(seq_21_event);
-
-static ssize_t seq_23_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->seq_23_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t seq_23_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->seq_23_event = val & ETM_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(seq_23_event);
-
-static ssize_t seq_31_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->seq_31_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t seq_31_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->seq_31_event = val & ETM_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(seq_31_event);
-
-static ssize_t seq_32_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->seq_32_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t seq_32_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->seq_32_event = val & ETM_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(seq_32_event);
-
-static ssize_t seq_13_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->seq_13_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t seq_13_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->seq_13_event = val & ETM_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(seq_13_event);
-
-static ssize_t seq_curr_state_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val, flags;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (!drvdata->enable) {
- val = drvdata->seq_curr_state;
- goto out;
- }
-
- pm_runtime_get_sync(drvdata->dev);
- spin_lock_irqsave(&drvdata->spinlock, flags);
-
- CS_UNLOCK(drvdata->base);
- val = (etm_readl(drvdata, ETMSQR) & ETM_SQR_MASK);
- CS_LOCK(drvdata->base);
-
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(drvdata->dev);
-out:
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t seq_curr_state_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- if (val > ETM_SEQ_STATE_MAX_VAL)
- return -EINVAL;
-
- drvdata->seq_curr_state = val;
-
- return size;
-}
-static DEVICE_ATTR_RW(seq_curr_state);
-
-static ssize_t ctxid_idx_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->ctxid_idx;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t ctxid_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- if (val >= drvdata->nr_ctxid_cmp)
- return -EINVAL;
-
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->ctxid_idx = val;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(ctxid_idx);
-
-static ssize_t ctxid_pid_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- val = drvdata->ctxid_vpid[drvdata->ctxid_idx];
- spin_unlock(&drvdata->spinlock);
-
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t ctxid_pid_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long vpid, pid;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &vpid);
- if (ret)
- return ret;
-
- pid = coresight_vpid_to_pid(vpid);
-
- spin_lock(&drvdata->spinlock);
- drvdata->ctxid_pid[drvdata->ctxid_idx] = pid;
- drvdata->ctxid_vpid[drvdata->ctxid_idx] = vpid;
- spin_unlock(&drvdata->spinlock);
-
- return size;
-}
-static DEVICE_ATTR_RW(ctxid_pid);
-
-static ssize_t ctxid_mask_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->ctxid_mask;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t ctxid_mask_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->ctxid_mask = val;
- return size;
-}
-static DEVICE_ATTR_RW(ctxid_mask);
-
-static ssize_t sync_freq_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->sync_freq;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t sync_freq_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->sync_freq = val & ETM_SYNC_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(sync_freq);
-
-static ssize_t timestamp_event_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->timestamp_event;
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t timestamp_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->timestamp_event = val & ETM_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(timestamp_event);
-
-static ssize_t cpu_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- int val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->cpu;
- return scnprintf(buf, PAGE_SIZE, "%d\n", val);
-
-}
-static DEVICE_ATTR_RO(cpu);
-
-static ssize_t traceid_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long val, flags;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (!drvdata->enable) {
- val = drvdata->traceid;
- goto out;
- }
-
- pm_runtime_get_sync(drvdata->dev);
- spin_lock_irqsave(&drvdata->spinlock, flags);
- CS_UNLOCK(drvdata->base);
-
- val = (etm_readl(drvdata, ETMTRACEIDR) & ETM_TRACEID_MASK);
-
- CS_LOCK(drvdata->base);
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(drvdata->dev);
-out:
- return sprintf(buf, "%#lx\n", val);
-}
-
-static ssize_t traceid_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int ret;
- unsigned long val;
- struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- ret = kstrtoul(buf, 16, &val);
- if (ret)
- return ret;
-
- drvdata->traceid = val & ETM_TRACEID_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(traceid);
-
-static struct attribute *coresight_etm_attrs[] = {
- &dev_attr_nr_addr_cmp.attr,
- &dev_attr_nr_cntr.attr,
- &dev_attr_nr_ctxid_cmp.attr,
- &dev_attr_etmsr.attr,
- &dev_attr_reset.attr,
- &dev_attr_mode.attr,
- &dev_attr_trigger_event.attr,
- &dev_attr_enable_event.attr,
- &dev_attr_fifofull_level.attr,
- &dev_attr_addr_idx.attr,
- &dev_attr_addr_single.attr,
- &dev_attr_addr_range.attr,
- &dev_attr_addr_start.attr,
- &dev_attr_addr_stop.attr,
- &dev_attr_addr_acctype.attr,
- &dev_attr_cntr_idx.attr,
- &dev_attr_cntr_rld_val.attr,
- &dev_attr_cntr_event.attr,
- &dev_attr_cntr_rld_event.attr,
- &dev_attr_cntr_val.attr,
- &dev_attr_seq_12_event.attr,
- &dev_attr_seq_21_event.attr,
- &dev_attr_seq_23_event.attr,
- &dev_attr_seq_31_event.attr,
- &dev_attr_seq_32_event.attr,
- &dev_attr_seq_13_event.attr,
- &dev_attr_seq_curr_state.attr,
- &dev_attr_ctxid_idx.attr,
- &dev_attr_ctxid_pid.attr,
- &dev_attr_ctxid_mask.attr,
- &dev_attr_sync_freq.attr,
- &dev_attr_timestamp_event.attr,
- &dev_attr_traceid.attr,
- &dev_attr_cpu.attr,
- NULL,
-};
-
-#define coresight_simple_func(name, offset) \
-static ssize_t name##_show(struct device *_dev, \
- struct device_attribute *attr, char *buf) \
-{ \
- struct etm_drvdata *drvdata = dev_get_drvdata(_dev->parent); \
- return scnprintf(buf, PAGE_SIZE, "0x%x\n", \
- readl_relaxed(drvdata->base + offset)); \
-} \
-DEVICE_ATTR_RO(name)
-
-coresight_simple_func(etmccr, ETMCCR);
-coresight_simple_func(etmccer, ETMCCER);
-coresight_simple_func(etmscr, ETMSCR);
-coresight_simple_func(etmidr, ETMIDR);
-coresight_simple_func(etmcr, ETMCR);
-coresight_simple_func(etmtraceidr, ETMTRACEIDR);
-coresight_simple_func(etmteevr, ETMTEEVR);
-coresight_simple_func(etmtssvr, ETMTSSCR);
-coresight_simple_func(etmtecr1, ETMTECR1);
-coresight_simple_func(etmtecr2, ETMTECR2);
-
-static struct attribute *coresight_etm_mgmt_attrs[] = {
- &dev_attr_etmccr.attr,
- &dev_attr_etmccer.attr,
- &dev_attr_etmscr.attr,
- &dev_attr_etmidr.attr,
- &dev_attr_etmcr.attr,
- &dev_attr_etmtraceidr.attr,
- &dev_attr_etmteevr.attr,
- &dev_attr_etmtssvr.attr,
- &dev_attr_etmtecr1.attr,
- &dev_attr_etmtecr2.attr,
- NULL,
-};
-
-static const struct attribute_group coresight_etm_group = {
- .attrs = coresight_etm_attrs,
-};
-
-
-static const struct attribute_group coresight_etm_mgmt_group = {
- .attrs = coresight_etm_mgmt_attrs,
- .name = "mgmt",
-};
-
-static const struct attribute_group *coresight_etm_groups[] = {
- &coresight_etm_group,
- &coresight_etm_mgmt_group,
- NULL,
-};
-
static int etm_cpu_callback(struct notifier_block *nfb, unsigned long action,
void *hcpu)
{
@@ -1658,7 +657,7 @@
etmdrvdata[cpu]->os_unlock = true;
}
- if (etmdrvdata[cpu]->enable)
+ if (local_read(&etmdrvdata[cpu]->mode))
etm_enable_hw(etmdrvdata[cpu]);
spin_unlock(&etmdrvdata[cpu]->spinlock);
break;
@@ -1671,7 +670,7 @@
case CPU_DYING:
spin_lock(&etmdrvdata[cpu]->spinlock);
- if (etmdrvdata[cpu]->enable)
+ if (local_read(&etmdrvdata[cpu]->mode))
etm_disable_hw(etmdrvdata[cpu]);
spin_unlock(&etmdrvdata[cpu]->spinlock);
break;
@@ -1707,6 +706,9 @@
u32 etmccr;
struct etm_drvdata *drvdata = info;
+ /* Make sure all registers are accessible */
+ etm_os_unlock(drvdata);
+
CS_UNLOCK(drvdata->base);
/* First dummy read */
@@ -1743,40 +745,9 @@
CS_LOCK(drvdata->base);
}
-static void etm_init_default_data(struct etm_drvdata *drvdata)
+static void etm_init_trace_id(struct etm_drvdata *drvdata)
{
- /*
- * A trace ID of value 0 is invalid, so let's start at some
- * random value that fits in 7 bits and will be just as good.
- */
- static int etm3x_traceid = 0x10;
-
- u32 flags = (1 << 0 | /* instruction execute*/
- 3 << 3 | /* ARM instruction */
- 0 << 5 | /* No data value comparison */
- 0 << 7 | /* No exact mach */
- 0 << 8 | /* Ignore context ID */
- 0 << 10); /* Security ignored */
-
- /*
- * Initial configuration only - guarantees sources handled by
- * this driver have a unique ID at startup time but not between
- * all other types of sources. For that we lean on the core
- * framework.
- */
- drvdata->traceid = etm3x_traceid++;
- drvdata->ctrl = (ETMCR_CYC_ACC | ETMCR_TIMESTAMP_EN);
- drvdata->enable_ctrl1 = ETMTECR1_ADDR_COMP_1;
- if (drvdata->nr_addr_cmp >= 2) {
- drvdata->addr_val[0] = (u32) _stext;
- drvdata->addr_val[1] = (u32) _etext;
- drvdata->addr_acctype[0] = flags;
- drvdata->addr_acctype[1] = flags;
- drvdata->addr_type[0] = ETM_ADDR_TYPE_RANGE;
- drvdata->addr_type[1] = ETM_ADDR_TYPE_RANGE;
- }
-
- etm_set_default(drvdata);
+ drvdata->traceid = coresight_get_trace_id(drvdata->cpu);
}
static int etm_probe(struct amba_device *adev, const struct amba_id *id)
@@ -1831,9 +802,6 @@
get_online_cpus();
etmdrvdata[drvdata->cpu] = drvdata;
- if (!smp_call_function_single(drvdata->cpu, etm_os_unlock, drvdata, 1))
- drvdata->os_unlock = true;
-
if (smp_call_function_single(drvdata->cpu,
etm_init_arch_data, drvdata, 1))
dev_err(dev, "ETM arch init failed\n");
@@ -1847,7 +815,9 @@
ret = -EINVAL;
goto err_arch_supported;
}
- etm_init_default_data(drvdata);
+
+ etm_init_trace_id(drvdata);
+ etm_set_default(&drvdata->config);
desc->type = CORESIGHT_DEV_TYPE_SOURCE;
desc->subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC;
@@ -1861,6 +831,12 @@
goto err_arch_supported;
}
+ ret = etm_perf_symlink(drvdata->csdev, true);
+ if (ret) {
+ coresight_unregister(drvdata->csdev);
+ goto err_arch_supported;
+ }
+
pm_runtime_put(&adev->dev);
dev_info(dev, "%s initialized\n", (char *)id->data);
@@ -1877,17 +853,6 @@
return ret;
}
-static int etm_remove(struct amba_device *adev)
-{
- struct etm_drvdata *drvdata = amba_get_drvdata(adev);
-
- coresight_unregister(drvdata->csdev);
- if (--etm_count == 0)
- unregister_hotcpu_notifier(&etm_cpu_notifier);
-
- return 0;
-}
-
#ifdef CONFIG_PM
static int etm_runtime_suspend(struct device *dev)
{
@@ -1948,13 +913,9 @@
.name = "coresight-etm3x",
.owner = THIS_MODULE,
.pm = &etm_dev_pm_ops,
+ .suppress_bind_attrs = true,
},
.probe = etm_probe,
- .remove = etm_remove,
.id_table = etm_ids,
};
-
-module_amba_driver(etm_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("CoreSight Program Flow Trace driver");
+builtin_amba_driver(etm_driver);
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
index a670764..1c59bd3 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x.c
@@ -15,7 +15,6 @@
#include <linux/init.h>
#include <linux/types.h>
#include <linux/device.h>
-#include <linux/module.h>
#include <linux/io.h>
#include <linux/err.h>
#include <linux/fs.h>
@@ -32,6 +31,7 @@
#include <linux/seq_file.h>
#include <linux/uaccess.h>
#include <linux/pm_runtime.h>
+#include <linux/perf_event.h>
#include <asm/sections.h>
#include "coresight-etm4x.h"
@@ -63,6 +63,13 @@
return true;
}
+static int etm4_cpu_id(struct coresight_device *csdev)
+{
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ return drvdata->cpu;
+}
+
static int etm4_trace_id(struct coresight_device *csdev)
{
struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
@@ -72,7 +79,6 @@
if (!drvdata->enable)
return drvdata->trcid;
- pm_runtime_get_sync(drvdata->dev);
spin_lock_irqsave(&drvdata->spinlock, flags);
CS_UNLOCK(drvdata->base);
@@ -81,7 +87,6 @@
CS_LOCK(drvdata->base);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(drvdata->dev);
return trace_id;
}
@@ -182,12 +187,12 @@
dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu);
}
-static int etm4_enable(struct coresight_device *csdev)
+static int etm4_enable(struct coresight_device *csdev,
+ struct perf_event_attr *attr, u32 mode)
{
struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
int ret;
- pm_runtime_get_sync(drvdata->dev);
spin_lock(&drvdata->spinlock);
/*
@@ -207,7 +212,6 @@
return 0;
err:
spin_unlock(&drvdata->spinlock);
- pm_runtime_put(drvdata->dev);
return ret;
}
@@ -256,12 +260,11 @@
spin_unlock(&drvdata->spinlock);
put_online_cpus();
- pm_runtime_put(drvdata->dev);
-
dev_info(drvdata->dev, "ETM tracing disabled\n");
}
static const struct coresight_ops_source etm4_source_ops = {
+ .cpu_id = etm4_cpu_id,
.trace_id = etm4_trace_id,
.enable = etm4_enable,
.disable = etm4_disable,
@@ -2219,7 +2222,7 @@
return scnprintf(buf, PAGE_SIZE, "0x%x\n", \
readl_relaxed(drvdata->base + offset)); \
} \
-DEVICE_ATTR_RO(name)
+static DEVICE_ATTR_RO(name)
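The added static matters because DEVICE_ATTR_RO() defines a variable, not just metadata; roughly (a sketch of the expansion):

    /* DEVICE_ATTR_RO(foo) expands to approximately:
     *     struct device_attribute dev_attr_foo = __ATTR_RO(foo);
     * Without "static" each dev_attr_<name> is a global symbol that
     * can clash with identically named attributes elsewhere.
     */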
coresight_simple_func(trcoslsr, TRCOSLSR);
coresight_simple_func(trcpdcr, TRCPDCR);
@@ -2684,17 +2687,6 @@
return ret;
}
-static int etm4_remove(struct amba_device *adev)
-{
- struct etmv4_drvdata *drvdata = amba_get_drvdata(adev);
-
- coresight_unregister(drvdata->csdev);
- if (--etm4_count == 0)
- unregister_hotcpu_notifier(&etm4_cpu_notifier);
-
- return 0;
-}
-
static struct amba_id etm4_ids[] = {
{ /* ETM 4.0 - Qualcomm */
.id = 0x0003b95d,
@@ -2712,10 +2704,9 @@
static struct amba_driver etm4x_driver = {
.drv = {
.name = "coresight-etm4x",
+ .suppress_bind_attrs = true,
},
.probe = etm4_probe,
- .remove = etm4_remove,
.id_table = etm4_ids,
};
-
-module_amba_driver(etm4x_driver);
+builtin_amba_driver(etm4x_driver);
diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
index 2e36bde..0600ca3 100644
--- a/drivers/hwtracing/coresight/coresight-funnel.c
+++ b/drivers/hwtracing/coresight/coresight-funnel.c
@@ -1,5 +1,7 @@
/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
*
+ * Description: CoreSight Funnel driver
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
@@ -11,7 +13,6 @@
*/
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/device.h>
@@ -69,7 +70,6 @@
{
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- pm_runtime_get_sync(drvdata->dev);
funnel_enable_hw(drvdata, inport);
dev_info(drvdata->dev, "FUNNEL inport %d enabled\n", inport);
@@ -95,7 +95,6 @@
struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
funnel_disable_hw(drvdata, inport);
- pm_runtime_put(drvdata->dev);
dev_info(drvdata->dev, "FUNNEL inport %d disabled\n", inport);
}
@@ -226,14 +225,6 @@
return 0;
}
-static int funnel_remove(struct amba_device *adev)
-{
- struct funnel_drvdata *drvdata = amba_get_drvdata(adev);
-
- coresight_unregister(drvdata->csdev);
- return 0;
-}
-
#ifdef CONFIG_PM
static int funnel_runtime_suspend(struct device *dev)
{
@@ -273,13 +264,9 @@
.name = "coresight-funnel",
.owner = THIS_MODULE,
.pm = &funnel_dev_pm_ops,
+ .suppress_bind_attrs = true,
},
.probe = funnel_probe,
- .remove = funnel_remove,
.id_table = funnel_ids,
};
-
-module_amba_driver(funnel_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("CoreSight Funnel driver");
+builtin_amba_driver(funnel_driver);
diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
index 62fcd98..333edda 100644
--- a/drivers/hwtracing/coresight/coresight-priv.h
+++ b/drivers/hwtracing/coresight/coresight-priv.h
@@ -34,6 +34,15 @@
#define TIMEOUT_US 100
#define BMVAL(val, lsb, msb) ((val & GENMASK(msb, lsb)) >> lsb)
+#define ETM_MODE_EXCL_KERN BIT(30)
+#define ETM_MODE_EXCL_USER BIT(31)
+
+enum cs_mode {
+ CS_MODE_DISABLED,
+ CS_MODE_SYSFS,
+ CS_MODE_PERF,
+};
+
static inline void CS_LOCK(void __iomem *addr)
{
do {
@@ -52,6 +61,12 @@
} while (0);
}
+void coresight_disable_path(struct list_head *path);
+int coresight_enable_path(struct list_head *path, u32 mode);
+struct coresight_device *coresight_get_sink(struct list_head *path);
+struct list_head *coresight_build_path(struct coresight_device *csdev);
+void coresight_release_path(struct list_head *path);
+
#ifdef CONFIG_CORESIGHT_SOURCE_ETM3X
extern int etm_readl_cp14(u32 off, unsigned int *val);
extern int etm_writel_cp14(u32 off, u32 val);
diff --git a/drivers/hwtracing/coresight/coresight-replicator-qcom.c b/drivers/hwtracing/coresight/coresight-replicator-qcom.c
index 584059e..700f710 100644
--- a/drivers/hwtracing/coresight/coresight-replicator-qcom.c
+++ b/drivers/hwtracing/coresight/coresight-replicator-qcom.c
@@ -15,7 +15,6 @@
#include <linux/clk.h>
#include <linux/coresight.h>
#include <linux/device.h>
-#include <linux/module.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
@@ -48,8 +47,6 @@
{
struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent);
- pm_runtime_get_sync(drvdata->dev);
-
CS_UNLOCK(drvdata->base);
/*
@@ -86,8 +83,6 @@
CS_LOCK(drvdata->base);
- pm_runtime_put(drvdata->dev);
-
dev_info(drvdata->dev, "REPLICATOR disabled\n");
}
@@ -156,15 +151,6 @@
return 0;
}
-static int replicator_remove(struct amba_device *adev)
-{
- struct replicator_state *drvdata = amba_get_drvdata(adev);
-
- pm_runtime_disable(&adev->dev);
- coresight_unregister(drvdata->csdev);
- return 0;
-}
-
#ifdef CONFIG_PM
static int replicator_runtime_suspend(struct device *dev)
{
@@ -206,10 +192,9 @@
.drv = {
.name = "coresight-replicator-qcom",
.pm = &replicator_dev_pm_ops,
+ .suppress_bind_attrs = true,
},
.probe = replicator_probe,
- .remove = replicator_remove,
.id_table = replicator_ids,
};
-
-module_amba_driver(replicator_driver);
+builtin_amba_driver(replicator_driver);
diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
index 963ac19..4299c05 100644
--- a/drivers/hwtracing/coresight/coresight-replicator.c
+++ b/drivers/hwtracing/coresight/coresight-replicator.c
@@ -1,5 +1,7 @@
/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
*
+ * Description: CoreSight Replicator driver
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
@@ -11,7 +13,6 @@
*/
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/io.h>
@@ -41,7 +42,6 @@
{
struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- pm_runtime_get_sync(drvdata->dev);
dev_info(drvdata->dev, "REPLICATOR enabled\n");
return 0;
}
@@ -51,7 +51,6 @@
{
struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- pm_runtime_put(drvdata->dev);
dev_info(drvdata->dev, "REPLICATOR disabled\n");
}
@@ -127,20 +126,6 @@
return ret;
}
-static int replicator_remove(struct platform_device *pdev)
-{
- struct replicator_drvdata *drvdata = platform_get_drvdata(pdev);
-
- coresight_unregister(drvdata->csdev);
- pm_runtime_get_sync(&pdev->dev);
- if (!IS_ERR(drvdata->atclk))
- clk_disable_unprepare(drvdata->atclk);
- pm_runtime_put_noidle(&pdev->dev);
- pm_runtime_disable(&pdev->dev);
-
- return 0;
-}
-
#ifdef CONFIG_PM
static int replicator_runtime_suspend(struct device *dev)
{
@@ -175,15 +160,11 @@
static struct platform_driver replicator_driver = {
.probe = replicator_probe,
- .remove = replicator_remove,
.driver = {
.name = "coresight-replicator",
.of_match_table = replicator_match,
.pm = &replicator_dev_pm_ops,
+ .suppress_bind_attrs = true,
},
};
-
builtin_platform_driver(replicator_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("CoreSight Replicator driver");
diff --git a/drivers/hwtracing/coresight/coresight-tmc.c b/drivers/hwtracing/coresight/coresight-tmc.c
index a57c7ec..1be191f 100644
--- a/drivers/hwtracing/coresight/coresight-tmc.c
+++ b/drivers/hwtracing/coresight/coresight-tmc.c
@@ -1,5 +1,7 @@
/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
*
+ * Description: CoreSight Trace Memory Controller driver
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
@@ -11,7 +13,6 @@
*/
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/device.h>
@@ -124,7 +125,7 @@
bool reading;
char *buf;
dma_addr_t paddr;
- void __iomem *vaddr;
+ void *vaddr;
u32 size;
bool enable;
enum tmc_config_type config_type;
@@ -242,12 +243,9 @@
{
unsigned long flags;
- pm_runtime_get_sync(drvdata->dev);
-
spin_lock_irqsave(&drvdata->spinlock, flags);
if (drvdata->reading) {
spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(drvdata->dev);
return -EBUSY;
}
@@ -268,7 +266,7 @@
return 0;
}
-static int tmc_enable_sink(struct coresight_device *csdev)
+static int tmc_enable_sink(struct coresight_device *csdev, u32 mode)
{
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
@@ -381,8 +379,6 @@
drvdata->enable = false;
spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(drvdata->dev);
-
dev_info(drvdata->dev, "TMC disabled\n");
}
@@ -766,23 +762,10 @@
err_devm_kzalloc:
if (drvdata->config_type == TMC_CONFIG_TYPE_ETR)
dma_free_coherent(dev, drvdata->size,
- &drvdata->paddr, GFP_KERNEL);
+ drvdata->vaddr, drvdata->paddr);
return ret;
}
-static int tmc_remove(struct amba_device *adev)
-{
- struct tmc_drvdata *drvdata = amba_get_drvdata(adev);
-
- misc_deregister(&drvdata->miscdev);
- coresight_unregister(drvdata->csdev);
- if (drvdata->config_type == TMC_CONFIG_TYPE_ETR)
- dma_free_coherent(drvdata->dev, drvdata->size,
- &drvdata->paddr, GFP_KERNEL);
-
- return 0;
-}
-
static struct amba_id tmc_ids[] = {
{
.id = 0x0003b961,
@@ -795,13 +778,9 @@
.drv = {
.name = "coresight-tmc",
.owner = THIS_MODULE,
+ .suppress_bind_attrs = true,
},
.probe = tmc_probe,
- .remove = tmc_remove,
.id_table = tmc_ids,
};
-
-module_amba_driver(tmc_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("CoreSight Trace Memory Controller driver");
+builtin_amba_driver(tmc_driver);
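[The error-path change above also fixes the arguments passed to dma_free_coherent(): the old code handed it &drvdata->paddr and GFP_KERNEL in the slots where the API expects the CPU virtual address and the dma_addr_t handle. A minimal sketch of the correctly paired allocation and free, assuming 'dev' and 'size' come from the caller:]

#include <linux/dma-mapping.h>

/*
 * Illustrative pairing only: dma_free_coherent() takes the CPU address
 * and the dma_addr_t handle produced by the allocation -- there is no
 * gfp argument on the free side.
 */
static void *trace_buf_alloc(struct device *dev, size_t size,
			     dma_addr_t *paddr)
{
	return dma_alloc_coherent(dev, size, paddr, GFP_KERNEL);
}

static void trace_buf_free(struct device *dev, size_t size, void *vaddr,
			   dma_addr_t paddr)
{
	dma_free_coherent(dev, size, vaddr, paddr);
}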
diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
index 7214efd..8fb09d9 100644
--- a/drivers/hwtracing/coresight/coresight-tpiu.c
+++ b/drivers/hwtracing/coresight/coresight-tpiu.c
@@ -1,5 +1,7 @@
/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
*
+ * Description: CoreSight Trace Port Interface Unit driver
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
@@ -11,7 +13,6 @@
*/
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/init.h>
#include <linux/device.h>
#include <linux/io.h>
@@ -70,11 +71,10 @@
CS_LOCK(drvdata->base);
}
-static int tpiu_enable(struct coresight_device *csdev)
+static int tpiu_enable(struct coresight_device *csdev, u32 mode)
{
struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- pm_runtime_get_sync(csdev->dev.parent);
tpiu_enable_hw(drvdata);
dev_info(drvdata->dev, "TPIU enabled\n");
@@ -98,7 +98,6 @@
struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
tpiu_disable_hw(drvdata);
- pm_runtime_put(csdev->dev.parent);
dev_info(drvdata->dev, "TPIU disabled\n");
}
@@ -172,14 +171,6 @@
return 0;
}
-static int tpiu_remove(struct amba_device *adev)
-{
- struct tpiu_drvdata *drvdata = amba_get_drvdata(adev);
-
- coresight_unregister(drvdata->csdev);
- return 0;
-}
-
#ifdef CONFIG_PM
static int tpiu_runtime_suspend(struct device *dev)
{
@@ -223,13 +214,9 @@
.name = "coresight-tpiu",
.owner = THIS_MODULE,
.pm = &tpiu_dev_pm_ops,
+ .suppress_bind_attrs = true,
},
.probe = tpiu_probe,
- .remove = tpiu_remove,
.id_table = tpiu_ids,
};
-
-module_amba_driver(tpiu_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("CoreSight Trace Port Interface Unit driver");
+builtin_amba_driver(tpiu_driver);
diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
index 93738df..2ea5961 100644
--- a/drivers/hwtracing/coresight/coresight.c
+++ b/drivers/hwtracing/coresight/coresight.c
@@ -11,7 +11,6 @@
*/
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/device.h>
@@ -24,11 +23,28 @@
#include <linux/coresight.h>
#include <linux/of_platform.h>
#include <linux/delay.h>
+#include <linux/pm_runtime.h>
#include "coresight-priv.h"
static DEFINE_MUTEX(coresight_mutex);
+/**
+ * struct coresight_node - elements of a path, from source to sink
+ * @csdev: Address of an element.
+ * @link: hook to the list.
+ */
+struct coresight_node {
+ struct coresight_device *csdev;
+ struct list_head link;
+};
+
+/*
+ * When operating Coresight drivers from the sysFS interface, only a single
+ * path can exist from a tracer (associated with a CPU) to a sink.
+ */
+static DEFINE_PER_CPU(struct list_head *, sysfs_path);
+
static int coresight_id_match(struct device *dev, void *data)
{
int trace_id, i_trace_id;
@@ -68,15 +84,12 @@
csdev, coresight_id_match);
}
-static int coresight_find_link_inport(struct coresight_device *csdev)
+static int coresight_find_link_inport(struct coresight_device *csdev,
+ struct coresight_device *parent)
{
int i;
- struct coresight_device *parent;
struct coresight_connection *conn;
- parent = container_of(csdev->path_link.next,
- struct coresight_device, path_link);
-
for (i = 0; i < parent->nr_outport; i++) {
conn = &parent->conns[i];
if (conn->child_dev == csdev)
@@ -89,15 +102,12 @@
return 0;
}
-static int coresight_find_link_outport(struct coresight_device *csdev)
+static int coresight_find_link_outport(struct coresight_device *csdev,
+ struct coresight_device *child)
{
int i;
- struct coresight_device *child;
struct coresight_connection *conn;
- child = container_of(csdev->path_link.prev,
- struct coresight_device, path_link);
-
for (i = 0; i < csdev->nr_outport; i++) {
conn = &csdev->conns[i];
if (conn->child_dev == child)
@@ -110,13 +120,13 @@
return 0;
}
-static int coresight_enable_sink(struct coresight_device *csdev)
+static int coresight_enable_sink(struct coresight_device *csdev, u32 mode)
{
int ret;
if (!csdev->enable) {
if (sink_ops(csdev)->enable) {
- ret = sink_ops(csdev)->enable(csdev);
+ ret = sink_ops(csdev)->enable(csdev, mode);
if (ret)
return ret;
}
@@ -138,14 +148,19 @@
}
}
-static int coresight_enable_link(struct coresight_device *csdev)
+static int coresight_enable_link(struct coresight_device *csdev,
+ struct coresight_device *parent,
+ struct coresight_device *child)
{
int ret;
int link_subtype;
int refport, inport, outport;
- inport = coresight_find_link_inport(csdev);
- outport = coresight_find_link_outport(csdev);
+ if (!parent || !child)
+ return -EINVAL;
+
+ inport = coresight_find_link_inport(csdev, parent);
+ outport = coresight_find_link_outport(csdev, child);
link_subtype = csdev->subtype.link_subtype;
if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG)
@@ -168,14 +183,19 @@
return 0;
}
-static void coresight_disable_link(struct coresight_device *csdev)
+static void coresight_disable_link(struct coresight_device *csdev,
+ struct coresight_device *parent,
+ struct coresight_device *child)
{
int i, nr_conns;
int link_subtype;
int refport, inport, outport;
- inport = coresight_find_link_inport(csdev);
- outport = coresight_find_link_outport(csdev);
+ if (!parent || !child)
+ return;
+
+ inport = coresight_find_link_inport(csdev, parent);
+ outport = coresight_find_link_outport(csdev, child);
link_subtype = csdev->subtype.link_subtype;
if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG) {
@@ -201,7 +221,7 @@
csdev->enable = false;
}
-static int coresight_enable_source(struct coresight_device *csdev)
+static int coresight_enable_source(struct coresight_device *csdev, u32 mode)
{
int ret;
@@ -213,7 +233,7 @@
if (!csdev->enable) {
if (source_ops(csdev)->enable) {
- ret = source_ops(csdev)->enable(csdev);
+ ret = source_ops(csdev)->enable(csdev, NULL, mode);
if (ret)
return ret;
}
@@ -235,109 +255,188 @@
}
}
-static int coresight_enable_path(struct list_head *path)
+void coresight_disable_path(struct list_head *path)
{
+ struct coresight_node *nd;
+ struct coresight_device *csdev, *parent, *child;
+
+ list_for_each_entry(nd, path, link) {
+ csdev = nd->csdev;
+
+ switch (csdev->type) {
+ case CORESIGHT_DEV_TYPE_SINK:
+ case CORESIGHT_DEV_TYPE_LINKSINK:
+ coresight_disable_sink(csdev);
+ break;
+ case CORESIGHT_DEV_TYPE_SOURCE:
+ /* sources are disabled from either sysFS or Perf */
+ break;
+ case CORESIGHT_DEV_TYPE_LINK:
+ parent = list_prev_entry(nd, link)->csdev;
+ child = list_next_entry(nd, link)->csdev;
+ coresight_disable_link(csdev, parent, child);
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+int coresight_enable_path(struct list_head *path, u32 mode)
+{
+
int ret = 0;
- struct coresight_device *cd;
+ struct coresight_node *nd;
+ struct coresight_device *csdev, *parent, *child;
- /*
- * At this point we have a full @path, from source to sink. The
- * sink is the first entry and the source the last one. Go through
- * all the components and enable them one by one.
- */
- list_for_each_entry(cd, path, path_link) {
- if (cd == list_first_entry(path, struct coresight_device,
- path_link)) {
- ret = coresight_enable_sink(cd);
- } else if (list_is_last(&cd->path_link, path)) {
- /*
- * Don't enable the source just yet - this needs to
- * happen at the very end when all links and sink
- * along the path have been configured properly.
- */
- ;
- } else {
- ret = coresight_enable_link(cd);
- }
- if (ret)
+ list_for_each_entry_reverse(nd, path, link) {
+ csdev = nd->csdev;
+
+ switch (csdev->type) {
+ case CORESIGHT_DEV_TYPE_SINK:
+ case CORESIGHT_DEV_TYPE_LINKSINK:
+ ret = coresight_enable_sink(csdev, mode);
+ if (ret)
+ goto err;
+ break;
+ case CORESIGHT_DEV_TYPE_SOURCE:
+ /* sources are enabled from either sysFS or Perf */
+ break;
+ case CORESIGHT_DEV_TYPE_LINK:
+ parent = list_prev_entry(nd, link)->csdev;
+ child = list_next_entry(nd, link)->csdev;
+ ret = coresight_enable_link(csdev, parent, child);
+ if (ret)
+ goto err;
+ break;
+ default:
goto err;
- }
-
- return 0;
-err:
- list_for_each_entry_continue_reverse(cd, path, path_link) {
- if (cd == list_first_entry(path, struct coresight_device,
- path_link)) {
- coresight_disable_sink(cd);
- } else if (list_is_last(&cd->path_link, path)) {
- ;
- } else {
- coresight_disable_link(cd);
}
}
+out:
return ret;
+err:
+ coresight_disable_path(path);
+ goto out;
}
-static int coresight_disable_path(struct list_head *path)
+struct coresight_device *coresight_get_sink(struct list_head *path)
{
- struct coresight_device *cd;
+ struct coresight_device *csdev;
- list_for_each_entry_reverse(cd, path, path_link) {
- if (cd == list_first_entry(path, struct coresight_device,
- path_link)) {
- coresight_disable_sink(cd);
- } else if (list_is_last(&cd->path_link, path)) {
- /*
- * The source has already been stopped, no need
- * to do it again here.
- */
- ;
- } else {
- coresight_disable_link(cd);
- }
- }
+ if (!path)
+ return NULL;
- return 0;
+ csdev = list_last_entry(path, struct coresight_node, link)->csdev;
+ if (csdev->type != CORESIGHT_DEV_TYPE_SINK &&
+ csdev->type != CORESIGHT_DEV_TYPE_LINKSINK)
+ return NULL;
+
+ return csdev;
}
-static int coresight_build_paths(struct coresight_device *csdev,
- struct list_head *path,
- bool enable)
+/**
+ * _coresight_build_path - recursively build a path from a @csdev to a sink.
+ * @csdev: The device to start from.
+ * @path: The list to add devices to.
+ *
+ * The tree of Coresight devices is traversed until an activated sink is
+ * found. From there the sink is added to the list along with all the
+ * devices that led to that point - the end result is a list from source
+ * to sink. In that list the source is the first device and the sink the
+ * last one.
+ */
+static int _coresight_build_path(struct coresight_device *csdev,
+ struct list_head *path)
{
- int i, ret = -EINVAL;
+ int i;
+ bool found = false;
+ struct coresight_node *node;
struct coresight_connection *conn;
- list_add(&csdev->path_link, path);
-
+ /* An activated sink has been found. Enqueue the element */
if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
- csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
- csdev->activated) {
- if (enable)
- ret = coresight_enable_path(path);
- else
- ret = coresight_disable_path(path);
- } else {
- for (i = 0; i < csdev->nr_outport; i++) {
- conn = &csdev->conns[i];
- if (coresight_build_paths(conn->child_dev,
- path, enable) == 0)
- ret = 0;
+ csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) && csdev->activated)
+ goto out;
+
+ /* Not a sink - recursively explore each port found on this element */
+ for (i = 0; i < csdev->nr_outport; i++) {
+ conn = &csdev->conns[i];
+ if (_coresight_build_path(conn->child_dev, path) == 0) {
+ found = true;
+ break;
}
}
- if (list_first_entry(path, struct coresight_device, path_link) != csdev)
- dev_err(&csdev->dev, "wrong device in %s\n", __func__);
+ if (!found)
+ return -ENODEV;
- list_del(&csdev->path_link);
+out:
+ /*
+ * A path from this element to a sink has been found. The elements
+ * leading to the sink are already enqueued; all that is left to do
+ * is tell the PM runtime core we need this element and add a node
+ * for it.
+ */
+ node = kzalloc(sizeof(struct coresight_node), GFP_KERNEL);
+ if (!node)
+ return -ENOMEM;
- return ret;
+ node->csdev = csdev;
+ list_add(&node->link, path);
+ pm_runtime_get_sync(csdev->dev.parent);
+
+ return 0;
+}
+
+struct list_head *coresight_build_path(struct coresight_device *csdev)
+{
+ struct list_head *path;
+
+ path = kzalloc(sizeof(struct list_head), GFP_KERNEL);
+ if (!path)
+ return NULL;
+
+ INIT_LIST_HEAD(path);
+
+ if (_coresight_build_path(csdev, path)) {
+ kfree(path);
+ path = NULL;
+ }
+
+ return path;
+}
+
+/**
+ * coresight_release_path - release a previously built path.
+ * @path: the path to release.
+ *
+ * Go through all the elements of a path and 1) remove them from the list and
+ * 2) free the memory allocated for each node.
+ */
+void coresight_release_path(struct list_head *path)
+{
+ struct coresight_device *csdev;
+ struct coresight_node *nd, *next;
+
+ list_for_each_entry_safe(nd, next, path, link) {
+ csdev = nd->csdev;
+
+ pm_runtime_put_sync(csdev->dev.parent);
+ list_del(&nd->link);
+ kfree(nd);
+ }
+
+ kfree(path);
+ path = NULL;
}
int coresight_enable(struct coresight_device *csdev)
{
int ret = 0;
- LIST_HEAD(path);
+ int cpu;
+ struct list_head *path;
mutex_lock(&coresight_mutex);
if (csdev->type != CORESIGHT_DEV_TYPE_SOURCE) {
@@ -348,22 +447,47 @@
if (csdev->enable)
goto out;
- if (coresight_build_paths(csdev, &path, true)) {
- dev_err(&csdev->dev, "building path(s) failed\n");
+ path = coresight_build_path(csdev);
+ if (!path) {
+ pr_err("building path(s) failed\n");
goto out;
}
- if (coresight_enable_source(csdev))
- dev_err(&csdev->dev, "source enable failed\n");
+ ret = coresight_enable_path(path, CS_MODE_SYSFS);
+ if (ret)
+ goto err_path;
+
+ ret = coresight_enable_source(csdev, CS_MODE_SYSFS);
+ if (ret)
+ goto err_source;
+
+ /*
+ * When working from sysFS it is important to keep track
+ * of the paths that were created so that they can be
+ * undone in 'coresight_disable()'. Since there can only
+ * be a single session per tracer (when working from sysFS)
+ * a per-cpu variable will do just fine.
+ */
+ cpu = source_ops(csdev)->cpu_id(csdev);
+ per_cpu(sysfs_path, cpu) = path;
+
out:
mutex_unlock(&coresight_mutex);
return ret;
+
+err_source:
+ coresight_disable_path(path);
+
+err_path:
+ coresight_release_path(path);
+ goto out;
}
EXPORT_SYMBOL_GPL(coresight_enable);
void coresight_disable(struct coresight_device *csdev)
{
- LIST_HEAD(path);
+ int cpu;
+ struct list_head *path;
mutex_lock(&coresight_mutex);
if (csdev->type != CORESIGHT_DEV_TYPE_SOURCE) {
@@ -373,9 +497,12 @@
if (!csdev->enable)
goto out;
+ cpu = source_ops(csdev)->cpu_id(csdev);
+ path = per_cpu(sysfs_path, cpu);
coresight_disable_source(csdev);
- if (coresight_build_paths(csdev, &path, false))
- dev_err(&csdev->dev, "releasing path(s) failed\n");
+ coresight_disable_path(path);
+ coresight_release_path(path);
+ per_cpu(sysfs_path, cpu) = NULL;
out:
mutex_unlock(&coresight_mutex);
@@ -481,6 +608,8 @@
{
struct coresight_device *csdev = to_coresight_device(dev);
+ kfree(csdev->conns);
+ kfree(csdev->refcnt);
kfree(csdev);
}
@@ -536,7 +665,7 @@
* are hooked-up with each newly added component.
*/
bus_for_each_dev(&coresight_bustype, NULL,
- csdev, coresight_orphan_match);
+ csdev, coresight_orphan_match);
}
@@ -568,6 +697,8 @@
if (dev) {
conn->child_dev = to_coresight_device(dev);
+ /* and put the reference taken by 'bus_find_device()' */
+ put_device(dev);
} else {
csdev->orphan = true;
conn->child_dev = NULL;
@@ -575,6 +706,50 @@
}
}
+static int coresight_remove_match(struct device *dev, void *data)
+{
+ int i;
+ struct coresight_device *csdev, *iterator;
+ struct coresight_connection *conn;
+
+ csdev = data;
+ iterator = to_coresight_device(dev);
+
+ /* No need to check oneself */
+ if (csdev == iterator)
+ return 0;
+
+ /*
+ * Cycle through all the connections of that component. If we find
+ * a connection whose name matches @csdev, remove it.
+ */
+ for (i = 0; i < iterator->nr_outport; i++) {
+ conn = &iterator->conns[i];
+
+ if (conn->child_dev == NULL)
+ continue;
+
+ if (!strcmp(dev_name(&csdev->dev), conn->child_name)) {
+ iterator->orphan = true;
+ conn->child_dev = NULL;
+ /* No need to continue */
+ break;
+ }
+ }
+
+ /*
+ * Returning '0' ensures that all known components on the
+ * bus will be checked.
+ */
+ return 0;
+}
+
+static void coresight_remove_conns(struct coresight_device *csdev)
+{
+ bus_for_each_dev(&coresight_bustype, NULL,
+ csdev, coresight_remove_match);
+}
+
/**
* coresight_timeout - loop until a bit has changed to a specific state.
* @addr: base address of the area of interest.
@@ -713,13 +888,8 @@
void coresight_unregister(struct coresight_device *csdev)
{
- mutex_lock(&coresight_mutex);
-
- kfree(csdev->conns);
+ /* Remove references of that device in the topology */
+ coresight_remove_conns(csdev);
device_unregister(&csdev->dev);
-
- mutex_unlock(&coresight_mutex);
}
EXPORT_SYMBOL_GPL(coresight_unregister);
-
-MODULE_LICENSE("GPL v2");
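[To summarise the coresight.c rework above: paths are no longer threaded through the devices themselves via path_link; instead _coresight_build_path() recurses towards an activated sink and, while unwinding, prepends a freshly allocated coresight_node for each element, so the finished list runs source first, sink last. A minimal userspace sketch of that head-insertion property, with 'struct node' standing in for coresight_node and a plain next pointer for the list_head:]

#include <stdio.h>
#include <stdlib.h>

/* 'struct node' stands in for coresight_node; 'next' for the list hook. */
struct node {
	const char *name;
	struct node *next;
};

/* Head insertion, like list_add(): the last node added becomes first. */
static struct node *prepend(struct node *path, const char *name)
{
	struct node *nd = malloc(sizeof(*nd));

	if (!nd)
		exit(1);
	nd->name = name;
	nd->next = path;
	return nd;
}

int main(void)
{
	struct node *path = NULL, *nd;

	/*
	 * The recursion bottoms out at the sink, so the sink's node is
	 * prepended first and the source's last -- iterating the result
	 * therefore runs source -> link -> sink.
	 */
	path = prepend(path, "tmc-etr (sink)");
	path = prepend(path, "funnel (link)");
	path = prepend(path, "etm (source)");

	for (nd = path; nd; nd = nd->next)
		printf("%s\n", nd->name);
	return 0;
}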
diff --git a/drivers/hwtracing/coresight/of_coresight.c b/drivers/hwtracing/coresight/of_coresight.c
index b097361..b68da18 100644
--- a/drivers/hwtracing/coresight/of_coresight.c
+++ b/drivers/hwtracing/coresight/of_coresight.c
@@ -10,7 +10,6 @@
* GNU General Public License for more details.
*/
-#include <linux/module.h>
#include <linux/types.h>
#include <linux/err.h>
#include <linux/slab.h>
@@ -86,7 +85,7 @@
return -ENOMEM;
/* Children connected to this component via @outports */
- pdata->child_names = devm_kzalloc(dev, pdata->nr_outport *
+ pdata->child_names = devm_kzalloc(dev, pdata->nr_outport *
sizeof(*pdata->child_names),
GFP_KERNEL);
if (!pdata->child_names)
diff --git a/drivers/hwtracing/intel_th/Kconfig b/drivers/hwtracing/intel_th/Kconfig
index b7a9073..1b412f8 100644
--- a/drivers/hwtracing/intel_th/Kconfig
+++ b/drivers/hwtracing/intel_th/Kconfig
@@ -1,5 +1,6 @@
config INTEL_TH
tristate "Intel(R) Trace Hub controller"
+ depends on HAS_DMA && HAS_IOMEM
help
Intel(R) Trace Hub (TH) is a set of hardware blocks (subdevices) that
produce, switch and output trace data from multiple hardware and
diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
index 165d300..4272f2c 100644
--- a/drivers/hwtracing/intel_th/core.c
+++ b/drivers/hwtracing/intel_th/core.c
@@ -124,17 +124,34 @@
.release = intel_th_device_release,
};
+static struct intel_th *to_intel_th(struct intel_th_device *thdev)
+{
+ /*
+ * subdevice tree is flat: if this one is not a switch, its
+ * parent must be
+ */
+ if (thdev->type != INTEL_TH_SWITCH)
+ thdev = to_intel_th_hub(thdev);
+
+ if (WARN_ON_ONCE(!thdev || thdev->type != INTEL_TH_SWITCH))
+ return NULL;
+
+ return dev_get_drvdata(thdev->dev.parent);
+}
+
static char *intel_th_output_devnode(struct device *dev, umode_t *mode,
kuid_t *uid, kgid_t *gid)
{
struct intel_th_device *thdev = to_intel_th_device(dev);
+ struct intel_th *th = to_intel_th(thdev);
char *node;
if (thdev->id >= 0)
- node = kasprintf(GFP_KERNEL, "intel_th%d/%s%d", 0, thdev->name,
- thdev->id);
+ node = kasprintf(GFP_KERNEL, "intel_th%d/%s%d", th->id,
+ thdev->name, thdev->id);
else
- node = kasprintf(GFP_KERNEL, "intel_th%d/%s", 0, thdev->name);
+ node = kasprintf(GFP_KERNEL, "intel_th%d/%s", th->id,
+ thdev->name);
return node;
}
@@ -319,6 +336,7 @@
unsigned nres;
unsigned type;
unsigned otype;
+ unsigned scrpd;
int id;
} intel_th_subdevices[TH_SUBDEVICE_MAX] = {
{
@@ -352,6 +370,7 @@
.id = 0,
.type = INTEL_TH_OUTPUT,
.otype = GTH_MSU,
+ .scrpd = SCRPD_MEM_IS_PRIM_DEST | SCRPD_MSC0_IS_ENABLED,
},
{
.nres = 2,
@@ -371,6 +390,7 @@
.id = 1,
.type = INTEL_TH_OUTPUT,
.otype = GTH_MSU,
+ .scrpd = SCRPD_MEM_IS_PRIM_DEST | SCRPD_MSC1_IS_ENABLED,
},
{
.nres = 2,
@@ -403,6 +423,7 @@
.name = "pti",
.type = INTEL_TH_OUTPUT,
.otype = GTH_PTI,
+ .scrpd = SCRPD_PTI_IS_PRIM_DEST,
},
{
.nres = 1,
@@ -477,6 +498,7 @@
thdev->dev.devt = MKDEV(th->major, i);
thdev->output.type = subdev->otype;
thdev->output.port = -1;
+ thdev->output.scratchpad = subdev->scrpd;
}
err = device_add(&thdev->dev);
@@ -579,6 +601,8 @@
}
th->dev = dev;
+ dev_set_drvdata(dev, th);
+
err = intel_th_populate(th, devres, ndevres, irq);
if (err)
goto err_chrdev;
diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
index 2dc5378..9beea0b 100644
--- a/drivers/hwtracing/intel_th/gth.c
+++ b/drivers/hwtracing/intel_th/gth.c
@@ -146,24 +146,6 @@
iowrite32(val, gth->base + reg);
}
-/*static int gth_master_get(struct gth_device *gth, unsigned int master)
-{
- unsigned int reg = REG_GTH_SWDEST0 + ((master >> 1) & ~3u);
- unsigned int shift = (master & 0x7) * 4;
- u32 val;
-
- if (master >= 256) {
- reg = REG_GTH_GSWTDEST;
- shift = 0;
- }
-
- val = ioread32(gth->base + reg);
- val &= (0xf << shift);
- val >>= shift;
-
- return val ? val & 0x7 : -1;
- }*/
-
static ssize_t master_attr_show(struct device *dev,
struct device_attribute *attr,
char *buf)
@@ -304,6 +286,10 @@
if (scratchpad & SCRPD_DEBUGGER_IN_USE)
return -EBUSY;
+ /* Always save/restore STH and TU registers in S0ix entry/exit */
+ scratchpad |= SCRPD_STH_IS_ENABLED | SCRPD_TRIGGER_IS_ENABLED;
+ iowrite32(scratchpad, gth->base + REG_GTH_SCRPD0);
+
/* output ports */
for (port = 0; port < 8; port++) {
if (gth_output_parm_get(gth, port, TH_OUTPUT_PARM(port)) ==
@@ -506,6 +492,10 @@
if (!count)
dev_dbg(&thdev->dev, "timeout waiting for GTH[%d] PLE\n",
output->port);
+
+ reg = ioread32(gth->base + REG_GTH_SCRPD0);
+ reg &= ~output->scratchpad;
+ iowrite32(reg, gth->base + REG_GTH_SCRPD0);
}
/**
@@ -520,7 +510,7 @@
struct intel_th_output *output)
{
struct gth_device *gth = dev_get_drvdata(&thdev->dev);
- u32 scr = 0xfc0000;
+ u32 scr = 0xfc0000, scrpd;
int master;
spin_lock(&gth->gth_lock);
@@ -535,6 +525,10 @@
output->active = true;
spin_unlock(&gth->gth_lock);
+ scrpd = ioread32(gth->base + REG_GTH_SCRPD0);
+ scrpd |= output->scratchpad;
+ iowrite32(scrpd, gth->base + REG_GTH_SCRPD0);
+
iowrite32(scr, gth->base + REG_GTH_SCR);
iowrite32(0, gth->base + REG_GTH_SCR2);
}
diff --git a/drivers/hwtracing/intel_th/gth.h b/drivers/hwtracing/intel_th/gth.h
index 3b714b7..56f0d26 100644
--- a/drivers/hwtracing/intel_th/gth.h
+++ b/drivers/hwtracing/intel_th/gth.h
@@ -57,9 +57,6 @@
REG_GTH_SCRPD3 = 0xec, /* ScratchPad[3] */
};
-/* Externall debugger is using Intel TH */
-#define SCRPD_DEBUGGER_IN_USE BIT(24)
-
/* waiting for Pipeline Empty bit(s) to assert for GTH */
#define GTH_PLE_WAITLOOP_DEPTH 10000
diff --git a/drivers/hwtracing/intel_th/intel_th.h b/drivers/hwtracing/intel_th/intel_th.h
index 57fd72b..eedd093 100644
--- a/drivers/hwtracing/intel_th/intel_th.h
+++ b/drivers/hwtracing/intel_th/intel_th.h
@@ -30,6 +30,7 @@
* struct intel_th_output - descriptor INTEL_TH_OUTPUT type devices
* @port: output port number, assigned by the switch
* @type: GTH_{MSU,CTP,PTI}
+ * @scratchpad: scratchpad bits to flag when this output is enabled
* @multiblock: true for multiblock output configuration
* @active: true when this output is enabled
*
@@ -41,6 +42,7 @@
struct intel_th_output {
int port;
unsigned int type;
+ unsigned int scratchpad;
bool multiblock;
bool active;
};
@@ -241,4 +243,43 @@
GTH_PTI = 4, /* MIPI-PTI */
};
+/*
+ * Scratchpad bits: tell firmware and external debuggers
+ * what we are up to.
+ */
+enum {
+ /* Memory is the primary destination */
+ SCRPD_MEM_IS_PRIM_DEST = BIT(0),
+ /* XHCI DbC is the primary destination */
+ SCRPD_DBC_IS_PRIM_DEST = BIT(1),
+ /* PTI is the primary destination */
+ SCRPD_PTI_IS_PRIM_DEST = BIT(2),
+ /* BSSB is the primary destination */
+ SCRPD_BSSB_IS_PRIM_DEST = BIT(3),
+ /* PTI is the alternate destination */
+ SCRPD_PTI_IS_ALT_DEST = BIT(4),
+ /* BSSB is the alternate destination */
+ SCRPD_BSSB_IS_ALT_DEST = BIT(5),
+ /* DeepSx exit occurred */
+ SCRPD_DEEPSX_EXIT = BIT(6),
+ /* S4 exit occurred */
+ SCRPD_S4_EXIT = BIT(7),
+ /* S5 exit occurred */
+ SCRPD_S5_EXIT = BIT(8),
+ /* MSU controller 0/1 is enabled */
+ SCRPD_MSC0_IS_ENABLED = BIT(9),
+ SCRPD_MSC1_IS_ENABLED = BIT(10),
+ /* Sx exit occurred */
+ SCRPD_SX_EXIT = BIT(11),
+ /* Trigger Unit is enabled */
+ SCRPD_TRIGGER_IS_ENABLED = BIT(12),
+ SCRPD_ODLA_IS_ENABLED = BIT(13),
+ SCRPD_SOCHAP_IS_ENABLED = BIT(14),
+ SCRPD_STH_IS_ENABLED = BIT(15),
+ SCRPD_DCIH_IS_ENABLED = BIT(16),
+ SCRPD_VER_IS_ENABLED = BIT(17),
+ /* External debugger is using Intel TH */
+ SCRPD_DEBUGGER_IN_USE = BIT(24),
+};
+
#endif
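[The SCRPD_* bits above are advertised to firmware and external debuggers through GTH scratchpad register 0; gth_enable()/gth_disable() fold each output's scratchpad mask in and out with a read-modify-write. A minimal sketch of that pattern, assuming REG_GTH_SCRPD0 (0xe0, per gth.h) and an already ioremapped 'base':]

#include <linux/io.h>
#include <linux/types.h>

#define REG_GTH_SCRPD0	0xe0	/* ScratchPad[0], as in gth.h */

/* Illustrative read-modify-write helpers for the scratchpad bits. */
static void scrpd_set(void __iomem *base, u32 bits)
{
	u32 scrpd = ioread32(base + REG_GTH_SCRPD0);

	scrpd |= bits;		/* e.g. output->scratchpad on enable */
	iowrite32(scrpd, base + REG_GTH_SCRPD0);
}

static void scrpd_clear(void __iomem *base, u32 bits)
{
	u32 scrpd = ioread32(base + REG_GTH_SCRPD0);

	scrpd &= ~bits;		/* dropped again on disable */
	iowrite32(scrpd, base + REG_GTH_SCRPD0);
}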
diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
index 70ca27e4..d9d6022 100644
--- a/drivers/hwtracing/intel_th/msu.c
+++ b/drivers/hwtracing/intel_th/msu.c
@@ -408,7 +408,7 @@
* Second time (wrap_count==1), it's just like any other block,
* containing data in the range of [MSC_BDESC..data_bytes].
*/
- if (iter->block == iter->start_block && iter->wrap_count) {
+ if (iter->block == iter->start_block && iter->wrap_count == 2) {
tocopy = DATA_IN_PAGE - data_bytes;
src += data_bytes;
}
@@ -1112,12 +1112,11 @@
size = msc->nr_pages << PAGE_SHIFT;
if (!size)
- return 0;
-
- if (off >= size) {
- len = 0;
goto put_count;
- }
+
+ if (off >= size)
+ goto put_count;
+
if (off + len >= size)
len = size - off;
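[For reference, the reworked bounds checks above amount to the standard read-window clamp: a zero-sized buffer or an out-of-range offset yields zero bytes, and a request crossing the end is trimmed. A minimal sketch of that idiom (note the driver uses '>=' where the generic idiom uses '>', trimming one byte earlier):]

#include <stddef.h>

/* Illustrative read-window clamp against a buffer of 'size' bytes. */
static size_t clamp_read(size_t size, size_t off, size_t len)
{
	if (!size || off >= size)
		return 0;		/* nothing to read */
	if (off + len > size)
		len = size - off;	/* trim a request crossing the end */
	return len;
}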
diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
index 641e879..bca7a2a 100644
--- a/drivers/hwtracing/intel_th/pci.c
+++ b/drivers/hwtracing/intel_th/pci.c
@@ -46,8 +46,6 @@
if (IS_ERR(th))
return PTR_ERR(th);
- pci_set_drvdata(pdev, th);
-
return 0;
}
@@ -67,6 +65,16 @@
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa126),
.driver_data = (kernel_ulong_t)0,
},
+ {
+ /* Apollo Lake */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x5a8e),
+ .driver_data = (kernel_ulong_t)0,
+ },
+ {
+ /* Broxton */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0a80),
+ .driver_data = (kernel_ulong_t)0,
+ },
{ 0 },
};
diff --git a/drivers/hwtracing/intel_th/sth.c b/drivers/hwtracing/intel_th/sth.c
index 56101c3..e1aee61 100644
--- a/drivers/hwtracing/intel_th/sth.c
+++ b/drivers/hwtracing/intel_th/sth.c
@@ -94,10 +94,13 @@
case STP_PACKET_TRIG:
if (flags & STP_PACKET_TIMESTAMPED)
reg += 4;
- iowrite8(*payload, sth->base + reg);
+ writeb_relaxed(*payload, sth->base + reg);
break;
case STP_PACKET_MERR:
+ if (size > 4)
+ size = 4;
+
sth_iowrite(&out->MERR, payload, size);
break;
@@ -107,8 +110,8 @@
else
outp = (u64 __iomem *)&out->FLAG;
- size = 1;
- sth_iowrite(outp, payload, size);
+ size = 0;
+ writeb_relaxed(0, outp);
break;
case STP_PACKET_USER:
@@ -129,6 +132,8 @@
sth_iowrite(outp, payload, size);
break;
+ default:
+ return -ENOTSUPP;
}
return size;
diff --git a/drivers/hwtracing/stm/Kconfig b/drivers/hwtracing/stm/Kconfig
index 83e9f59..847a39b 100644
--- a/drivers/hwtracing/stm/Kconfig
+++ b/drivers/hwtracing/stm/Kconfig
@@ -1,6 +1,7 @@
config STM
tristate "System Trace Module devices"
select CONFIGFS_FS
+ select SRCU
help
A System Trace Module (STM) is a device exporting data in System
Trace Protocol (STP) format as defined by MIPI STP standards.
@@ -8,6 +9,8 @@
Say Y here to enable System Trace Module device support.
+if STM
+
config STM_DUMMY
tristate "Dummy STM driver"
help
@@ -24,3 +27,16 @@
If you want to send kernel console messages over STM devices,
say Y.
+
+config STM_SOURCE_HEARTBEAT
+ tristate "Heartbeat over STM devices"
+ help
+ This is a kernel space trace source that sends periodic
+ heartbeat messages to trace hosts over STM devices. It is
+ also useful for testing stm class drivers and the stm class
+ framework itself.
+
+ If you want to send heartbeat messages over STM devices,
+ say Y.
+
+endif
diff --git a/drivers/hwtracing/stm/Makefile b/drivers/hwtracing/stm/Makefile
index f9312c3..a9ce3d4 100644
--- a/drivers/hwtracing/stm/Makefile
+++ b/drivers/hwtracing/stm/Makefile
@@ -5,5 +5,7 @@
obj-$(CONFIG_STM_DUMMY) += dummy_stm.o
obj-$(CONFIG_STM_SOURCE_CONSOLE) += stm_console.o
+obj-$(CONFIG_STM_SOURCE_HEARTBEAT) += stm_heartbeat.o
stm_console-y := console.o
+stm_heartbeat-y := heartbeat.o
diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
index b6445d9..de80d45 100644
--- a/drivers/hwtracing/stm/core.c
+++ b/drivers/hwtracing/stm/core.c
@@ -113,6 +113,7 @@
stm = to_stm_device(dev);
if (!try_module_get(stm->owner)) {
+ /* matches class_find_device() above */
put_device(dev);
return NULL;
}
@@ -125,7 +126,7 @@
* @stm: stm device, previously acquired by stm_find_device()
*
* This drops the module reference and device reference taken by
- * stm_find_device().
+ * stm_find_device() or stm_char_open().
*/
void stm_put_device(struct stm_device *stm)
{
@@ -185,6 +186,9 @@
{
struct stp_master *master = stm_master(stm, output->master);
+ lockdep_assert_held(&stm->mc_lock);
+ lockdep_assert_held(&output->lock);
+
if (WARN_ON_ONCE(master->nr_free < output->nr_chans))
return;
@@ -199,6 +203,9 @@
{
struct stp_master *master = stm_master(stm, output->master);
+ lockdep_assert_held(&stm->mc_lock);
+ lockdep_assert_held(&output->lock);
+
bitmap_release_region(&master->chan_map[0], output->channel,
ilog2(output->nr_chans));
@@ -233,7 +240,7 @@
return -1;
}
-static unsigned int
+static int
stm_find_master_chan(struct stm_device *stm, unsigned int width,
unsigned int *mstart, unsigned int mend,
unsigned int *cstart, unsigned int cend)
@@ -288,12 +295,13 @@
}
spin_lock(&stm->mc_lock);
+ spin_lock(&output->lock);
/* output is already assigned -- shouldn't happen */
if (WARN_ON_ONCE(output->nr_chans))
goto unlock;
ret = stm_find_master_chan(stm, width, &midx, mend, &cidx, cend);
- if (ret)
+ if (ret < 0)
goto unlock;
output->master = midx;
@@ -304,6 +312,7 @@
ret = 0;
unlock:
+ spin_unlock(&output->lock);
spin_unlock(&stm->mc_lock);
return ret;
@@ -312,11 +321,18 @@
static void stm_output_free(struct stm_device *stm, struct stm_output *output)
{
spin_lock(&stm->mc_lock);
+ spin_lock(&output->lock);
if (output->nr_chans)
stm_output_disclaim(stm, output);
+ spin_unlock(&output->lock);
spin_unlock(&stm->mc_lock);
}
+static void stm_output_init(struct stm_output *output)
+{
+ spin_lock_init(&output->lock);
+}
+
static int major_match(struct device *dev, const void *data)
{
unsigned int major = *(unsigned int *)data;
@@ -339,6 +355,7 @@
if (!stmf)
return -ENOMEM;
+ stm_output_init(&stmf->output);
stmf->stm = to_stm_device(dev);
if (!try_module_get(stmf->stm->owner))
@@ -349,6 +366,8 @@
return nonseekable_open(inode, file);
err_free:
+ /* matches class_find_device() above */
+ put_device(dev);
kfree(stmf);
return err;
@@ -357,9 +376,19 @@
static int stm_char_release(struct inode *inode, struct file *file)
{
struct stm_file *stmf = file->private_data;
+ struct stm_device *stm = stmf->stm;
- stm_output_free(stmf->stm, &stmf->output);
- stm_put_device(stmf->stm);
+ if (stm->data->unlink)
+ stm->data->unlink(stm->data, stmf->output.master,
+ stmf->output.channel);
+
+ stm_output_free(stm, &stmf->output);
+
+ /*
+ * matches the stm_char_open()'s
+ * class_find_device() + try_module_get()
+ */
+ stm_put_device(stm);
kfree(stmf);
return 0;
@@ -380,8 +409,8 @@
return ret;
}
-static void stm_write(struct stm_data *data, unsigned int master,
- unsigned int channel, const char *buf, size_t count)
+static ssize_t stm_write(struct stm_data *data, unsigned int master,
+ unsigned int channel, const char *buf, size_t count)
{
unsigned int flags = STP_PACKET_TIMESTAMPED;
const unsigned char *p = buf, nil = 0;
@@ -393,9 +422,14 @@
sz = data->packet(data, master, channel, STP_PACKET_DATA, flags,
sz, p);
flags = 0;
+
+ if (sz < 0)
+ break;
}
data->packet(data, master, channel, STP_PACKET_FLAG, 0, 0, &nil);
+
+ return pos;
}
static ssize_t stm_char_write(struct file *file, const char __user *buf,
@@ -406,6 +440,9 @@
char *kbuf;
int err;
+ if (count + 1 > PAGE_SIZE)
+ count = PAGE_SIZE - 1;
+
/*
* if no m/c have been assigned to this writer up to this
* point, use "default" policy entry
@@ -430,8 +467,8 @@
return -EFAULT;
}
- stm_write(stm->data, stmf->output.master, stmf->output.channel, kbuf,
- count);
+ count = stm_write(stm->data, stmf->output.master, stmf->output.channel,
+ kbuf, count);
kfree(kbuf);
@@ -515,10 +552,8 @@
ret = stm->data->link(stm->data, stmf->output.master,
stmf->output.channel);
- if (ret) {
+ if (ret)
stm_output_free(stmf->stm, &stmf->output);
- stm_put_device(stmf->stm);
- }
err_free:
kfree(id);
@@ -618,7 +653,7 @@
if (!stm_data->packet || !stm_data->sw_nchannels)
return -EINVAL;
- nmasters = stm_data->sw_end - stm_data->sw_start;
+ nmasters = stm_data->sw_end - stm_data->sw_start + 1;
stm = kzalloc(sizeof(*stm) + nmasters * sizeof(void *), GFP_KERNEL);
if (!stm)
return -ENOMEM;
@@ -641,6 +676,7 @@
if (err)
goto err_device;
+ mutex_init(&stm->link_mutex);
spin_lock_init(&stm->link_lock);
INIT_LIST_HEAD(&stm->link_list);
@@ -654,6 +690,7 @@
return 0;
err_device:
+ /* matches device_initialize() above */
put_device(&stm->dev);
err_free:
kfree(stm);
@@ -662,20 +699,28 @@
}
EXPORT_SYMBOL_GPL(stm_register_device);
-static void __stm_source_link_drop(struct stm_source_device *src,
- struct stm_device *stm);
+static int __stm_source_link_drop(struct stm_source_device *src,
+ struct stm_device *stm);
void stm_unregister_device(struct stm_data *stm_data)
{
struct stm_device *stm = stm_data->stm;
struct stm_source_device *src, *iter;
- int i;
+ int i, ret;
- spin_lock(&stm->link_lock);
+ mutex_lock(&stm->link_mutex);
list_for_each_entry_safe(src, iter, &stm->link_list, link_entry) {
- __stm_source_link_drop(src, stm);
+ ret = __stm_source_link_drop(src, stm);
+ /*
+ * The src <-> stm link must not change while we hold
+ * stm::link_mutex, so complain loudly if it has; in this
+ * situation ret != 0 also means this src is not connected
+ * to this stm, so it should otherwise be safe to proceed
+ * with the tear-down of stm.
+ */
+ WARN_ON_ONCE(ret);
}
- spin_unlock(&stm->link_lock);
+ mutex_unlock(&stm->link_mutex);
synchronize_srcu(&stm_source_srcu);
@@ -686,7 +731,7 @@
stp_policy_unbind(stm->policy);
mutex_unlock(&stm->policy_mutex);
- for (i = 0; i < stm->sw_nmasters; i++)
+ for (i = stm->data->sw_start; i <= stm->data->sw_end; i++)
stp_master_free(stm, i);
device_unregister(&stm->dev);
@@ -694,6 +739,17 @@
}
EXPORT_SYMBOL_GPL(stm_unregister_device);
+/*
+ * stm::link_list access serialization uses a spinlock and a mutex; holding
+ * either of them guarantees that the list is stable; modification requires
+ * holding both of them.
+ *
+ * Lock ordering is as follows:
+ * stm::link_mutex
+ * stm::link_lock
+ * src::link_lock
+ */
+
/**
* stm_source_link_add() - connect an stm_source device to an stm device
* @src: stm_source device
@@ -710,6 +766,7 @@
char *id;
int err;
+ mutex_lock(&stm->link_mutex);
spin_lock(&stm->link_lock);
spin_lock(&src->link_lock);
@@ -719,6 +776,7 @@
spin_unlock(&src->link_lock);
spin_unlock(&stm->link_lock);
+ mutex_unlock(&stm->link_mutex);
id = kstrdup(src->data->name, GFP_KERNEL);
if (id) {
@@ -753,9 +811,9 @@
fail_free_output:
stm_output_free(stm, &src->output);
- stm_put_device(stm);
fail_detach:
+ mutex_lock(&stm->link_mutex);
spin_lock(&stm->link_lock);
spin_lock(&src->link_lock);
@@ -764,6 +822,7 @@
spin_unlock(&src->link_lock);
spin_unlock(&stm->link_lock);
+ mutex_unlock(&stm->link_mutex);
return err;
}
@@ -776,28 +835,55 @@
* If @stm is @src::link, disconnect them from one another and put the
* reference on the @stm device.
*
- * Caller must hold stm::link_lock.
+ * Caller must hold stm::link_mutex.
*/
-static void __stm_source_link_drop(struct stm_source_device *src,
- struct stm_device *stm)
+static int __stm_source_link_drop(struct stm_source_device *src,
+ struct stm_device *stm)
{
struct stm_device *link;
+ int ret = 0;
+ lockdep_assert_held(&stm->link_mutex);
+
+ /* for stm::link_list modification, we hold both mutex and spinlock */
+ spin_lock(&stm->link_lock);
spin_lock(&src->link_lock);
link = srcu_dereference_check(src->link, &stm_source_srcu, 1);
- if (WARN_ON_ONCE(link != stm)) {
- spin_unlock(&src->link_lock);
- return;
+
+ /*
+ * The linked device may have changed since we last looked, because
+ * we weren't holding the src::link_lock back then; if this is the
+ * case, tell the caller to retry.
+ */
+ if (link != stm) {
+ ret = -EAGAIN;
+ goto unlock;
}
stm_output_free(link, &src->output);
- /* caller must hold stm::link_lock */
list_del_init(&src->link_entry);
/* matches stm_find_device() from stm_source_link_store() */
stm_put_device(link);
rcu_assign_pointer(src->link, NULL);
+unlock:
spin_unlock(&src->link_lock);
+ spin_unlock(&stm->link_lock);
+
+ /*
+ * Call the unlink callbacks for both source and stm, when we know
+ * that we have actually performed the unlinking.
+ */
+ if (!ret) {
+ if (src->data->unlink)
+ src->data->unlink(src->data);
+
+ if (stm->data->unlink)
+ stm->data->unlink(stm->data, src->output.master,
+ src->output.channel);
+ }
+
+ return ret;
}
/**
@@ -813,21 +899,29 @@
static void stm_source_link_drop(struct stm_source_device *src)
{
struct stm_device *stm;
- int idx;
+ int idx, ret;
+retry:
idx = srcu_read_lock(&stm_source_srcu);
+ /*
+ * The stm device will be valid for the duration of this
+ * read section, but the link may change before we grab
+ * the src::link_lock in __stm_source_link_drop().
+ */
stm = srcu_dereference(src->link, &stm_source_srcu);
+ ret = 0;
if (stm) {
- if (src->data->unlink)
- src->data->unlink(src->data);
-
- spin_lock(&stm->link_lock);
- __stm_source_link_drop(src, stm);
- spin_unlock(&stm->link_lock);
+ mutex_lock(&stm->link_mutex);
+ ret = __stm_source_link_drop(src, stm);
+ mutex_unlock(&stm->link_mutex);
}
srcu_read_unlock(&stm_source_srcu, idx);
+
+ /* if it did change, retry */
+ if (ret == -EAGAIN)
+ goto retry;
}
static ssize_t stm_source_link_show(struct device *dev,
@@ -862,8 +956,10 @@
return -EINVAL;
err = stm_source_link_add(src, link);
- if (err)
+ if (err) {
+ /* matches the stm_find_device() above */
stm_put_device(link);
+ }
return err ? : count;
}
@@ -925,6 +1021,7 @@
if (err)
goto err;
+ stm_output_init(&src->output);
spin_lock_init(&src->link_lock);
INIT_LIST_HEAD(&src->link_entry);
src->data = data;
@@ -973,9 +1070,9 @@
stm = srcu_dereference(src->link, &stm_source_srcu);
if (stm)
- stm_write(stm->data, src->output.master,
- src->output.channel + chan,
- buf, count);
+ count = stm_write(stm->data, src->output.master,
+ src->output.channel + chan,
+ buf, count);
else
count = -ENODEV;
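[The locking rework in stm/core.c above is easiest to read against the ordering rule documented in the new comment: either stm::link_mutex or stm::link_lock stabilises stm::link_list for walking, but modification takes the mutex, then the stm spinlock, then the source's spinlock. A minimal sketch of a list modification under that ordering, assuming the stm_device/stm_source_device layout from stm.h:]

#include <linux/mutex.h>
#include <linux/spinlock.h>
#include "stm.h"	/* struct stm_device, struct stm_source_device */

/*
 * Illustrative only: take the three locks in the documented order
 * before touching stm::link_list. Holding either stm->link_mutex or
 * stm->link_lock is enough to *walk* the list; changing it needs all
 * of them, outermost first.
 */
static void stm_link_list_modify(struct stm_device *stm,
				 struct stm_source_device *src)
{
	mutex_lock(&stm->link_mutex);	/* 1: sleepable, taken first */
	spin_lock(&stm->link_lock);	/* 2: stabilizes the list */
	spin_lock(&src->link_lock);	/* 3: protects src->link */

	/* ... add src to or remove src from stm->link_list here ... */

	spin_unlock(&src->link_lock);
	spin_unlock(&stm->link_lock);
	mutex_unlock(&stm->link_mutex);
}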
diff --git a/drivers/hwtracing/stm/dummy_stm.c b/drivers/hwtracing/stm/dummy_stm.c
index 3709bef..310adf5 100644
--- a/drivers/hwtracing/stm/dummy_stm.c
+++ b/drivers/hwtracing/stm/dummy_stm.c
@@ -40,22 +40,75 @@
return size;
}
-static struct stm_data dummy_stm = {
- .name = "dummy_stm",
- .sw_start = 0x0000,
- .sw_end = 0xffff,
- .sw_nchannels = 0xffff,
- .packet = dummy_stm_packet,
-};
+#define DUMMY_STM_MAX 32
+
+static struct stm_data dummy_stm[DUMMY_STM_MAX];
+
+static int nr_dummies = 4;
+
+module_param(nr_dummies, int, 0600);
+
+static unsigned int dummy_stm_nr;
+
+static unsigned int fail_mode;
+
+module_param(fail_mode, int, 0600);
+
+static int dummy_stm_link(struct stm_data *data, unsigned int master,
+ unsigned int channel)
+{
+ if (fail_mode && (channel & fail_mode))
+ return -EINVAL;
+
+ return 0;
+}
static int dummy_stm_init(void)
{
- return stm_register_device(NULL, &dummy_stm, THIS_MODULE);
+ int i, ret = -ENOMEM, __nr_dummies = ACCESS_ONCE(nr_dummies);
+
+ if (__nr_dummies < 0 || __nr_dummies > DUMMY_STM_MAX)
+ return -EINVAL;
+
+ for (i = 0; i < __nr_dummies; i++) {
+ dummy_stm[i].name = kasprintf(GFP_KERNEL, "dummy_stm.%d", i);
+ if (!dummy_stm[i].name)
+ goto fail_unregister;
+
+ dummy_stm[i].sw_start = 0x0000;
+ dummy_stm[i].sw_end = 0xffff;
+ dummy_stm[i].sw_nchannels = 0xffff;
+ dummy_stm[i].packet = dummy_stm_packet;
+ dummy_stm[i].link = dummy_stm_link;
+
+ ret = stm_register_device(NULL, &dummy_stm[i], THIS_MODULE);
+ if (ret)
+ goto fail_free;
+ }
+
+ dummy_stm_nr = __nr_dummies;
+
+ return 0;
+
+fail_unregister:
+ for (i--; i >= 0; i--) {
+ stm_unregister_device(&dummy_stm[i]);
+fail_free:
+ kfree(dummy_stm[i].name);
+ }
+
+ return ret;
+
}
static void dummy_stm_exit(void)
{
- stm_unregister_device(&dummy_stm);
+ int i;
+
+ for (i = 0; i < dummy_stm_nr; i++) {
+ stm_unregister_device(&dummy_stm[i]);
+ kfree(dummy_stm[i].name);
+ }
}
module_init(dummy_stm_init);
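[The unwind labels in dummy_stm_init() above are worth a second look: 'goto fail_free' jumps into the middle of the rollback loop so the instance whose registration just failed gets its name freed but is not unregistered, while every earlier instance gets both. A standalone sketch of the same jump-into-loop cleanup, with register_one() as a made-up stand-in that fails on the third instance:]

#include <stdio.h>
#include <stdlib.h>

#define NR 4

static char *names[NR];

/* Simulate stm_register_device() failing on the third instance. */
static int register_one(int i)
{
	return i == 2 ? -1 : 0;
}

int main(void)
{
	int i, ret = -1;

	for (i = 0; i < NR; i++) {
		names[i] = malloc(16);
		if (!names[i])
			goto fail_unregister;
		snprintf(names[i], 16, "dummy.%d", i);

		if (register_one(i))
			goto fail_free;	/* current slot: free the name only */
	}
	return 0;

fail_unregister:
	for (i--; i >= 0; i--) {
		printf("unregister %s\n", names[i]);
fail_free:
		free(names[i]);
	}
	return ret;
}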
diff --git a/drivers/hwtracing/stm/heartbeat.c b/drivers/hwtracing/stm/heartbeat.c
new file mode 100644
index 0000000..0133571
--- /dev/null
+++ b/drivers/hwtracing/stm/heartbeat.c
@@ -0,0 +1,130 @@
+/*
+ * Simple heartbeat STM source driver
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * Heartbeat STM source will send periodic messages over STM devices to a
+ * trace host.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/hrtimer.h>
+#include <linux/slab.h>
+#include <linux/stm.h>
+
+#define STM_HEARTBEAT_MAX 32
+
+static int nr_devs = 4;
+static int interval_ms = 10;
+
+module_param(nr_devs, int, 0600);
+module_param(interval_ms, int, 0600);
+
+static struct stm_heartbeat {
+ struct stm_source_data data;
+ struct hrtimer hrtimer;
+ unsigned int active;
+} stm_heartbeat[STM_HEARTBEAT_MAX];
+
+static unsigned int nr_instances;
+
+static const char str[] = "heartbeat stm source driver is here to serve you";
+
+static enum hrtimer_restart stm_heartbeat_hrtimer_handler(struct hrtimer *hr)
+{
+ struct stm_heartbeat *heartbeat = container_of(hr, struct stm_heartbeat,
+ hrtimer);
+
+ stm_source_write(&heartbeat->data, 0, str, sizeof str);
+ if (heartbeat->active)
+ hrtimer_forward_now(hr, ms_to_ktime(interval_ms));
+
+ return heartbeat->active ? HRTIMER_RESTART : HRTIMER_NORESTART;
+}
+
+static int stm_heartbeat_link(struct stm_source_data *data)
+{
+ struct stm_heartbeat *heartbeat =
+ container_of(data, struct stm_heartbeat, data);
+
+ heartbeat->active = 1;
+ hrtimer_start(&heartbeat->hrtimer, ms_to_ktime(interval_ms),
+ HRTIMER_MODE_ABS);
+
+ return 0;
+}
+
+static void stm_heartbeat_unlink(struct stm_source_data *data)
+{
+ struct stm_heartbeat *heartbeat =
+ container_of(data, struct stm_heartbeat, data);
+
+ heartbeat->active = 0;
+ hrtimer_cancel(&heartbeat->hrtimer);
+}
+
+static int stm_heartbeat_init(void)
+{
+ int i, ret = -ENOMEM, __nr_instances = ACCESS_ONCE(nr_devs);
+
+ if (__nr_instances < 0 || __nr_instances > STM_HEARTBEAT_MAX)
+ return -EINVAL;
+
+ for (i = 0; i < __nr_instances; i++) {
+ stm_heartbeat[i].data.name =
+ kasprintf(GFP_KERNEL, "heartbeat.%d", i);
+ if (!stm_heartbeat[i].data.name)
+ goto fail_unregister;
+
+ stm_heartbeat[i].data.nr_chans = 1;
+ stm_heartbeat[i].data.link = stm_heartbeat_link;
+ stm_heartbeat[i].data.unlink = stm_heartbeat_unlink;
+ hrtimer_init(&stm_heartbeat[i].hrtimer, CLOCK_MONOTONIC,
+ HRTIMER_MODE_ABS);
+ stm_heartbeat[i].hrtimer.function =
+ stm_heartbeat_hrtimer_handler;
+
+ ret = stm_source_register_device(NULL, &stm_heartbeat[i].data);
+ if (ret)
+ goto fail_free;
+ }
+
+ nr_instances = __nr_instances;
+
+ return 0;
+
+fail_unregister:
+ for (i--; i >= 0; i--) {
+ stm_source_unregister_device(&stm_heartbeat[i].data);
+fail_free:
+ kfree(stm_heartbeat[i].data.name);
+ }
+
+ return ret;
+}
+
+static void stm_heartbeat_exit(void)
+{
+ int i;
+
+ for (i = 0; i < nr_instances; i++) {
+ stm_source_unregister_device(&stm_heartbeat[i].data);
+ kfree(stm_heartbeat[i].data.name);
+ }
+}
+
+module_init(stm_heartbeat_init);
+module_exit(stm_heartbeat_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("stm_heartbeat driver");
+MODULE_AUTHOR("Alexander Shishkin <alexander.shishkin@linux.intel.com>");
diff --git a/drivers/hwtracing/stm/policy.c b/drivers/hwtracing/stm/policy.c
index 11ab6d0..1db1896 100644
--- a/drivers/hwtracing/stm/policy.c
+++ b/drivers/hwtracing/stm/policy.c
@@ -272,13 +272,17 @@
{
struct stm_device *stm = policy->stm;
+ /*
+ * stp_policy_release() will not call here if the policy is already
+ * unbound; other users should not either, as no link exists between
+ * this policy and anything else in that case
+ */
if (WARN_ON_ONCE(!policy->stm))
return;
- mutex_lock(&stm->policy_mutex);
- stm->policy = NULL;
- mutex_unlock(&stm->policy_mutex);
+ lockdep_assert_held(&stm->policy_mutex);
+ stm->policy = NULL;
policy->stm = NULL;
stm_put_device(stm);
@@ -287,8 +291,16 @@
static void stp_policy_release(struct config_item *item)
{
struct stp_policy *policy = to_stp_policy(item);
+ struct stm_device *stm = policy->stm;
+ /* a policy *can* be unbound and still exist in configfs tree */
+ if (!stm)
+ return;
+
+ mutex_lock(&stm->policy_mutex);
stp_policy_unbind(policy);
+ mutex_unlock(&stm->policy_mutex);
+
kfree(policy);
}
@@ -320,10 +332,11 @@
/*
* node must look like <device_name>.<policy_name>, where
- * <device_name> is the name of an existing stm device and
- * <policy_name> is an arbitrary string
+ * <device_name> is the name of an existing stm device; may
+ * contain dots;
+ * <policy_name> is an arbitrary string; may not contain dots
*/
- p = strchr(devname, '.');
+ p = strrchr(devname, '.');
if (!p) {
kfree(devname);
return ERR_PTR(-EINVAL);
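[The strchr() to strrchr() switch above means the policy node name is now split at the last dot, so stm device names that legitimately contain dots (such as the per-instance dummy_stm.N names introduced earlier in this series) keep working. A small userspace sketch of the same parse, with split_policy_node() as a hypothetical helper:]

#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper mirroring the strrchr() change: split
 * "<device_name>.<policy_name>" at the *last* dot, so device names that
 * themselves contain dots (e.g. "dummy_stm.0") survive intact.
 */
static int split_policy_node(char *node, const char **devname,
			     const char **policy)
{
	char *p = strrchr(node, '.');

	if (!p || p == node || !p[1])
		return -1;

	*p = '\0';
	*devname = node;
	*policy = p + 1;
	return 0;
}

int main(void)
{
	char node[] = "dummy_stm.0.my-policy";
	const char *dev, *pol;

	if (!split_policy_node(node, &dev, &pol))
		printf("device='%s' policy='%s'\n", dev, pol);
	return 0;
}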
diff --git a/drivers/hwtracing/stm/stm.h b/drivers/hwtracing/stm/stm.h
index 95ece02..4e8c6926 100644
--- a/drivers/hwtracing/stm/stm.h
+++ b/drivers/hwtracing/stm/stm.h
@@ -45,6 +45,7 @@
int major;
unsigned int sw_nmasters;
struct stm_data *data;
+ struct mutex link_mutex;
spinlock_t link_lock;
struct list_head link_list;
/* master allocation */
@@ -56,6 +57,7 @@
container_of((_d), struct stm_device, dev)
struct stm_output {
+ spinlock_t lock;
unsigned int master;
unsigned int channel;
unsigned int nr_chans;
diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
index f62d697..27fa0cb 100644
--- a/drivers/i2c/busses/i2c-i801.c
+++ b/drivers/i2c/busses/i2c-i801.c
@@ -1271,6 +1271,8 @@
switch (dev->device) {
case PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_SMBUS:
case PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_SMBUS:
+ case PCI_DEVICE_ID_INTEL_LEWISBURG_SMBUS:
+ case PCI_DEVICE_ID_INTEL_LEWISBURG_SSKU_SMBUS:
case PCI_DEVICE_ID_INTEL_DNV_SMBUS:
priv->features |= FEATURE_I2C_BLOCK_READ;
priv->features |= FEATURE_IRQ;
diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
index 08d26ba..13c4529 100644
--- a/drivers/i2c/busses/i2c-omap.c
+++ b/drivers/i2c/busses/i2c-omap.c
@@ -1450,7 +1450,8 @@
err_unuse_clocks:
omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0);
- pm_runtime_put(omap->dev);
+ pm_runtime_dont_use_autosuspend(omap->dev);
+ pm_runtime_put_sync(omap->dev);
pm_runtime_disable(&pdev->dev);
err_free_mem:
@@ -1468,6 +1469,7 @@
return ret;
omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0);
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
return 0;
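[Both omap paths above gain pm_runtime_dont_use_autosuspend() before the synchronous put: with autosuspend active, a bare put only schedules a delayed suspend, so the device could still be powered when the driver goes away. A minimal teardown sketch under that assumption:]

#include <linux/pm_runtime.h>

/*
 * Illustrative teardown for a driver that enabled autosuspend in
 * probe(): drop autosuspend first so the synchronous put idles the
 * device immediately instead of after the autosuspend delay.
 */
static void my_runtime_teardown(struct device *dev)
{
	pm_runtime_dont_use_autosuspend(dev);
	pm_runtime_put_sync(dev);
	pm_runtime_disable(dev);
}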
diff --git a/drivers/i2c/busses/i2c-uniphier-f.c b/drivers/i2c/busses/i2c-uniphier-f.c
index f3e5ff8..213ba55 100644
--- a/drivers/i2c/busses/i2c-uniphier-f.c
+++ b/drivers/i2c/busses/i2c-uniphier-f.c
@@ -467,7 +467,7 @@
bus_speed = UNIPHIER_FI2C_DEFAULT_SPEED;
if (!bus_speed) {
- dev_err(dev, "clock-freqyency should not be zero\n");
+ dev_err(dev, "clock-frequency should not be zero\n");
return -EINVAL;
}
diff --git a/drivers/i2c/busses/i2c-uniphier.c b/drivers/i2c/busses/i2c-uniphier.c
index 1f4f3f5..89eaa8a7 100644
--- a/drivers/i2c/busses/i2c-uniphier.c
+++ b/drivers/i2c/busses/i2c-uniphier.c
@@ -328,7 +328,7 @@
bus_speed = UNIPHIER_I2C_DEFAULT_SPEED;
if (!bus_speed) {
- dev_err(dev, "clock-freqyency should not be zero\n");
+ dev_err(dev, "clock-frequency should not be zero\n");
return -EINVAL;
}
diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
index 3de9351..14606af 100644
--- a/drivers/infiniband/core/sysfs.c
+++ b/drivers/infiniband/core/sysfs.c
@@ -336,7 +336,6 @@
union ib_gid gid;
struct ib_gid_attr gid_attr = {};
ssize_t ret;
- va_list args;
ret = ib_query_gid(p->ibdev, p->port_num, tab_attr->index, &gid,
&gid_attr);
@@ -348,7 +347,6 @@
err:
if (gid_attr.ndev)
dev_put(gid_attr.ndev);
- va_end(args);
return ret;
}
@@ -722,12 +720,11 @@
if (get_perf_mad(dev, port_num, IB_PMA_CLASS_PORT_INFO,
&cpi, 40, sizeof(cpi)) >= 0) {
-
- if (cpi.capability_mask && IB_PMA_CLASS_CAP_EXT_WIDTH)
+ if (cpi.capability_mask & IB_PMA_CLASS_CAP_EXT_WIDTH)
/* We have extended counters */
return &pma_group_ext;
- if (cpi.capability_mask && IB_PMA_CLASS_CAP_EXT_WIDTH_NOIETF)
+ if (cpi.capability_mask & IB_PMA_CLASS_CAP_EXT_WIDTH_NOIETF)
/* But not the IETF ones */
return &pma_group_noietf;
}
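[The sysfs.c fix above is the classic '&&' versus '&' slip: capability_mask is a bitmask, and the logical AND made any device with a non-zero mask appear to support extended counters. A standalone demonstration, with an illustrative bit value standing in for IB_PMA_CLASS_CAP_EXT_WIDTH:]

#include <stdio.h>

#define IB_PMA_CLASS_CAP_EXT_WIDTH (1 << 9)	/* illustrative bit value */

int main(void)
{
	/* Some unrelated capability set; extended width NOT supported. */
	unsigned int capability_mask = 1 << 0;

	/* Buggy: '&&' only asks "is the mask non-zero?" -- true here. */
	if (capability_mask && IB_PMA_CLASS_CAP_EXT_WIDTH)
		printf("logical AND: wrongly reports extended counters\n");

	/* Fixed: '&' tests the specific capability bit. */
	if (capability_mask & IB_PMA_CLASS_CAP_EXT_WIDTH)
		printf("bitwise AND: extended counters supported\n");
	else
		printf("bitwise AND: correctly reports no extended counters\n");

	return 0;
}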
diff --git a/drivers/infiniband/hw/mlx4/mad.c b/drivers/infiniband/hw/mlx4/mad.c
index 26833bf..d68f506 100644
--- a/drivers/infiniband/hw/mlx4/mad.c
+++ b/drivers/infiniband/hw/mlx4/mad.c
@@ -817,17 +817,48 @@
return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
}
-static void edit_counter(struct mlx4_counter *cnt,
- struct ib_pma_portcounters *pma_cnt)
+static void edit_counter(struct mlx4_counter *cnt, void *counters,
+ __be16 attr_id)
{
- ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_data,
- (be64_to_cpu(cnt->tx_bytes) >> 2));
- ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data,
- (be64_to_cpu(cnt->rx_bytes) >> 2));
- ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_packets,
- be64_to_cpu(cnt->tx_frames));
- ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_packets,
- be64_to_cpu(cnt->rx_frames));
+ switch (attr_id) {
+ case IB_PMA_PORT_COUNTERS:
+ {
+ struct ib_pma_portcounters *pma_cnt =
+ (struct ib_pma_portcounters *)counters;
+
+ ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_data,
+ (be64_to_cpu(cnt->tx_bytes) >> 2));
+ ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_data,
+ (be64_to_cpu(cnt->rx_bytes) >> 2));
+ ASSIGN_32BIT_COUNTER(pma_cnt->port_xmit_packets,
+ be64_to_cpu(cnt->tx_frames));
+ ASSIGN_32BIT_COUNTER(pma_cnt->port_rcv_packets,
+ be64_to_cpu(cnt->rx_frames));
+ break;
+ }
+ case IB_PMA_PORT_COUNTERS_EXT:
+ {
+ struct ib_pma_portcounters_ext *pma_cnt_ext =
+ (struct ib_pma_portcounters_ext *)counters;
+
+ pma_cnt_ext->port_xmit_data =
+ cpu_to_be64(be64_to_cpu(cnt->tx_bytes) >> 2);
+ pma_cnt_ext->port_rcv_data =
+ cpu_to_be64(be64_to_cpu(cnt->rx_bytes) >> 2);
+ pma_cnt_ext->port_xmit_packets = cnt->tx_frames;
+ pma_cnt_ext->port_rcv_packets = cnt->rx_frames;
+ break;
+ }
+ }
+}
+
+static int iboe_process_mad_port_info(void *out_mad)
+{
+ struct ib_class_port_info cpi = {};
+
+ cpi.capability_mask = IB_PMA_CLASS_CAP_EXT_WIDTH;
+ memcpy(out_mad, &cpi, sizeof(cpi));
+ return IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
}
static int iboe_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
@@ -842,6 +873,9 @@
if (in_mad->mad_hdr.mgmt_class != IB_MGMT_CLASS_PERF_MGMT)
return -EINVAL;
+ if (in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO)
+ return iboe_process_mad_port_info((void *)(out_mad->data + 40));
+
memset(&counter_stats, 0, sizeof(counter_stats));
mutex_lock(&dev->counters_table[port_num - 1].mutex);
list_for_each_entry(tmp_counter,
@@ -863,7 +897,8 @@
switch (counter_stats.counter_mode & 0xf) {
case 0:
edit_counter(&counter_stats,
- (void *)(out_mad->data + 40));
+ (void *)(out_mad->data + 40),
+ in_mad->mad_hdr.attr_id);
err = IB_MAD_RESULT_SUCCESS | IB_MAD_RESULT_REPLY;
break;
default:
@@ -894,8 +929,10 @@
*/
if (link == IB_LINK_LAYER_INFINIBAND) {
if (mlx4_is_slave(dev->dev) &&
- in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT &&
- in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS)
+ (in_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_PERF_MGMT &&
+ (in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS ||
+ in_mad->mad_hdr.attr_id == IB_PMA_PORT_COUNTERS_EXT ||
+ in_mad->mad_hdr.attr_id == IB_PMA_CLASS_PORT_INFO)))
return iboe_process_mad(ibdev, mad_flags, port_num, in_wc,
in_grh, in_mad, out_mad);
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index bc5536f..fd97534 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -1681,9 +1681,12 @@
}
if (qp->ibqp.uobject)
- context->usr_page = cpu_to_be32(to_mucontext(ibqp->uobject->context)->uar.index);
+ context->usr_page = cpu_to_be32(
+ mlx4_to_hw_uar_index(dev->dev,
+ to_mucontext(ibqp->uobject->context)->uar.index));
else
- context->usr_page = cpu_to_be32(dev->priv_uar.index);
+ context->usr_page = cpu_to_be32(
+ mlx4_to_hw_uar_index(dev->dev, dev->priv_uar.index));
if (attr_mask & IB_QP_DEST_QPN)
context->remote_qpn = cpu_to_be32(attr->dest_qp_num);
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 9116bc3..34cb8e8 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -270,8 +270,10 @@
/* fall through */
case IB_QPT_RC:
size += sizeof(struct mlx5_wqe_ctrl_seg) +
- sizeof(struct mlx5_wqe_atomic_seg) +
- sizeof(struct mlx5_wqe_raddr_seg);
+ max(sizeof(struct mlx5_wqe_atomic_seg) +
+ sizeof(struct mlx5_wqe_raddr_seg),
+ sizeof(struct mlx5_wqe_umr_ctrl_seg) +
+ sizeof(struct mlx5_mkey_seg));
break;
case IB_QPT_XRC_TGT:
@@ -279,9 +281,9 @@
case IB_QPT_UC:
size += sizeof(struct mlx5_wqe_ctrl_seg) +
- sizeof(struct mlx5_wqe_raddr_seg) +
- sizeof(struct mlx5_wqe_umr_ctrl_seg) +
- sizeof(struct mlx5_mkey_seg);
+ max(sizeof(struct mlx5_wqe_raddr_seg),
+ sizeof(struct mlx5_wqe_umr_ctrl_seg) +
+ sizeof(struct mlx5_mkey_seg));
break;
case IB_QPT_UD:
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma.h b/drivers/infiniband/hw/ocrdma/ocrdma.h
index 040bb8b..12503f1 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma.h
+++ b/drivers/infiniband/hw/ocrdma/ocrdma.h
@@ -323,9 +323,6 @@
*/
u32 max_hw_cqe;
bool phase_change;
- bool deferred_arm, deferred_sol;
- bool first_arm;
-
spinlock_t cq_lock ____cacheline_aligned; /* provide synchronization
* to cq polling
*/
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_main.c b/drivers/infiniband/hw/ocrdma/ocrdma_main.c
index 5738493..f387430 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_main.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_main.c
@@ -228,6 +228,11 @@
ocrdma_alloc_pd_pool(dev);
+ if (!ocrdma_alloc_stats_resources(dev)) {
+ pr_err("%s: stats resource allocation failed\n", __func__);
+ goto alloc_err;
+ }
+
spin_lock_init(&dev->av_tbl.lock);
spin_lock_init(&dev->flush_q_lock);
return 0;
@@ -238,6 +243,7 @@
static void ocrdma_free_resources(struct ocrdma_dev *dev)
{
+ ocrdma_release_stats_resources(dev);
kfree(dev->stag_arr);
kfree(dev->qp_tbl);
kfree(dev->cq_tbl);
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_stats.c b/drivers/infiniband/hw/ocrdma/ocrdma_stats.c
index 86c303a..255f774 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_stats.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_stats.c
@@ -64,10 +64,11 @@
return cpy_len;
}
-static bool ocrdma_alloc_stats_mem(struct ocrdma_dev *dev)
+bool ocrdma_alloc_stats_resources(struct ocrdma_dev *dev)
{
struct stats_mem *mem = &dev->stats_mem;
+ mutex_init(&dev->stats_lock);
/* Alloc mbox command mem*/
mem->size = max_t(u32, sizeof(struct ocrdma_rdma_stats_req),
sizeof(struct ocrdma_rdma_stats_resp));
@@ -91,13 +92,14 @@
return true;
}
-static void ocrdma_release_stats_mem(struct ocrdma_dev *dev)
+void ocrdma_release_stats_resources(struct ocrdma_dev *dev)
{
struct stats_mem *mem = &dev->stats_mem;
if (mem->va)
dma_free_coherent(&dev->nic_info.pdev->dev, mem->size,
mem->va, mem->pa);
+ mem->va = NULL;
kfree(mem->debugfs_mem);
}
@@ -838,15 +840,9 @@
&dev->reset_stats, &ocrdma_dbg_ops))
goto err;
- /* Now create dma_mem for stats mbx command */
- if (!ocrdma_alloc_stats_mem(dev))
- goto err;
-
- mutex_init(&dev->stats_lock);
return;
err:
- ocrdma_release_stats_mem(dev);
debugfs_remove_recursive(dev->dir);
dev->dir = NULL;
}
@@ -855,9 +851,7 @@
{
if (!dev->dir)
return;
- debugfs_remove(dev->dir);
- mutex_destroy(&dev->stats_lock);
- ocrdma_release_stats_mem(dev);
+ debugfs_remove_recursive(dev->dir);
}
void ocrdma_init_debugfs(void)
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_stats.h b/drivers/infiniband/hw/ocrdma/ocrdma_stats.h
index c9e58d0..bba1fec 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_stats.h
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_stats.h
@@ -65,6 +65,8 @@
void ocrdma_rem_debugfs(void);
void ocrdma_init_debugfs(void);
+bool ocrdma_alloc_stats_resources(struct ocrdma_dev *dev);
+void ocrdma_release_stats_resources(struct ocrdma_dev *dev);
void ocrdma_rem_port_stats(struct ocrdma_dev *dev);
void ocrdma_add_port_stats(struct ocrdma_dev *dev);
int ocrdma_pma_counters(struct ocrdma_dev *dev,
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index d4c687b54..12420e4 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -125,8 +125,8 @@
IB_DEVICE_SYS_IMAGE_GUID |
IB_DEVICE_LOCAL_DMA_LKEY |
IB_DEVICE_MEM_MGT_EXTENSIONS;
- attr->max_sge = min(dev->attr.max_send_sge, dev->attr.max_srq_sge);
- attr->max_sge_rd = 0;
+ attr->max_sge = dev->attr.max_send_sge;
+ attr->max_sge_rd = attr->max_sge;
attr->max_cq = dev->attr.max_cq;
attr->max_cqe = dev->attr.max_cqe;
attr->max_mr = dev->attr.max_mr;
@@ -1094,7 +1094,6 @@
spin_lock_init(&cq->comp_handler_lock);
INIT_LIST_HEAD(&cq->sq_head);
INIT_LIST_HEAD(&cq->rq_head);
- cq->first_arm = true;
if (ib_ctx) {
uctx = get_ocrdma_ucontext(ib_ctx);
@@ -2726,8 +2725,7 @@
OCRDMA_CQE_UD_STATUS_MASK) >> OCRDMA_CQE_UD_STATUS_SHIFT;
ibwc->src_qp = le32_to_cpu(cqe->flags_status_srcqpn) &
OCRDMA_CQE_SRCQP_MASK;
- ibwc->pkey_index = le32_to_cpu(cqe->ud.rxlen_pkey) &
- OCRDMA_CQE_PKEY_MASK;
+ ibwc->pkey_index = 0;
ibwc->wc_flags = IB_WC_GRH;
ibwc->byte_len = (le32_to_cpu(cqe->ud.rxlen_pkey) >>
OCRDMA_CQE_UD_XFER_LEN_SHIFT);
@@ -2911,12 +2909,9 @@
}
stop_cqe:
cq->getp = cur_getp;
- if (cq->deferred_arm || polled_hw_cqes) {
- ocrdma_ring_cq_db(dev, cq->id, cq->deferred_arm,
- cq->deferred_sol, polled_hw_cqes);
- cq->deferred_arm = false;
- cq->deferred_sol = false;
- }
+
+ if (polled_hw_cqes)
+ ocrdma_ring_cq_db(dev, cq->id, false, false, polled_hw_cqes);
return i;
}
@@ -3000,13 +2995,7 @@
if (cq_flags & IB_CQ_SOLICITED)
sol_needed = true;
- if (cq->first_arm) {
- ocrdma_ring_cq_db(dev, cq_id, arm_needed, sol_needed, 0);
- cq->first_arm = false;
- }
-
- cq->deferred_arm = true;
- cq->deferred_sol = sol_needed;
+ ocrdma_ring_cq_db(dev, cq_id, arm_needed, sol_needed, 0);
spin_unlock_irqrestore(&cq->cq_lock, flags);
return 0;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 5ea0c14..fa9c42f 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -245,8 +245,6 @@
skb_reset_mac_header(skb);
skb_pull(skb, IPOIB_ENCAP_LEN);
- skb->truesize = SKB_TRUESIZE(skb->len);
-
++dev->stats.rx_packets;
dev->stats.rx_bytes += skb->len;
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index 050dfa1..2588931 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -456,7 +456,10 @@
return status;
}
-static void ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
+/*
+ * Caller must hold 'priv->lock'
+ */
+static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
{
struct ipoib_dev_priv *priv = netdev_priv(dev);
struct ib_sa_multicast *multicast;
@@ -466,6 +469,10 @@
ib_sa_comp_mask comp_mask;
int ret = 0;
+ if (!priv->broadcast ||
+ !test_bit(IPOIB_FLAG_OPER_UP, &priv->flags))
+ return -EINVAL;
+
ipoib_dbg_mcast(priv, "joining MGID %pI6\n", mcast->mcmember.mgid.raw);
rec.mgid = mcast->mcmember.mgid;
@@ -525,20 +532,23 @@
rec.join_state = 4;
#endif
}
+ spin_unlock_irq(&priv->lock);
multicast = ib_sa_join_multicast(&ipoib_sa_client, priv->ca, priv->port,
&rec, comp_mask, GFP_KERNEL,
ipoib_mcast_join_complete, mcast);
+ spin_lock_irq(&priv->lock);
if (IS_ERR(multicast)) {
ret = PTR_ERR(multicast);
ipoib_warn(priv, "ib_sa_join_multicast failed, status %d\n", ret);
- spin_lock_irq(&priv->lock);
/* Requeue this join task with a backoff delay */
__ipoib_mcast_schedule_join_thread(priv, mcast, 1);
clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
spin_unlock_irq(&priv->lock);
complete(&mcast->done);
+ spin_lock_irq(&priv->lock);
}
+ return 0;
}
void ipoib_mcast_join_task(struct work_struct *work)
@@ -620,9 +630,10 @@
/* Found the next unjoined group */
init_completion(&mcast->done);
set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
- spin_unlock_irq(&priv->lock);
- ipoib_mcast_join(dev, mcast);
- spin_lock_irq(&priv->lock);
+ if (ipoib_mcast_join(dev, mcast)) {
+ spin_unlock_irq(&priv->lock);
+ return;
+ }
} else if (!delay_until ||
time_before(mcast->delay_until, delay_until))
delay_until = mcast->delay_until;
@@ -641,10 +652,9 @@
if (mcast) {
init_completion(&mcast->done);
set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
+ ipoib_mcast_join(dev, mcast);
}
spin_unlock_irq(&priv->lock);
- if (mcast)
- ipoib_mcast_join(dev, mcast);
}
int ipoib_mcast_start_thread(struct net_device *dev)
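The ipoib rework above hinges on the fact that ib_sa_join_multicast() can sleep, so ipoib_mcast_join() now drops priv->lock around that call and re-takes it afterwards; that is also why the function must revalidate device state on entry, since the world may have changed while the lock was down. A minimal sketch of the pattern, with struct obj and sleeping_call() as hypothetical names:

    /* caller holds o->lock; revalidate whenever the lock has been
     * dropped, because state may have changed in the window */
    static int do_join(struct obj *o)
    {
        int ret;

        if (!o->operational)
            return -EINVAL;

        spin_unlock_irq(&o->lock);
        ret = sleeping_call(o);     /* e.g. ib_sa_join_multicast() */
        spin_lock_irq(&o->lock);

        return ret;
    }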
diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
index 6727954..e8a84d1 100644
--- a/drivers/input/joystick/xpad.c
+++ b/drivers/input/joystick/xpad.c
@@ -1207,7 +1207,6 @@
#else
static int xpad_led_probe(struct usb_xpad *xpad) { return 0; }
static void xpad_led_disconnect(struct usb_xpad *xpad) { }
-static void xpad_identify_controller(struct usb_xpad *xpad) { }
#endif
static int xpad_start_input(struct usb_xpad *xpad)
diff --git a/drivers/input/keyboard/adp5589-keys.c b/drivers/input/keyboard/adp5589-keys.c
index 4d446d5..c01a1d6 100644
--- a/drivers/input/keyboard/adp5589-keys.c
+++ b/drivers/input/keyboard/adp5589-keys.c
@@ -235,7 +235,7 @@
unsigned short gpimapsize;
unsigned extend_cfg;
bool is_adp5585;
- bool adp5585_support_row5;
+ bool support_row5;
#ifdef CONFIG_GPIOLIB
unsigned char gpiomap[ADP5589_MAXGPIO];
bool export_gpio;
@@ -485,7 +485,7 @@
if (kpad->extend_cfg & C4_EXTEND_CFG)
pin_used[kpad->var->c4_extend_cfg] = true;
- if (!kpad->adp5585_support_row5)
+ if (!kpad->support_row5)
pin_used[5] = true;
for (i = 0; i < kpad->var->maxgpio; i++)
@@ -884,12 +884,13 @@
switch (id->driver_data) {
case ADP5585_02:
- kpad->adp5585_support_row5 = true;
+ kpad->support_row5 = true;
case ADP5585_01:
kpad->is_adp5585 = true;
kpad->var = &const_adp5585;
break;
case ADP5589:
+ kpad->support_row5 = true;
kpad->var = &const_adp5589;
break;
}
diff --git a/drivers/input/keyboard/cap11xx.c b/drivers/input/keyboard/cap11xx.c
index 378db10..4401be2 100644
--- a/drivers/input/keyboard/cap11xx.c
+++ b/drivers/input/keyboard/cap11xx.c
@@ -304,8 +304,10 @@
led->cdev.brightness = LED_OFF;
error = of_property_read_u32(child, "reg", &reg);
- if (error != 0 || reg >= num_leds)
+ if (error != 0 || reg >= num_leds) {
+ of_node_put(child);
return -EINVAL;
+ }
led->reg = reg;
led->priv = priv;
@@ -313,8 +315,10 @@
INIT_WORK(&led->work, cap11xx_led_work);
error = devm_led_classdev_register(dev, &led->cdev);
- if (error)
+ if (error) {
+ of_node_put(child);
return error;
+ }
priv->num_leds++;
led++;
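The two new error paths above fix a device-tree node leak: for_each_child_of_node() holds a reference on the current child and only drops it when advancing to the next one, so any early return from the loop body must call of_node_put() itself. A short sketch, with setup_one() as a hypothetical per-child helper:

    struct device_node *child;
    int err;

    for_each_child_of_node(dev->of_node, child) {
        err = setup_one(child);        /* hypothetical */
        if (err) {
            of_node_put(child);        /* the loop won't drop it for us */
            return err;
        }
    }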
diff --git a/drivers/input/misc/Kconfig b/drivers/input/misc/Kconfig
index d6d16fa..1f2337a 100644
--- a/drivers/input/misc/Kconfig
+++ b/drivers/input/misc/Kconfig
@@ -733,7 +733,7 @@
module will be called xen-kbdfront.
config INPUT_SIRFSOC_ONKEY
- bool "CSR SiRFSoC power on/off/suspend key support"
+ tristate "CSR SiRFSoC power on/off/suspend key support"
depends on ARCH_SIRF && OF
default y
help
diff --git a/drivers/input/misc/sirfsoc-onkey.c b/drivers/input/misc/sirfsoc-onkey.c
index 9d5b89b..ed7237f 100644
--- a/drivers/input/misc/sirfsoc-onkey.c
+++ b/drivers/input/misc/sirfsoc-onkey.c
@@ -101,7 +101,7 @@
static const struct of_device_id sirfsoc_pwrc_of_match[] = {
{ .compatible = "sirf,prima2-pwrc" },
{},
-}
+};
MODULE_DEVICE_TABLE(of, sirfsoc_pwrc_of_match);
static int sirfsoc_pwrc_probe(struct platform_device *pdev)
diff --git a/drivers/input/mouse/vmmouse.c b/drivers/input/mouse/vmmouse.c
index e272f06..a3f0f5a 100644
--- a/drivers/input/mouse/vmmouse.c
+++ b/drivers/input/mouse/vmmouse.c
@@ -458,8 +458,6 @@
priv->abs_dev = abs_dev;
psmouse->private = priv;
- input_set_capability(rel_dev, EV_REL, REL_WHEEL);
-
/* Set up and register absolute device */
snprintf(priv->phys, sizeof(priv->phys), "%s/input1",
psmouse->ps2dev.serio->phys);
@@ -475,10 +473,6 @@
abs_dev->id.version = psmouse->model;
abs_dev->dev.parent = &psmouse->ps2dev.serio->dev;
- error = input_register_device(priv->abs_dev);
- if (error)
- goto init_fail;
-
/* Set absolute device capabilities */
input_set_capability(abs_dev, EV_KEY, BTN_LEFT);
input_set_capability(abs_dev, EV_KEY, BTN_RIGHT);
@@ -488,6 +482,13 @@
input_set_abs_params(abs_dev, ABS_X, 0, VMMOUSE_MAX_X, 0, 0);
input_set_abs_params(abs_dev, ABS_Y, 0, VMMOUSE_MAX_Y, 0, 0);
+ error = input_register_device(priv->abs_dev);
+ if (error)
+ goto init_fail;
+
+ /* Add wheel capability to the relative device */
+ input_set_capability(rel_dev, EV_REL, REL_WHEEL);
+
psmouse->protocol_handler = vmmouse_process_byte;
psmouse->disconnect = vmmouse_disconnect;
psmouse->reconnect = vmmouse_reconnect;
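The vmmouse reordering follows the input-core rule that a device must be fully described before input_register_device(): registration makes the device visible to handlers and userspace immediately, so capabilities or abs parameters set afterwards could be missed. The resulting sequence, condensed:

    /* describe first ... */
    input_set_capability(abs_dev, EV_KEY, BTN_LEFT);
    input_set_abs_params(abs_dev, ABS_X, 0, VMMOUSE_MAX_X, 0, 0);
    input_set_abs_params(abs_dev, ABS_Y, 0, VMMOUSE_MAX_Y, 0, 0);

    /* ... register last: the device is live once this returns */
    error = input_register_device(abs_dev);
    if (error)
        goto init_fail;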
diff --git a/drivers/input/serio/serio.c b/drivers/input/serio/serio.c
index 8f82897..1ca7f55 100644
--- a/drivers/input/serio/serio.c
+++ b/drivers/input/serio/serio.c
@@ -134,7 +134,7 @@
int error;
error = device_attach(&serio->dev);
- if (error < 0)
+ if (error < 0 && error != -EPROBE_DEFER)
dev_warn(&serio->dev,
"device_attach() failed for %s (%s), error: %d\n",
serio->phys, serio->name, error);
diff --git a/drivers/input/touchscreen/colibri-vf50-ts.c b/drivers/input/touchscreen/colibri-vf50-ts.c
index 5d4903a..69828d0 100644
--- a/drivers/input/touchscreen/colibri-vf50-ts.c
+++ b/drivers/input/touchscreen/colibri-vf50-ts.c
@@ -21,6 +21,7 @@
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
+#include <linux/of.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
diff --git a/drivers/input/touchscreen/edt-ft5x06.c b/drivers/input/touchscreen/edt-ft5x06.c
index 0b0f8c1..23fbe38 100644
--- a/drivers/input/touchscreen/edt-ft5x06.c
+++ b/drivers/input/touchscreen/edt-ft5x06.c
@@ -822,16 +822,22 @@
int error;
error = device_property_read_u32(dev, "threshold", &val);
- if (!error)
- reg_addr->reg_threshold = val;
+ if (!error) {
+ edt_ft5x06_register_write(tsdata, reg_addr->reg_threshold, val);
+ tsdata->threshold = val;
+ }
error = device_property_read_u32(dev, "gain", &val);
- if (!error)
- reg_addr->reg_gain = val;
+ if (!error) {
+ edt_ft5x06_register_write(tsdata, reg_addr->reg_gain, val);
+ tsdata->gain = val;
+ }
error = device_property_read_u32(dev, "offset", &val);
- if (!error)
- reg_addr->reg_offset = val;
+ if (!error) {
+ edt_ft5x06_register_write(tsdata, reg_addr->reg_offset, val);
+ tsdata->offset = val;
+ }
}
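Note what the hunk above actually fixes: the old code stored the firmware property values into the reg_addr fields, clobbering register *addresses* with data instead of programming the chip. The new code writes each value to its register and caches it in the driver state. The general shape, with chip_write() and the chip fields as hypothetical stand-ins:

    u32 val;

    if (!device_property_read_u32(dev, "threshold", &val)) {
        chip_write(chip, chip->reg_threshold, val);  /* program device */
        chip->threshold = val;                       /* cache the value */
    }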
static void
diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 62a400c..fb092f3 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -1353,7 +1353,7 @@
raw_spin_lock_irqsave(&iommu->register_lock, flags);
- sts = dmar_readq(iommu->reg + DMAR_GSTS_REG);
+ sts = readl(iommu->reg + DMAR_GSTS_REG);
if (!(sts & DMA_GSTS_QIES))
goto end;
diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index 5046483..d9939fa 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -249,12 +249,30 @@
static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
{
struct intel_svm *svm = container_of(mn, struct intel_svm, notifier);
+ struct intel_svm_dev *sdev;
+ /* This might end up being called from exit_mmap(), *before* the page
+ * tables are cleared. And __mmu_notifier_release() will delete us from
+ * the list of notifiers so that our invalidate_range() callback doesn't
+ * get called when the page tables are cleared. So we need to protect
+ * against hardware accessing those page tables.
+ *
+ * We do it by clearing the entry in the PASID table and then flushing
+ * the IOTLB and the PASID table caches. This might upset hardware;
+ * perhaps we'll want to point the PASID to a dummy PGD (like the zero
+ * page) so that we end up taking a fault that the hardware really
+ * *has* to handle gracefully without affecting other processes.
+ */
svm->iommu->pasid_table[svm->pasid].val = 0;
+ wmb();
- /* There's no need to do any flush because we can't get here if there
- * are any devices left anyway. */
- WARN_ON(!list_empty(&svm->devs));
+ rcu_read_lock();
+ list_for_each_entry_rcu(sdev, &svm->devs, list) {
+ intel_flush_pasid_dev(svm, sdev, svm->pasid);
+ intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm);
+ }
+ rcu_read_unlock();
+
}
static const struct mmu_notifier_ops intel_mmuops = {
@@ -379,7 +397,6 @@
goto out;
}
iommu->pasid_table[svm->pasid].val = (u64)__pa(mm->pgd) | 1;
- mm = NULL;
} else
iommu->pasid_table[svm->pasid].val = (u64)__pa(init_mm.pgd) | 1 | (1ULL << 11);
wmb();
@@ -442,11 +459,11 @@
kfree_rcu(sdev, rcu);
if (list_empty(&svm->devs)) {
- mmu_notifier_unregister(&svm->notifier, svm->mm);
idr_remove(&svm->iommu->pasid_idr, svm->pasid);
if (svm->mm)
- mmput(svm->mm);
+ mmu_notifier_unregister(&svm->notifier, svm->mm);
+
/* We mandate that no page faults may be outstanding
* for the PASID when intel_svm_unbind_mm() is called.
* If that is not obeyed, subtle errors will happen.
@@ -507,6 +524,10 @@
struct intel_svm *svm = NULL;
int head, tail, handled = 0;
+ /* Clear PPR bit before reading head/tail registers, to
+ * ensure that we get a new interrupt if needed. */
+ writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG);
+
tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
while (head != tail) {
@@ -551,6 +572,9 @@
* any faults on kernel addresses. */
if (!svm->mm)
goto bad_req;
+ /* If the mm is already defunct, don't handle faults. */
+ if (!atomic_inc_not_zero(&svm->mm->mm_users))
+ goto bad_req;
down_read(&svm->mm->mmap_sem);
vma = find_extend_vma(svm->mm, address);
if (!vma || address < vma->vm_start)
@@ -567,6 +591,7 @@
result = QI_RESP_SUCCESS;
invalid:
up_read(&svm->mm->mmap_sem);
+ mmput(svm->mm);
bad_req:
/* Accounting for major/minor faults? */
rcu_read_lock();
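Two related lifetime fixes above: intel_mm_release() now clears the PASID entry and flushes the per-device caches itself (it can run from exit_mmap(), before the page tables go away), and the page-request path pins the mm before touching it. The pinning idiom, sketched for a stashed mm that may already be exiting:

    if (!atomic_inc_not_zero(&mm->mm_users))
        return -EINVAL;              /* mm is defunct, don't fault on it */

    down_read(&mm->mmap_sem);
    /* ... find_extend_vma() / handle_mm_fault() ... */
    up_read(&mm->mmap_sem);
    mmput(mm);                       /* pairs with the inc above */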
diff --git a/drivers/iommu/intel_irq_remapping.c b/drivers/iommu/intel_irq_remapping.c
index c12ba45..ac59692 100644
--- a/drivers/iommu/intel_irq_remapping.c
+++ b/drivers/iommu/intel_irq_remapping.c
@@ -629,7 +629,7 @@
raw_spin_lock_irqsave(&iommu->register_lock, flags);
- sts = dmar_readq(iommu->reg + DMAR_GSTS_REG);
+ sts = readl(iommu->reg + DMAR_GSTS_REG);
if (!(sts & DMA_GSTS_IRES))
goto end;
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 3447549..43dfd15 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -66,7 +66,10 @@
unsigned long phys_base;
struct its_cmd_block *cmd_base;
struct its_cmd_block *cmd_write;
- void *tables[GITS_BASER_NR_REGS];
+ struct {
+ void *base;
+ u32 order;
+ } tables[GITS_BASER_NR_REGS];
struct its_collection *collections;
struct list_head its_device_list;
u64 flags;
@@ -75,6 +78,9 @@
#define ITS_ITT_ALIGN SZ_256
+/* Convert page order to size in bytes */
+#define PAGE_ORDER_TO_SIZE(o) (PAGE_SIZE << (o))
+
struct event_lpi_map {
unsigned long *lpi_map;
u16 *col_map;
@@ -597,11 +603,6 @@
lpi_set_config(d, true);
}
-static void its_eoi_irq(struct irq_data *d)
-{
- gic_write_eoir(d->hwirq);
-}
-
static int its_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
bool force)
{
@@ -638,7 +639,7 @@
.name = "ITS",
.irq_mask = its_mask_irq,
.irq_unmask = its_unmask_irq,
- .irq_eoi = its_eoi_irq,
+ .irq_eoi = irq_chip_eoi_parent,
.irq_set_affinity = its_set_affinity,
.irq_compose_msi_msg = its_irq_compose_msi_msg,
};
@@ -807,9 +808,10 @@
int i;
for (i = 0; i < GITS_BASER_NR_REGS; i++) {
- if (its->tables[i]) {
- free_page((unsigned long)its->tables[i]);
- its->tables[i] = NULL;
+ if (its->tables[i].base) {
+ free_pages((unsigned long)its->tables[i].base,
+ its->tables[i].order);
+ its->tables[i].base = NULL;
}
}
}
@@ -842,7 +844,6 @@
u64 type = GITS_BASER_TYPE(val);
u64 entry_size = GITS_BASER_ENTRY_SIZE(val);
int order = get_order(psz);
- int alloc_size;
int alloc_pages;
u64 tmp;
void *base;
@@ -874,9 +875,8 @@
}
}
- alloc_size = (1 << order) * PAGE_SIZE;
retry_alloc_baser:
- alloc_pages = (alloc_size / psz);
+ alloc_pages = (PAGE_ORDER_TO_SIZE(order) / psz);
if (alloc_pages > GITS_BASER_PAGES_MAX) {
alloc_pages = GITS_BASER_PAGES_MAX;
order = get_order(GITS_BASER_PAGES_MAX * psz);
@@ -890,7 +890,8 @@
goto out_free;
}
- its->tables[i] = base;
+ its->tables[i].base = base;
+ its->tables[i].order = order;
retry_baser:
val = (virt_to_phys(base) |
@@ -928,7 +929,7 @@
shr = tmp & GITS_BASER_SHAREABILITY_MASK;
if (!shr) {
cache = GITS_BASER_nC;
- __flush_dcache_area(base, alloc_size);
+ __flush_dcache_area(base, PAGE_ORDER_TO_SIZE(order));
}
goto retry_baser;
}
@@ -940,7 +941,7 @@
* something is horribly wrong...
*/
free_pages((unsigned long)base, order);
- its->tables[i] = NULL;
+ its->tables[i].base = NULL;
switch (psz) {
case SZ_16K:
@@ -961,7 +962,7 @@
}
pr_info("ITS: allocated %d %s @%lx (psz %dK, shr %d)\n",
- (int)(alloc_size / entry_size),
+ (int)(PAGE_ORDER_TO_SIZE(order) / entry_size),
its_base_type_string[type],
(unsigned long)virt_to_phys(base),
psz / SZ_1K, (int)shr >> GITS_BASER_SHAREABILITY_SHIFT);
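The tables[] change above is bookkeeping for a real leak: the baser tables can be allocated at order > 0 (and the retry path may even grow the order), yet teardown used free_page(), which returns a single page. Recording the order next to the base pointer lets teardown hand the same order back to free_pages(). The pairing, sketched:

    struct its_table {
        void *base;
        u32 order;                 /* kept solely so freeing matches */
    };

    static int table_alloc(struct its_table *t, int order)
    {
        t->base = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
        if (!t->base)
            return -ENOMEM;
        t->order = order;          /* PAGE_ORDER_TO_SIZE(order) bytes */
        return 0;
    }

    static void table_free(struct its_table *t)
    {
        free_pages((unsigned long)t->base, t->order);
        t->base = NULL;
    }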
diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 911758c..8f9ebf7 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -384,9 +384,6 @@
.irq_unmask = gic_unmask_irq,
.irq_eoi = gic_eoi_irq,
.irq_set_type = gic_set_type,
-#ifdef CONFIG_SMP
- .irq_set_affinity = gic_set_affinity,
-#endif
.irq_get_irqchip_state = gic_irq_get_irqchip_state,
.irq_set_irqchip_state = gic_irq_set_irqchip_state,
.flags = IRQCHIP_SET_TYPE_MASKED |
@@ -400,9 +397,6 @@
.irq_unmask = gic_unmask_irq,
.irq_eoi = gic_eoimode1_eoi_irq,
.irq_set_type = gic_set_type,
-#ifdef CONFIG_SMP
- .irq_set_affinity = gic_set_affinity,
-#endif
.irq_get_irqchip_state = gic_irq_get_irqchip_state,
.irq_set_irqchip_state = gic_irq_set_irqchip_state,
.irq_set_vcpu_affinity = gic_irq_set_vcpu_affinity,
@@ -443,7 +437,7 @@
u32 bypass = 0;
u32 mode = 0;
- if (static_key_true(&supports_deactivate))
+ if (gic == &gic_data[0] && static_key_true(&supports_deactivate))
mode = GIC_CPU_CTRL_EOImodeNS;
/*
@@ -1039,6 +1033,11 @@
gic->chip.name = kasprintf(GFP_KERNEL, "GIC-%d", gic_nr);
}
+#ifdef CONFIG_SMP
+ if (gic_nr == 0)
+ gic->chip.irq_set_affinity = gic_set_affinity;
+#endif
+
#ifdef CONFIG_GIC_NON_BANKED
if (percpu_offset) { /* Franken-GIC without banked registers... */
unsigned int cpu;
diff --git a/drivers/irqchip/irq-sun4i.c b/drivers/irqchip/irq-sun4i.c
index 0704362..376b280 100644
--- a/drivers/irqchip/irq-sun4i.c
+++ b/drivers/irqchip/irq-sun4i.c
@@ -22,7 +22,6 @@
#include <linux/of_irq.h>
#include <asm/exception.h>
-#include <asm/mach/irq.h>
#define SUN4I_IRQ_VECTOR_REG 0x00
#define SUN4I_IRQ_PROTECTION_REG 0x08
diff --git a/drivers/isdn/gigaset/ser-gigaset.c b/drivers/isdn/gigaset/ser-gigaset.c
index 2a506fe..d1f8ab9 100644
--- a/drivers/isdn/gigaset/ser-gigaset.c
+++ b/drivers/isdn/gigaset/ser-gigaset.c
@@ -373,13 +373,7 @@
static void gigaset_device_release(struct device *dev)
{
- struct cardstate *cs = dev_get_drvdata(dev);
-
- if (!cs)
- return;
- dev_set_drvdata(dev, NULL);
- kfree(cs->hw.ser);
- cs->hw.ser = NULL;
+ kfree(container_of(dev, struct ser_cardstate, dev.dev));
}
/*
@@ -408,7 +402,6 @@
cs->hw.ser = NULL;
return rc;
}
- dev_set_drvdata(&cs->hw.ser->dev.dev, cs);
tasklet_init(&cs->write_tasklet,
gigaset_modem_fill, (unsigned long) cs);
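The release rewrite is the canonical pattern for objects that embed a struct device: by the time the release callback runs, the last reference is gone, so deriving the container with container_of() and freeing it is both sufficient and safer than a drvdata lookup that may already have been cleared. In miniature:

    struct my_card {
        /* ... */
        struct device dev;         /* embedded, not a pointer */
    };

    static void my_card_release(struct device *dev)
    {
        /* last reference dropped; free the whole container */
        kfree(container_of(dev, struct my_card, dev));
    }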
diff --git a/drivers/isdn/hardware/mISDN/netjet.c b/drivers/isdn/hardware/mISDN/netjet.c
index 8e29447..afde4ed 100644
--- a/drivers/isdn/hardware/mISDN/netjet.c
+++ b/drivers/isdn/hardware/mISDN/netjet.c
@@ -392,7 +392,7 @@
}
stat = bchannel_get_rxbuf(&bc->bch, cnt);
/* only transparent mode uses the count here; HDLC overrun is detected later */
- if (stat == ENOMEM) {
+ if (stat == -ENOMEM) {
pr_warning("%s.B%d: No memory for %d bytes\n",
card->name, bc->bch.nr, cnt);
return;
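The one-character fix above is worth spelling out: kernel helpers report failure as negative errno values, so bchannel_get_rxbuf() returns -ENOMEM, and a comparison against the positive constant could never be true. The convention, sketched:

    stat = bchannel_get_rxbuf(&bc->bch, cnt);
    if (stat < 0) {                /* -ENOMEM, -EINVAL, ... */
        /* handle the error; testing against ENOMEM (positive)
         * would silently never match */
        return;
    }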
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 33224cb..9f6acd5 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -572,11 +572,13 @@
}
}
- ret = nvm_get_sysblock(dev, &dev->sb);
- if (!ret)
- pr_err("nvm: device not initialized.\n");
- else if (ret < 0)
- pr_err("nvm: err (%d) on device initialization\n", ret);
+ if (dev->identity.cap & NVM_ID_DCAP_BBLKMGMT) {
+ ret = nvm_get_sysblock(dev, &dev->sb);
+ if (!ret)
+ pr_err("nvm: device not initialized.\n");
+ else if (ret < 0)
+ pr_err("nvm: err (%d) on device initialization\n", ret);
+ }
/* register device with a supported media manager */
down_write(&nvm_lock);
@@ -1055,9 +1057,11 @@
strncpy(info.mmtype, init->mmtype, NVM_MMTYPE_LEN);
info.fs_ppa.ppa = -1;
- ret = nvm_init_sysblock(dev, &info);
- if (ret)
- return ret;
+ if (dev->identity.cap & NVM_ID_DCAP_BBLKMGMT) {
+ ret = nvm_init_sysblock(dev, &info);
+ if (ret)
+ return ret;
+ }
memcpy(&dev->sb, &info, sizeof(struct nvm_sb_info));
@@ -1117,7 +1121,10 @@
dev->mt = NULL;
}
- return nvm_dev_factory(dev, fact.flags);
+ if (dev->identity.cap & NVM_ID_DCAP_BBLKMGMT)
+ return nvm_dev_factory(dev, fact.flags);
+
+ return 0;
}
static long nvm_ctl_ioctl(struct file *file, uint cmd, unsigned long arg)
diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index d8c7595..307db1e 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -300,8 +300,10 @@
}
page = mempool_alloc(rrpc->page_pool, GFP_NOIO);
- if (!page)
+ if (!page) {
+ bio_put(bio);
return -ENOMEM;
+ }
while ((slot = find_first_zero_bit(rblk->invalid_pages,
nr_pgs_per_blk)) < nr_pgs_per_blk) {
diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
index ef13ac7..f7b3733 100644
--- a/drivers/lightnvm/rrpc.h
+++ b/drivers/lightnvm/rrpc.h
@@ -174,8 +174,7 @@
static inline int request_intersects(struct rrpc_inflight_rq *r,
sector_t laddr_start, sector_t laddr_end)
{
- return (laddr_end >= r->l_start && laddr_end <= r->l_end) &&
- (laddr_start >= r->l_start && laddr_start <= r->l_end);
+ return (laddr_end >= r->l_start) && (laddr_start <= r->l_end);
}
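The old predicate required both endpoints of the new request to fall inside the in-flight range, so requests that merely straddled an edge escaped the lock. Two closed intervals [s1, e1] and [s2, e2] overlap exactly when each starts no later than the other ends, which is the simplified test. A standalone check of the logic:

    #include <assert.h>
    #include <stdbool.h>

    static bool intersects(long s1, long e1, long s2, long e2)
    {
        return e2 >= s1 && s2 <= e1;
    }

    int main(void)
    {
        assert(intersects(0, 9, 5, 14));   /* straddles the edge: now caught */
        assert(intersects(0, 9, 3, 6));    /* containment */
        assert(!intersects(0, 9, 10, 12)); /* disjoint */
        return 0;
    }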
static int __rrpc_lock_laddr(struct rrpc *rrpc, sector_t laddr,
@@ -184,6 +183,8 @@
sector_t laddr_end = laddr + pages - 1;
struct rrpc_inflight_rq *rtmp;
+ WARN_ON(irqs_disabled());
+
spin_lock_irq(&rrpc->inflights.lock);
list_for_each_entry(rtmp, &rrpc->inflights.reqs, list) {
if (unlikely(request_intersects(rtmp, laddr, laddr_end))) {
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 5df4048..dd83492 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1191,6 +1191,8 @@
if (clone)
free_rq_clone(clone);
+ else if (!tio->md->queue->mq_ops)
+ free_rq_tio(tio);
}
/*
diff --git a/drivers/mfd/db8500-prcmu.c b/drivers/mfd/db8500-prcmu.c
index e6e4bac..12099b0 100644
--- a/drivers/mfd/db8500-prcmu.c
+++ b/drivers/mfd/db8500-prcmu.c
@@ -2048,6 +2048,7 @@
return 0;
}
+EXPORT_SYMBOL_GPL(db8500_prcmu_config_hotmon);
static int config_hot_period(u16 val)
{
@@ -2074,11 +2075,13 @@
return config_hot_period(cycles32k);
}
+EXPORT_SYMBOL_GPL(db8500_prcmu_start_temp_sense);
int db8500_prcmu_stop_temp_sense(void)
{
return config_hot_period(0xFFFF);
}
+EXPORT_SYMBOL_GPL(db8500_prcmu_stop_temp_sense);
static int prcmu_a9wdog(u8 cmd, u8 d0, u8 d1, u8 d2, u8 d3)
{
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index f0ba782..a216b46 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -440,7 +440,7 @@
still useful.
config BMP085
- bool
+ tristate
depends on SYSFS
config BMP085_I2C
@@ -470,7 +470,7 @@
config PCH_PHUB
tristate "Intel EG20T PCH/LAPIS Semicon IOH(ML7213/ML7223/ML7831) PHUB"
select GENERIC_NET_UTILS
- depends on PCI && (X86_32 || COMPILE_TEST)
+ depends on PCI && (X86_32 || MIPS || COMPILE_TEST)
help
This driver is for PCH(Platform controller Hub) PHUB(Packet Hub) of
Intel Topcliff which is an IOH(Input/Output Hub) for x86 embedded
diff --git a/drivers/misc/apds990x.c b/drivers/misc/apds990x.c
index a3e789b..dfb72ec 100644
--- a/drivers/misc/apds990x.c
+++ b/drivers/misc/apds990x.c
@@ -1215,7 +1215,7 @@
#ifdef CONFIG_PM_SLEEP
static int apds990x_suspend(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct apds990x_chip *chip = i2c_get_clientdata(client);
apds990x_chip_off(chip);
@@ -1224,7 +1224,7 @@
static int apds990x_resume(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct apds990x_chip *chip = i2c_get_clientdata(client);
/*
@@ -1240,7 +1240,7 @@
#ifdef CONFIG_PM
static int apds990x_runtime_suspend(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct apds990x_chip *chip = i2c_get_clientdata(client);
apds990x_chip_off(chip);
@@ -1249,7 +1249,7 @@
static int apds990x_runtime_resume(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct apds990x_chip *chip = i2c_get_clientdata(client);
apds990x_chip_on(chip);
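The container_of() conversions in this and the following drivers are pure substitutions: to_i2c_client() and kobj_to_dev() are the stock helpers for exactly these casts. Their shape (paraphrased, not verbatim from the headers):

    /* <linux/i2c.h> */
    #define to_i2c_client(d) container_of(d, struct i2c_client, dev)

    /* <linux/device.h> */
    static inline struct device *kobj_to_dev(struct kobject *kobj)
    {
        return container_of(kobj, struct device, kobj);
    }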
diff --git a/drivers/misc/arm-charlcd.c b/drivers/misc/arm-charlcd.c
index c65b5ea..b3176ee 100644
--- a/drivers/misc/arm-charlcd.c
+++ b/drivers/misc/arm-charlcd.c
@@ -8,7 +8,6 @@
* Author: Linus Walleij <triad@df.lth.se>
*/
#include <linux/init.h>
-#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/of.h>
@@ -328,20 +327,6 @@
return ret;
}
-static int __exit charlcd_remove(struct platform_device *pdev)
-{
- struct charlcd *lcd = platform_get_drvdata(pdev);
-
- if (lcd) {
- free_irq(lcd->irq, lcd);
- iounmap(lcd->virtbase);
- release_mem_region(lcd->phybase, lcd->physize);
- kfree(lcd);
- }
-
- return 0;
-}
-
static int charlcd_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
@@ -376,13 +361,8 @@
.driver = {
.name = DRIVERNAME,
.pm = &charlcd_pm_ops,
+ .suppress_bind_attrs = true,
.of_match_table = of_match_ptr(charlcd_match),
},
- .remove = __exit_p(charlcd_remove),
};
-
-module_platform_driver_probe(charlcd_driver, charlcd_probe);
-
-MODULE_AUTHOR("Linus Walleij <triad@df.lth.se>");
-MODULE_DESCRIPTION("ARM Character LCD Driver");
-MODULE_LICENSE("GPL v2");
+builtin_platform_driver_probe(charlcd_driver, charlcd_probe);
diff --git a/drivers/misc/bh1770glc.c b/drivers/misc/bh1770glc.c
index 753d7ec..845466e 100644
--- a/drivers/misc/bh1770glc.c
+++ b/drivers/misc/bh1770glc.c
@@ -1323,7 +1323,7 @@
#ifdef CONFIG_PM_SLEEP
static int bh1770_suspend(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct bh1770_chip *chip = i2c_get_clientdata(client);
bh1770_chip_off(chip);
@@ -1333,7 +1333,7 @@
static int bh1770_resume(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct bh1770_chip *chip = i2c_get_clientdata(client);
int ret = 0;
@@ -1361,7 +1361,7 @@
#ifdef CONFIG_PM
static int bh1770_runtime_suspend(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct bh1770_chip *chip = i2c_get_clientdata(client);
bh1770_chip_off(chip);
@@ -1371,7 +1371,7 @@
static int bh1770_runtime_resume(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct bh1770_chip *chip = i2c_get_clientdata(client);
bh1770_chip_on(chip);
diff --git a/drivers/misc/c2port/core.c b/drivers/misc/c2port/core.c
index cc8645b..1922cb8 100644
--- a/drivers/misc/c2port/core.c
+++ b/drivers/misc/c2port/core.c
@@ -721,9 +721,7 @@
struct bin_attribute *attr,
char *buffer, loff_t offset, size_t count)
{
- struct c2port_device *c2dev =
- dev_get_drvdata(container_of(kobj,
- struct device, kobj));
+ struct c2port_device *c2dev = dev_get_drvdata(kobj_to_dev(kobj));
ssize_t ret;
/* Check the device and flash access status */
@@ -838,9 +836,7 @@
struct bin_attribute *attr,
char *buffer, loff_t offset, size_t count)
{
- struct c2port_device *c2dev =
- dev_get_drvdata(container_of(kobj,
- struct device, kobj));
+ struct c2port_device *c2dev = dev_get_drvdata(kobj_to_dev(kobj));
int ret;
/* Check the device access status */
diff --git a/drivers/misc/cxl/sysfs.c b/drivers/misc/cxl/sysfs.c
index 02006f71..038af5d 100644
--- a/drivers/misc/cxl/sysfs.c
+++ b/drivers/misc/cxl/sysfs.c
@@ -386,8 +386,7 @@
struct bin_attribute *bin_attr, char *buf,
loff_t off, size_t count)
{
- struct cxl_afu *afu = to_cxl_afu(container_of(kobj,
- struct device, kobj));
+ struct cxl_afu *afu = to_cxl_afu(kobj_to_dev(kobj));
return cxl_afu_read_err_buffer(afu, buf, off, count);
}
@@ -467,7 +466,7 @@
loff_t off, size_t count)
{
struct afu_config_record *cr = to_cr(kobj);
- struct cxl_afu *afu = to_cxl_afu(container_of(kobj->parent, struct device, kobj));
+ struct cxl_afu *afu = to_cxl_afu(kobj_to_dev(kobj->parent));
u64 i, j, val;
diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
index 5d7c090..d105c25 100644
--- a/drivers/misc/eeprom/at24.c
+++ b/drivers/misc/eeprom/at24.c
@@ -289,7 +289,7 @@
{
struct at24_data *at24;
- at24 = dev_get_drvdata(container_of(kobj, struct device, kobj));
+ at24 = dev_get_drvdata(kobj_to_dev(kobj));
return at24_read(at24, buf, off, count);
}
@@ -420,7 +420,7 @@
{
struct at24_data *at24;
- at24 = dev_get_drvdata(container_of(kobj, struct device, kobj));
+ at24 = dev_get_drvdata(kobj_to_dev(kobj));
return at24_write(at24, buf, off, count);
}
diff --git a/drivers/misc/eeprom/at25.c b/drivers/misc/eeprom/at25.c
index f850ef5..3e9e5a28 100644
--- a/drivers/misc/eeprom/at25.c
+++ b/drivers/misc/eeprom/at25.c
@@ -139,7 +139,7 @@
struct device *dev;
struct at25_data *at25;
- dev = container_of(kobj, struct device, kobj);
+ dev = kobj_to_dev(kobj);
at25 = dev_get_drvdata(dev);
return at25_ee_read(at25, buf, off, count);
@@ -273,7 +273,7 @@
struct device *dev;
struct at25_data *at25;
- dev = container_of(kobj, struct device, kobj);
+ dev = kobj_to_dev(kobj);
at25 = dev_get_drvdata(dev);
return at25_ee_write(at25, buf, off, count);
diff --git a/drivers/misc/eeprom/eeprom.c b/drivers/misc/eeprom/eeprom.c
index 7342fd6..3d1d551 100644
--- a/drivers/misc/eeprom/eeprom.c
+++ b/drivers/misc/eeprom/eeprom.c
@@ -84,7 +84,7 @@
struct bin_attribute *bin_attr,
char *buf, loff_t off, size_t count)
{
- struct i2c_client *client = to_i2c_client(container_of(kobj, struct device, kobj));
+ struct i2c_client *client = to_i2c_client(kobj_to_dev(kobj));
struct eeprom_data *data = i2c_get_clientdata(client);
u8 slice;
diff --git a/drivers/misc/eeprom/eeprom_93xx46.c b/drivers/misc/eeprom/eeprom_93xx46.c
index ff63f05..f62ab29 100644
--- a/drivers/misc/eeprom/eeprom_93xx46.c
+++ b/drivers/misc/eeprom/eeprom_93xx46.c
@@ -10,9 +10,13 @@
#include <linux/delay.h>
#include <linux/device.h>
+#include <linux/gpio/consumer.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_gpio.h>
#include <linux/slab.h>
#include <linux/spi/spi.h>
#include <linux/sysfs.h>
@@ -25,6 +29,15 @@
#define ADDR_ERAL 0x20
#define ADDR_EWEN 0x30
+struct eeprom_93xx46_devtype_data {
+ unsigned int quirks;
+};
+
+static const struct eeprom_93xx46_devtype_data atmel_at93c46d_data = {
+ .quirks = EEPROM_93XX46_QUIRK_SINGLE_WORD_READ |
+ EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH,
+};
+
struct eeprom_93xx46_dev {
struct spi_device *spi;
struct eeprom_93xx46_platform_data *pdata;
@@ -33,6 +46,16 @@
int addrlen;
};
+static inline bool has_quirk_single_word_read(struct eeprom_93xx46_dev *edev)
+{
+ return edev->pdata->quirks & EEPROM_93XX46_QUIRK_SINGLE_WORD_READ;
+}
+
+static inline bool has_quirk_instruction_length(struct eeprom_93xx46_dev *edev)
+{
+ return edev->pdata->quirks & EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH;
+}
+
static ssize_t
eeprom_93xx46_bin_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
@@ -40,58 +63,73 @@
{
struct eeprom_93xx46_dev *edev;
struct device *dev;
- struct spi_message m;
- struct spi_transfer t[2];
- int bits, ret;
- u16 cmd_addr;
+ ssize_t ret = 0;
- dev = container_of(kobj, struct device, kobj);
+ dev = kobj_to_dev(kobj);
edev = dev_get_drvdata(dev);
- cmd_addr = OP_READ << edev->addrlen;
-
- if (edev->addrlen == 7) {
- cmd_addr |= off & 0x7f;
- bits = 10;
- } else {
- cmd_addr |= off & 0x3f;
- bits = 9;
- }
-
- dev_dbg(&edev->spi->dev, "read cmd 0x%x, %d Hz\n",
- cmd_addr, edev->spi->max_speed_hz);
-
- spi_message_init(&m);
- memset(t, 0, sizeof(t));
-
- t[0].tx_buf = (char *)&cmd_addr;
- t[0].len = 2;
- t[0].bits_per_word = bits;
- spi_message_add_tail(&t[0], &m);
-
- t[1].rx_buf = buf;
- t[1].len = count;
- t[1].bits_per_word = 8;
- spi_message_add_tail(&t[1], &m);
-
mutex_lock(&edev->lock);
if (edev->pdata->prepare)
edev->pdata->prepare(edev);
- ret = spi_sync(edev->spi, &m);
- /* have to wait at least Tcsl ns */
- ndelay(250);
- if (ret) {
- dev_err(&edev->spi->dev, "read %zu bytes at %d: err. %d\n",
- count, (int)off, ret);
+ while (count) {
+ struct spi_message m;
+ struct spi_transfer t[2] = { { 0 } };
+ u16 cmd_addr = OP_READ << edev->addrlen;
+ size_t nbytes = count;
+ int bits;
+ int err;
+
+ if (edev->addrlen == 7) {
+ cmd_addr |= off & 0x7f;
+ bits = 10;
+ if (has_quirk_single_word_read(edev))
+ nbytes = 1;
+ } else {
+ cmd_addr |= (off >> 1) & 0x3f;
+ bits = 9;
+ if (has_quirk_single_word_read(edev))
+ nbytes = 2;
+ }
+
+ dev_dbg(&edev->spi->dev, "read cmd 0x%x, %d Hz\n",
+ cmd_addr, edev->spi->max_speed_hz);
+
+ spi_message_init(&m);
+
+ t[0].tx_buf = (char *)&cmd_addr;
+ t[0].len = 2;
+ t[0].bits_per_word = bits;
+ spi_message_add_tail(&t[0], &m);
+
+ t[1].rx_buf = buf;
+ t[1].len = count;
+ t[1].bits_per_word = 8;
+ spi_message_add_tail(&t[1], &m);
+
+ err = spi_sync(edev->spi, &m);
+ /* have to wait at least Tcsl ns */
+ ndelay(250);
+
+ if (err) {
+ dev_err(&edev->spi->dev, "read %zu bytes at %d: err. %d\n",
+ nbytes, (int)off, err);
+ ret = err;
+ break;
+ }
+
+ buf += nbytes;
+ off += nbytes;
+ count -= nbytes;
+ ret += nbytes;
}
if (edev->pdata->finish)
edev->pdata->finish(edev);
mutex_unlock(&edev->lock);
- return ret ? : count;
+ return ret;
}
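The read path above becomes a loop so that parts with the single-word-read quirk can be fed one word per SPI message; each pass clamps the transfer, advances the buffer and offset, and accumulates the byte count that is ultimately returned. The skeleton of such a loop, with xfer() and max_xfer as hypothetical stand-ins:

    ssize_t total = 0;

    while (count) {
        size_t nbytes = min(count, max_xfer);   /* quirk clamp */
        int err = xfer(dev, buf, off, nbytes);

        if (err) {
            total = err;           /* surface the error, as above */
            break;
        }
        buf += nbytes;
        off += nbytes;
        count -= nbytes;
        total += nbytes;
    }
    return total;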
static int eeprom_93xx46_ew(struct eeprom_93xx46_dev *edev, int is_on)
@@ -110,7 +148,13 @@
bits = 9;
}
- dev_dbg(&edev->spi->dev, "ew cmd 0x%04x\n", cmd_addr);
+ if (has_quirk_instruction_length(edev)) {
+ cmd_addr <<= 2;
+ bits += 2;
+ }
+
+ dev_dbg(&edev->spi->dev, "ew%s cmd 0x%04x, %d bits\n",
+ is_on ? "en" : "ds", cmd_addr, bits);
spi_message_init(&m);
memset(&t, 0, sizeof(t));
@@ -155,7 +199,7 @@
bits = 10;
data_len = 1;
} else {
- cmd_addr |= off & 0x3f;
+ cmd_addr |= (off >> 1) & 0x3f;
bits = 9;
data_len = 2;
}
@@ -190,7 +234,7 @@
struct device *dev;
int i, ret, step = 1;
- dev = container_of(kobj, struct device, kobj);
+ dev = kobj_to_dev(kobj);
edev = dev_get_drvdata(dev);
/* only write even number of bytes on 16-bit devices */
@@ -245,6 +289,13 @@
bits = 9;
}
+ if (has_quirk_instruction_length(edev)) {
+ cmd_addr <<= 2;
+ bits += 2;
+ }
+
+ dev_dbg(&edev->spi->dev, "eral cmd 0x%04x, %d bits\n", cmd_addr, bits);
+
spi_message_init(&m);
memset(&t, 0, sizeof(t));
@@ -294,12 +345,100 @@
}
static DEVICE_ATTR(erase, S_IWUSR, NULL, eeprom_93xx46_store_erase);
+static void select_assert(void *context)
+{
+ struct eeprom_93xx46_dev *edev = context;
+
+ gpiod_set_value_cansleep(edev->pdata->select, 1);
+}
+
+static void select_deassert(void *context)
+{
+ struct eeprom_93xx46_dev *edev = context;
+
+ gpiod_set_value_cansleep(edev->pdata->select, 0);
+}
+
+static const struct of_device_id eeprom_93xx46_of_table[] = {
+ { .compatible = "eeprom-93xx46", },
+ { .compatible = "atmel,at93c46d", .data = &atmel_at93c46d_data, },
+ {}
+};
+MODULE_DEVICE_TABLE(of, eeprom_93xx46_of_table);
+
+static int eeprom_93xx46_probe_dt(struct spi_device *spi)
+{
+ const struct of_device_id *of_id =
+ of_match_device(eeprom_93xx46_of_table, &spi->dev);
+ struct device_node *np = spi->dev.of_node;
+ struct eeprom_93xx46_platform_data *pd;
+ u32 tmp;
+ int gpio;
+ enum of_gpio_flags of_flags;
+ int ret;
+
+ pd = devm_kzalloc(&spi->dev, sizeof(*pd), GFP_KERNEL);
+ if (!pd)
+ return -ENOMEM;
+
+ ret = of_property_read_u32(np, "data-size", &tmp);
+ if (ret < 0) {
+ dev_err(&spi->dev, "data-size property not found\n");
+ return ret;
+ }
+
+ if (tmp == 8) {
+ pd->flags |= EE_ADDR8;
+ } else if (tmp == 16) {
+ pd->flags |= EE_ADDR16;
+ } else {
+ dev_err(&spi->dev, "invalid data-size (%d)\n", tmp);
+ return -EINVAL;
+ }
+
+ if (of_property_read_bool(np, "read-only"))
+ pd->flags |= EE_READONLY;
+
+ gpio = of_get_named_gpio_flags(np, "select-gpios", 0, &of_flags);
+ if (gpio_is_valid(gpio)) {
+ unsigned long flags =
+ of_flags == OF_GPIO_ACTIVE_LOW ? GPIOF_ACTIVE_LOW : 0;
+
+ ret = devm_gpio_request_one(&spi->dev, gpio, flags,
+ "eeprom_93xx46_select");
+ if (ret)
+ return ret;
+
+ pd->select = gpio_to_desc(gpio);
+ pd->prepare = select_assert;
+ pd->finish = select_deassert;
+
+ gpiod_direction_output(pd->select, 0);
+ }
+
+ if (of_id->data) {
+ const struct eeprom_93xx46_devtype_data *data = of_id->data;
+
+ pd->quirks = data->quirks;
+ }
+
+ spi->dev.platform_data = pd;
+
+ return 0;
+}
+
static int eeprom_93xx46_probe(struct spi_device *spi)
{
struct eeprom_93xx46_platform_data *pd;
struct eeprom_93xx46_dev *edev;
int err;
+ if (spi->dev.of_node) {
+ err = eeprom_93xx46_probe_dt(spi);
+ if (err < 0)
+ return err;
+ }
+
pd = spi->dev.platform_data;
if (!pd) {
dev_err(&spi->dev, "missing platform data\n");
@@ -370,6 +509,7 @@
static struct spi_driver eeprom_93xx46_driver = {
.driver = {
.name = "93xx46",
+ .of_match_table = of_match_ptr(eeprom_93xx46_of_table),
},
.probe = eeprom_93xx46_probe,
.remove = eeprom_93xx46_remove,
diff --git a/drivers/misc/genwqe/card_sysfs.c b/drivers/misc/genwqe/card_sysfs.c
index 6ab31ef..c24c9b7 100644
--- a/drivers/misc/genwqe/card_sysfs.c
+++ b/drivers/misc/genwqe/card_sysfs.c
@@ -278,7 +278,7 @@
struct attribute *attr, int n)
{
unsigned int j;
- struct device *dev = container_of(kobj, struct device, kobj);
+ struct device *dev = kobj_to_dev(kobj);
struct genwqe_dev *cd = dev_get_drvdata(dev);
umode_t mode = attr->mode;
diff --git a/drivers/misc/ibmasm/ibmasm.h b/drivers/misc/ibmasm/ibmasm.h
index 5bd1277..9fea49d 100644
--- a/drivers/misc/ibmasm/ibmasm.h
+++ b/drivers/misc/ibmasm/ibmasm.h
@@ -34,6 +34,7 @@
#include <linux/kref.h>
#include <linux/device.h>
#include <linux/input.h>
+#include <linux/time64.h>
/* Driver identification */
#define DRIVER_NAME "ibmasm"
@@ -53,9 +54,11 @@
static inline char *get_timestamp(char *buf)
{
- struct timeval now;
- do_gettimeofday(&now);
- sprintf(buf, "%lu.%lu", now.tv_sec, now.tv_usec);
+ struct timespec64 now;
+
+ ktime_get_real_ts64(&now);
+ sprintf(buf, "%llu.%.08lu", (long long)now.tv_sec,
+ now.tv_nsec / NSEC_PER_USEC);
return buf;
}
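struct timeval carries a 32-bit seconds field on 32-bit systems, so do_gettimeofday() overflows in 2038; timespec64 with ktime_get_real_ts64() does not, and dividing tv_nsec by NSEC_PER_USEC preserves the old seconds.microseconds format. A userspace analogue (assuming a 64-bit time_t):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec now;

        clock_gettime(CLOCK_REALTIME, &now);
        printf("%lld.%06ld\n", (long long)now.tv_sec,
               now.tv_nsec / 1000L);   /* ns -> us */
        return 0;
    }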
diff --git a/drivers/misc/lis3lv02d/lis3lv02d_i2c.c b/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
index 0c3bb7e..14b7d53 100644
--- a/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
+++ b/drivers/misc/lis3lv02d/lis3lv02d_i2c.c
@@ -209,7 +209,7 @@
#ifdef CONFIG_PM_SLEEP
static int lis3lv02d_i2c_suspend(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct lis3lv02d *lis3 = i2c_get_clientdata(client);
if (!lis3->pdata || !lis3->pdata->wakeup_flags)
@@ -219,7 +219,7 @@
static int lis3lv02d_i2c_resume(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct lis3lv02d *lis3 = i2c_get_clientdata(client);
/*
@@ -238,7 +238,7 @@
#ifdef CONFIG_PM
static int lis3_i2c_runtime_suspend(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct lis3lv02d *lis3 = i2c_get_clientdata(client);
lis3lv02d_poweroff(lis3);
@@ -247,7 +247,7 @@
static int lis3_i2c_runtime_resume(struct device *dev)
{
- struct i2c_client *client = container_of(dev, struct i2c_client, dev);
+ struct i2c_client *client = to_i2c_client(dev);
struct lis3lv02d *lis3 = i2c_get_clientdata(client);
lis3lv02d_poweron(lis3);
diff --git a/drivers/misc/lkdtm.c b/drivers/misc/lkdtm.c
index 11fdadc..5c1351b 100644
--- a/drivers/misc/lkdtm.c
+++ b/drivers/misc/lkdtm.c
@@ -335,7 +335,7 @@
memset((void *)data, 0, 64);
}
-static void execute_location(void *dst)
+static void noinline execute_location(void *dst)
{
void (*func)(void) = dst;
diff --git a/drivers/misc/mei/Kconfig b/drivers/misc/mei/Kconfig
index d23384d..c49e1d2 100644
--- a/drivers/misc/mei/Kconfig
+++ b/drivers/misc/mei/Kconfig
@@ -1,6 +1,6 @@
config INTEL_MEI
tristate "Intel Management Engine Interface"
- depends on X86 && PCI && WATCHDOG_CORE
+ depends on X86 && PCI
help
The Intel Management Engine (Intel ME) provides Manageability,
Security and Media services for systems containing Intel chipsets.
@@ -12,7 +12,7 @@
config INTEL_MEI_ME
tristate "ME Enabled Intel Chipsets"
select INTEL_MEI
- depends on X86 && PCI && WATCHDOG_CORE
+ depends on X86 && PCI
help
MEI support for ME Enabled Intel chipsets.
@@ -37,7 +37,7 @@
config INTEL_MEI_TXE
tristate "Intel Trusted Execution Environment with ME Interface"
select INTEL_MEI
- depends on X86 && PCI && WATCHDOG_CORE
+ depends on X86 && PCI
help
MEI Support for Trusted Execution Environment device on Intel SoCs
diff --git a/drivers/misc/mei/Makefile b/drivers/misc/mei/Makefile
index 01447ca..59e6b0a 100644
--- a/drivers/misc/mei/Makefile
+++ b/drivers/misc/mei/Makefile
@@ -9,7 +9,6 @@
mei-objs += client.o
mei-objs += main.o
mei-objs += amthif.o
-mei-objs += wd.o
mei-objs += bus.o
mei-objs += bus-fixup.o
mei-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/misc/mei/amthif.c b/drivers/misc/mei/amthif.c
index cd0403f..194360a 100644
--- a/drivers/misc/mei/amthif.c
+++ b/drivers/misc/mei/amthif.c
@@ -50,7 +50,6 @@
dev->iamthif_current_cb = NULL;
dev->iamthif_canceled = false;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
- dev->iamthif_timer = 0;
dev->iamthif_stall_timer = 0;
dev->iamthif_open_count = 0;
}
@@ -68,11 +67,14 @@
struct mei_cl *cl = &dev->iamthif_cl;
int ret;
+ if (mei_cl_is_connected(cl))
+ return 0;
+
dev->iamthif_state = MEI_IAMTHIF_IDLE;
mei_cl_init(cl, dev);
- ret = mei_cl_link(cl, MEI_IAMTHIF_HOST_CLIENT_ID);
+ ret = mei_cl_link(cl);
if (ret < 0) {
dev_err(dev->dev, "amthif: failed cl_link %d\n", ret);
return ret;
@@ -80,32 +82,10 @@
ret = mei_cl_connect(cl, me_cl, NULL);
- dev->iamthif_state = MEI_IAMTHIF_IDLE;
-
return ret;
}
/**
- * mei_amthif_find_read_list_entry - finds a amthilist entry for current file
- *
- * @dev: the device structure
- * @file: pointer to file object
- *
- * Return: returned a list entry on success, NULL on failure.
- */
-struct mei_cl_cb *mei_amthif_find_read_list_entry(struct mei_device *dev,
- struct file *file)
-{
- struct mei_cl_cb *cb;
-
- list_for_each_entry(cb, &dev->amthif_rd_complete_list.list, list)
- if (cb->file_object == file)
- return cb;
- return NULL;
-}
-
-
-/**
* mei_amthif_read - read data from AMTHIF client
*
* @dev: the device structure
@@ -126,18 +106,11 @@
{
struct mei_cl *cl = file->private_data;
struct mei_cl_cb *cb;
- unsigned long timeout;
int rets;
int wait_ret;
- /* Only possible if we are in timeout */
- if (!cl) {
- dev_err(dev->dev, "bad file ext.\n");
- return -ETIME;
- }
-
dev_dbg(dev->dev, "checking amthif data\n");
- cb = mei_amthif_find_read_list_entry(dev, file);
+ cb = mei_cl_read_cb(cl, file);
/* Check if we can block or not */
if (cb == NULL && file->f_flags & O_NONBLOCK)
@@ -149,8 +122,9 @@
/* unlock the Mutex */
mutex_unlock(&dev->device_lock);
- wait_ret = wait_event_interruptible(dev->iamthif_cl.wait,
- (cb = mei_amthif_find_read_list_entry(dev, file)));
+ wait_ret = wait_event_interruptible(cl->rx_wait,
+ !list_empty(&cl->rd_completed) ||
+ !mei_cl_is_connected(cl));
/* Locking again the Mutex */
mutex_lock(&dev->device_lock);
@@ -158,7 +132,12 @@
if (wait_ret)
return -ERESTARTSYS;
- dev_dbg(dev->dev, "woke up from sleep\n");
+ if (!mei_cl_is_connected(cl)) {
+ rets = -EBUSY;
+ goto out;
+ }
+
+ cb = mei_cl_read_cb(cl, file);
}
if (cb->status) {
@@ -168,24 +147,10 @@
}
dev_dbg(dev->dev, "Got amthif data\n");
- dev->iamthif_timer = 0;
-
- timeout = cb->read_time +
- mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
- dev_dbg(dev->dev, "amthif timeout = %lud\n",
- timeout);
-
- if (time_after(jiffies, timeout)) {
- dev_dbg(dev->dev, "amthif Time out\n");
- /* 15 sec for the message has expired */
- list_del_init(&cb->list);
- rets = -ETIME;
- goto free;
- }
/* if the whole message will fit remove it from the list */
if (cb->buf_idx >= *offset && length >= (cb->buf_idx - *offset))
list_del_init(&cb->list);
- else if (cb->buf_idx > 0 && cb->buf_idx <= *offset) {
+ else if (cb->buf_idx <= *offset) {
/* end of the message has been reached */
list_del_init(&cb->list);
rets = 0;
@@ -195,9 +160,8 @@
* remove message from deletion list
*/
- dev_dbg(dev->dev, "amthif cb->buf size - %d\n",
- cb->buf.size);
- dev_dbg(dev->dev, "amthif cb->buf_idx - %lu\n", cb->buf_idx);
+ dev_dbg(dev->dev, "amthif cb->buf.size - %zu cb->buf_idx - %zu\n",
+ cb->buf.size, cb->buf_idx);
/* length is being truncated to PAGE_SIZE, however,
* the buf_idx may point beyond */
@@ -229,7 +193,7 @@
*
* Return: 0 on success, <0 on failure.
*/
-static int mei_amthif_read_start(struct mei_cl *cl, struct file *file)
+static int mei_amthif_read_start(struct mei_cl *cl, const struct file *file)
{
struct mei_device *dev = cl->dev;
struct mei_cl_cb *cb;
@@ -248,7 +212,7 @@
list_add_tail(&cb->list, &dev->ctrl_wr_list.list);
dev->iamthif_state = MEI_IAMTHIF_READING;
- dev->iamthif_file_object = cb->file_object;
+ dev->iamthif_fp = cb->fp;
dev->iamthif_current_cb = cb;
return 0;
@@ -277,7 +241,7 @@
dev->iamthif_state = MEI_IAMTHIF_WRITING;
dev->iamthif_current_cb = cb;
- dev->iamthif_file_object = cb->file_object;
+ dev->iamthif_fp = cb->fp;
dev->iamthif_canceled = false;
ret = mei_cl_write(cl, cb, false);
@@ -285,7 +249,7 @@
return ret;
if (cb->completed)
- cb->status = mei_amthif_read_start(cl, cb->file_object);
+ cb->status = mei_amthif_read_start(cl, cb->fp);
return 0;
}
@@ -304,8 +268,7 @@
dev->iamthif_canceled = false;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
- dev->iamthif_timer = 0;
- dev->iamthif_file_object = NULL;
+ dev->iamthif_fp = NULL;
dev_dbg(dev->dev, "complete amthif cmd_list cb.\n");
@@ -329,17 +292,17 @@
int mei_amthif_write(struct mei_cl *cl, struct mei_cl_cb *cb)
{
- struct mei_device *dev;
-
- if (WARN_ON(!cl || !cl->dev))
- return -ENODEV;
-
- if (WARN_ON(!cb))
- return -EINVAL;
-
- dev = cl->dev;
+ struct mei_device *dev = cl->dev;
list_add_tail(&cb->list, &dev->amthif_cmd_list.list);
+
+ /*
+ * The previous request is still in processing, queue this one.
+ */
+ if (dev->iamthif_state > MEI_IAMTHIF_IDLE &&
+ dev->iamthif_state < MEI_IAMTHIF_READ_COMPLETE)
+ return 0;
+
return mei_amthif_run_next_cmd(dev);
}
@@ -360,10 +323,10 @@
{
unsigned int mask = 0;
- poll_wait(file, &dev->iamthif_cl.wait, wait);
+ poll_wait(file, &dev->iamthif_cl.rx_wait, wait);
if (dev->iamthif_state == MEI_IAMTHIF_READ_COMPLETE &&
- dev->iamthif_file_object == file) {
+ dev->iamthif_fp == file) {
mask |= POLLIN | POLLRDNORM;
mei_amthif_run_next_cmd(dev);
@@ -393,7 +356,7 @@
return ret;
if (cb->completed)
- cb->status = mei_amthif_read_start(cl, cb->file_object);
+ cb->status = mei_amthif_read_start(cl, cb->fp);
return 0;
}
@@ -437,11 +400,12 @@
/**
* mei_amthif_complete - complete amthif callback.
*
- * @dev: the device structure.
+ * @cl: host client
* @cb: callback block.
*/
-void mei_amthif_complete(struct mei_device *dev, struct mei_cl_cb *cb)
+void mei_amthif_complete(struct mei_cl *cl, struct mei_cl_cb *cb)
{
+ struct mei_device *dev = cl->dev;
if (cb->fop_type == MEI_FOP_WRITE) {
if (!cb->status) {
@@ -453,25 +417,22 @@
* in case of error enqueue the write cb to complete read list
* so it can be propagated to the reader
*/
- list_add_tail(&cb->list, &dev->amthif_rd_complete_list.list);
- wake_up_interruptible(&dev->iamthif_cl.wait);
+ list_add_tail(&cb->list, &cl->rd_completed);
+ wake_up_interruptible(&cl->rx_wait);
return;
}
if (!dev->iamthif_canceled) {
dev->iamthif_state = MEI_IAMTHIF_READ_COMPLETE;
dev->iamthif_stall_timer = 0;
- list_add_tail(&cb->list, &dev->amthif_rd_complete_list.list);
+ list_add_tail(&cb->list, &cl->rd_completed);
dev_dbg(dev->dev, "amthif read completed\n");
- dev->iamthif_timer = jiffies;
- dev_dbg(dev->dev, "dev->iamthif_timer = %ld\n",
- dev->iamthif_timer);
} else {
mei_amthif_run_next_cmd(dev);
}
dev_dbg(dev->dev, "completing amthif call back.\n");
- wake_up_interruptible(&dev->iamthif_cl.wait);
+ wake_up_interruptible(&cl->rx_wait);
}
/**
@@ -497,7 +458,7 @@
/* list all list member */
list_for_each_entry_safe(cb, next, mei_cb_list, list) {
/* check if list member associated with a file */
- if (file == cb->file_object) {
+ if (file == cb->fp) {
/* check if cb equal to current iamthif cb */
if (dev->iamthif_current_cb == cb) {
dev->iamthif_current_cb = NULL;
@@ -523,13 +484,14 @@
*
* Return: true if callback removed from the list, false otherwise
*/
-static bool mei_clear_lists(struct mei_device *dev, struct file *file)
+static bool mei_clear_lists(struct mei_device *dev, const struct file *file)
{
bool removed = false;
+ struct mei_cl *cl = &dev->iamthif_cl;
/* remove callbacks associated with a file */
mei_clear_list(dev, file, &dev->amthif_cmd_list.list);
- if (mei_clear_list(dev, file, &dev->amthif_rd_complete_list.list))
+ if (mei_clear_list(dev, file, &cl->rd_completed))
removed = true;
mei_clear_list(dev, file, &dev->ctrl_rd_list.list);
@@ -546,7 +508,7 @@
/* check if iamthif_current_cb not NULL */
if (dev->iamthif_current_cb && !removed) {
/* check file and iamthif current cb association */
- if (dev->iamthif_current_cb->file_object == file) {
+ if (dev->iamthif_current_cb->fp == file) {
/* remove cb */
mei_io_cb_free(dev->iamthif_current_cb);
dev->iamthif_current_cb = NULL;
@@ -569,7 +531,7 @@
if (dev->iamthif_open_count > 0)
dev->iamthif_open_count--;
- if (dev->iamthif_file_object == file &&
+ if (dev->iamthif_fp == file &&
dev->iamthif_state != MEI_IAMTHIF_IDLE) {
dev_dbg(dev->dev, "amthif canceled iamthif state %d\n",
diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
index 020de59..e9e6ea3 100644
--- a/drivers/misc/mei/bus-fixup.c
+++ b/drivers/misc/mei/bus-fixup.c
@@ -35,6 +35,9 @@
#define MEI_UUID_NFC_HCI UUID_LE(0x0bb17a78, 0x2a8e, 0x4c50, \
0x94, 0xd4, 0x50, 0x26, 0x67, 0x23, 0x77, 0x5c)
+#define MEI_UUID_WD UUID_LE(0x05B79A6F, 0x4628, 0x4D7F, \
+ 0x89, 0x9D, 0xA9, 0x15, 0x14, 0xCB, 0x32, 0xAB)
+
#define MEI_UUID_ANY NULL_UUID_LE
/**
@@ -48,8 +51,7 @@
*/
static void number_of_connections(struct mei_cl_device *cldev)
{
- dev_dbg(&cldev->dev, "running hook %s on %pUl\n",
- __func__, mei_me_cl_uuid(cldev->me_cl));
+ dev_dbg(&cldev->dev, "running hook %s\n", __func__);
if (cldev->me_cl->props.max_number_of_connections > 1)
cldev->do_match = 0;
@@ -62,11 +64,36 @@
*/
static void blacklist(struct mei_cl_device *cldev)
{
- dev_dbg(&cldev->dev, "running hook %s on %pUl\n",
- __func__, mei_me_cl_uuid(cldev->me_cl));
+ dev_dbg(&cldev->dev, "running hook %s\n", __func__);
+
cldev->do_match = 0;
}
+/**
+ * mei_wd - wd client on the bus, change protocol version
+ * as the API has changed.
+ *
+ * @cldev: me clients device
+ */
+#if IS_ENABLED(CONFIG_INTEL_MEI_ME)
+#include <linux/pci.h>
+#include "hw-me-regs.h"
+static void mei_wd(struct mei_cl_device *cldev)
+{
+ struct pci_dev *pdev = to_pci_dev(cldev->dev.parent);
+
+ dev_dbg(&cldev->dev, "running hook %s\n", __func__);
+ if (pdev->device == MEI_DEV_ID_WPT_LP ||
+ pdev->device == MEI_DEV_ID_SPT ||
+ pdev->device == MEI_DEV_ID_SPT_H)
+ cldev->me_cl->props.protocol_version = 0x2;
+
+ cldev->do_match = 1;
+}
+#else
+static inline void mei_wd(struct mei_cl_device *cldev) {}
+#endif /* CONFIG_INTEL_MEI_ME */
+
struct mei_nfc_cmd {
u8 command;
u8 status;
@@ -208,12 +235,11 @@
bus = cldev->bus;
- dev_dbg(bus->dev, "running hook %s: %pUl match=%d\n",
- __func__, mei_me_cl_uuid(cldev->me_cl), cldev->do_match);
+ dev_dbg(&cldev->dev, "running hook %s\n", __func__);
mutex_lock(&bus->device_lock);
/* we need to connect to INFO GUID */
- cl = mei_cl_alloc_linked(bus, MEI_HOST_CLIENT_ID_ANY);
+ cl = mei_cl_alloc_linked(bus);
if (IS_ERR(cl)) {
ret = PTR_ERR(cl);
cl = NULL;
@@ -282,6 +308,7 @@
MEI_FIXUP(MEI_UUID_ANY, number_of_connections),
MEI_FIXUP(MEI_UUID_NFC_INFO, blacklist),
MEI_FIXUP(MEI_UUID_NFC_HCI, mei_nfc),
+ MEI_FIXUP(MEI_UUID_WD, mei_wd),
};
/**
diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
index 0b05aa9..5d5996e 100644
--- a/drivers/misc/mei/bus.c
+++ b/drivers/misc/mei/bus.c
@@ -44,7 +44,7 @@
bool blocking)
{
struct mei_device *bus;
- struct mei_cl_cb *cb = NULL;
+ struct mei_cl_cb *cb;
ssize_t rets;
if (WARN_ON(!cl || !cl->dev))
@@ -53,6 +53,11 @@
bus = cl->dev;
mutex_lock(&bus->device_lock);
+ if (bus->dev_state != MEI_DEV_ENABLED) {
+ rets = -ENODEV;
+ goto out;
+ }
+
if (!mei_cl_is_connected(cl)) {
rets = -ENODEV;
goto out;
@@ -81,8 +86,6 @@
out:
mutex_unlock(&bus->device_lock);
- if (rets < 0)
- mei_io_cb_free(cb);
return rets;
}
@@ -109,6 +112,10 @@
bus = cl->dev;
mutex_lock(&bus->device_lock);
+ if (bus->dev_state != MEI_DEV_ENABLED) {
+ rets = -ENODEV;
+ goto out;
+ }
cb = mei_cl_read_cb(cl, NULL);
if (cb)
@@ -230,45 +237,55 @@
* mei_cl_bus_notify_event - schedule notify cb on bus client
*
* @cl: host client
+ *
+ * Return: true if event was scheduled
+ * false if the client is not waiting for event
*/
-void mei_cl_bus_notify_event(struct mei_cl *cl)
+bool mei_cl_bus_notify_event(struct mei_cl *cl)
{
struct mei_cl_device *cldev = cl->cldev;
if (!cldev || !cldev->event_cb)
- return;
+ return false;
if (!(cldev->events_mask & BIT(MEI_CL_EVENT_NOTIF)))
- return;
+ return false;
if (!cl->notify_ev)
- return;
+ return false;
set_bit(MEI_CL_EVENT_NOTIF, &cldev->events);
schedule_work(&cldev->event_work);
cl->notify_ev = false;
+
+ return true;
}
/**
- * mei_cl_bus_rx_event - schedule rx evenet
+ * mei_cl_bus_rx_event - schedule rx event
*
* @cl: host client
+ *
+ * Return: true if event was scheduled
+ * false if the client is not waiting for event
*/
-void mei_cl_bus_rx_event(struct mei_cl *cl)
+bool mei_cl_bus_rx_event(struct mei_cl *cl)
{
struct mei_cl_device *cldev = cl->cldev;
if (!cldev || !cldev->event_cb)
- return;
+ return false;
if (!(cldev->events_mask & BIT(MEI_CL_EVENT_RX)))
- return;
+ return false;
set_bit(MEI_CL_EVENT_RX, &cldev->events);
schedule_work(&cldev->event_work);
+
+ return true;
}
/**
@@ -398,7 +415,7 @@
if (!cl) {
mutex_lock(&bus->device_lock);
- cl = mei_cl_alloc_linked(bus, MEI_HOST_CLIENT_ID_ANY);
+ cl = mei_cl_alloc_linked(bus);
mutex_unlock(&bus->device_lock);
if (IS_ERR(cl))
return PTR_ERR(cl);
@@ -958,6 +975,22 @@
dev_dbg(bus->dev, "rescan end");
}
+void mei_cl_bus_rescan_work(struct work_struct *work)
+{
+ struct mei_device *bus =
+ container_of(work, struct mei_device, bus_rescan_work);
+ struct mei_me_client *me_cl;
+
+ mutex_lock(&bus->device_lock);
+ me_cl = mei_me_cl_by_uuid(bus, &mei_amthif_guid);
+ if (me_cl)
+ mei_amthif_host_init(bus, me_cl);
+ mei_me_cl_put(me_cl);
+ mutex_unlock(&bus->device_lock);
+
+ mei_cl_bus_rescan(bus);
+}
+
int __mei_cldev_driver_register(struct mei_cl_driver *cldrv,
struct module *owner)
{
diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
index a6c87c7..bab17e4 100644
--- a/drivers/misc/mei/client.c
+++ b/drivers/misc/mei/client.c
@@ -359,7 +359,7 @@
* Return: mei_cl_cb pointer or NULL;
*/
struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, enum mei_cb_file_ops type,
- struct file *fp)
+ const struct file *fp)
{
struct mei_cl_cb *cb;
@@ -368,7 +368,7 @@
return NULL;
INIT_LIST_HEAD(&cb->list);
- cb->file_object = fp;
+ cb->fp = fp;
cb->cl = cl;
cb->buf_idx = 0;
cb->fop_type = type;
@@ -455,7 +455,8 @@
* Return: cb on success and NULL on failure
*/
struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length,
- enum mei_cb_file_ops type, struct file *fp)
+ enum mei_cb_file_ops type,
+ const struct file *fp)
{
struct mei_cl_cb *cb;
@@ -485,7 +486,7 @@
struct mei_cl_cb *cb;
list_for_each_entry(cb, &cl->rd_completed, list)
- if (!fp || fp == cb->file_object)
+ if (!fp || fp == cb->fp)
return cb;
return NULL;
@@ -503,12 +504,12 @@
struct mei_cl_cb *cb, *next;
list_for_each_entry_safe(cb, next, &cl->rd_completed, list)
- if (!fp || fp == cb->file_object)
+ if (!fp || fp == cb->fp)
mei_io_cb_free(cb);
list_for_each_entry_safe(cb, next, &cl->rd_pending, list)
- if (!fp || fp == cb->file_object)
+ if (!fp || fp == cb->fp)
mei_io_cb_free(cb);
}
@@ -535,7 +536,6 @@
mei_io_list_flush(&cl->dev->ctrl_wr_list, cl);
mei_io_list_flush(&cl->dev->ctrl_rd_list, cl);
mei_io_list_flush(&cl->dev->amthif_cmd_list, cl);
- mei_io_list_flush(&cl->dev->amthif_rd_complete_list, cl);
mei_cl_read_cb_flush(cl, fp);
@@ -587,27 +587,23 @@
* mei_cl_link - allocate host id in the host map
*
* @cl: host client
- * @id: fixed host id or MEI_HOST_CLIENT_ID_ANY (-1) for generic one
*
* Return: 0 on success
* -EINVAL on incorrect values
* -EMFILE if open count exceeded.
*/
-int mei_cl_link(struct mei_cl *cl, int id)
+int mei_cl_link(struct mei_cl *cl)
{
struct mei_device *dev;
long open_handle_count;
+ int id;
if (WARN_ON(!cl || !cl->dev))
return -EINVAL;
dev = cl->dev;
- /* If Id is not assigned get one*/
- if (id == MEI_HOST_CLIENT_ID_ANY)
- id = find_first_zero_bit(dev->host_clients_map,
- MEI_CLIENTS_MAX);
-
+ id = find_first_zero_bit(dev->host_clients_map, MEI_CLIENTS_MAX);
if (id >= MEI_CLIENTS_MAX) {
dev_err(dev->dev, "id exceeded %d", MEI_CLIENTS_MAX);
return -EMFILE;
@@ -648,7 +644,7 @@
if (!cl)
return 0;
- /* wd and amthif might not be initialized */
+ /* amthif might not be initialized */
if (!cl->dev)
return 0;
@@ -670,31 +666,12 @@
return 0;
}
-
-void mei_host_client_init(struct work_struct *work)
+void mei_host_client_init(struct mei_device *dev)
{
- struct mei_device *dev =
- container_of(work, struct mei_device, init_work);
- struct mei_me_client *me_cl;
-
- mutex_lock(&dev->device_lock);
-
-
- me_cl = mei_me_cl_by_uuid(dev, &mei_amthif_guid);
- if (me_cl)
- mei_amthif_host_init(dev, me_cl);
- mei_me_cl_put(me_cl);
-
- me_cl = mei_me_cl_by_uuid(dev, &mei_wd_guid);
- if (me_cl)
- mei_wd_host_init(dev, me_cl);
- mei_me_cl_put(me_cl);
-
dev->dev_state = MEI_DEV_ENABLED;
dev->reset_count = 0;
- mutex_unlock(&dev->device_lock);
- mei_cl_bus_rescan(dev);
+ schedule_work(&dev->bus_rescan_work);
pm_runtime_mark_last_busy(dev->dev);
dev_dbg(dev->dev, "rpm: autosuspend\n");
@@ -726,6 +703,33 @@
}
/**
+ * mei_cl_wake_all - wake up readers, writers and event waiters so
+ * they can be interrupted
+ *
+ * @cl: host client
+ */
+static void mei_cl_wake_all(struct mei_cl *cl)
+{
+ struct mei_device *dev = cl->dev;
+
+ /* synchronized under device mutex */
+ if (waitqueue_active(&cl->rx_wait)) {
+ cl_dbg(dev, cl, "Waking up reading client!\n");
+ wake_up_interruptible(&cl->rx_wait);
+ }
+ /* synchronized under device mutex */
+ if (waitqueue_active(&cl->tx_wait)) {
+ cl_dbg(dev, cl, "Waking up writing client!\n");
+ wake_up_interruptible(&cl->tx_wait);
+ }
+ /* synchronized under device mutex */
+ if (waitqueue_active(&cl->ev_wait)) {
+ cl_dbg(dev, cl, "Waking up waiting for event clients!\n");
+ wake_up_interruptible(&cl->ev_wait);
+ }
+}
+
+/**
* mei_cl_set_disconnected - set disconnected state and clear
* associated states and resources
*
@@ -740,8 +744,11 @@
return;
cl->state = MEI_FILE_DISCONNECTED;
+ mei_io_list_free(&dev->write_list, cl);
+ mei_io_list_free(&dev->write_waiting_list, cl);
mei_io_list_flush(&dev->ctrl_rd_list, cl);
mei_io_list_flush(&dev->ctrl_wr_list, cl);
+ mei_cl_wake_all(cl);
cl->mei_flow_ctrl_creds = 0;
cl->timer_count = 0;
@@ -1034,7 +1041,7 @@
* Return: 0 on success, <0 on failure.
*/
int mei_cl_connect(struct mei_cl *cl, struct mei_me_client *me_cl,
- struct file *file)
+ const struct file *file)
{
struct mei_device *dev;
struct mei_cl_cb *cb;
@@ -1119,11 +1126,10 @@
* mei_cl_alloc_linked - allocate and link host client
*
* @dev: the device structure
- * @id: fixed host id or MEI_HOST_CLIENT_ID_ANY (-1) for generic one
*
* Return: cl on success ERR_PTR on failure
*/
-struct mei_cl *mei_cl_alloc_linked(struct mei_device *dev, int id)
+struct mei_cl *mei_cl_alloc_linked(struct mei_device *dev)
{
struct mei_cl *cl;
int ret;
@@ -1134,7 +1140,7 @@
goto err;
}
- ret = mei_cl_link(cl, id);
+ ret = mei_cl_link(cl);
if (ret)
goto err;
@@ -1149,11 +1155,12 @@
/**
* mei_cl_flow_ctrl_creds - checks flow_control credits for cl.
*
- * @cl: private data of the file object
+ * @cl: host client
+ * @fp: the file pointer associated with the host client
*
* Return: 1 if mei_flow_ctrl_creds >0, 0 - otherwise.
*/
-int mei_cl_flow_ctrl_creds(struct mei_cl *cl)
+static int mei_cl_flow_ctrl_creds(struct mei_cl *cl, const struct file *fp)
{
int rets;
@@ -1164,7 +1171,7 @@
return 1;
if (mei_cl_is_fixed_address(cl)) {
- rets = mei_cl_read_start(cl, mei_cl_mtu(cl), NULL);
+ rets = mei_cl_read_start(cl, mei_cl_mtu(cl), fp);
if (rets && rets != -EBUSY)
return rets;
return 1;
@@ -1186,7 +1193,7 @@
* 0 on success
* -EINVAL when ctrl credits are <= 0
*/
-int mei_cl_flow_ctrl_reduce(struct mei_cl *cl)
+static int mei_cl_flow_ctrl_reduce(struct mei_cl *cl)
{
if (WARN_ON(!cl || !cl->me_cl))
return -EINVAL;
@@ -1283,7 +1290,8 @@
*
* Return: 0 on success and error otherwise.
*/
-int mei_cl_notify_request(struct mei_cl *cl, struct file *file, u8 request)
+int mei_cl_notify_request(struct mei_cl *cl,
+ const struct file *file, u8 request)
{
struct mei_device *dev;
struct mei_cl_cb *cb;
@@ -1368,12 +1376,12 @@
cl_dbg(dev, cl, "notify event");
cl->notify_ev = true;
- wake_up_interruptible_all(&cl->ev_wait);
+ if (!mei_cl_bus_notify_event(cl))
+ wake_up_interruptible(&cl->ev_wait);
if (cl->ev_async)
kill_fasync(&cl->ev_async, SIGIO, POLL_PRI);
- mei_cl_bus_notify_event(cl);
}
/**
@@ -1422,6 +1430,25 @@
}
/**
+ * mei_cl_is_read_fc_cb - check if read cb is waiting for flow control
+ * for given host client
+ *
+ * @cl: host client
+ *
+ * Return: true, if found at least one cb.
+ */
+static bool mei_cl_is_read_fc_cb(struct mei_cl *cl)
+{
+ struct mei_device *dev = cl->dev;
+ struct mei_cl_cb *cb;
+
+ list_for_each_entry(cb, &dev->ctrl_wr_list.list, list)
+ if (cb->fop_type == MEI_FOP_READ && cb->cl == cl)
+ return true;
+ return false;
+}
+
+/**
* mei_cl_read_start - the start read client message function.
*
* @cl: host client
@@ -1430,7 +1457,7 @@
*
* Return: 0 on success, <0 on failure.
*/
-int mei_cl_read_start(struct mei_cl *cl, size_t length, struct file *fp)
+int mei_cl_read_start(struct mei_cl *cl, size_t length, const struct file *fp)
{
struct mei_device *dev;
struct mei_cl_cb *cb;
@@ -1445,7 +1472,7 @@
return -ENODEV;
/* HW currently supports only one pending read */
- if (!list_empty(&cl->rd_pending))
+ if (!list_empty(&cl->rd_pending) || mei_cl_is_read_fc_cb(cl))
return -EBUSY;
if (!mei_me_cl_is_active(cl->me_cl)) {
@@ -1524,7 +1551,7 @@
first_chunk = cb->buf_idx == 0;
- rets = first_chunk ? mei_cl_flow_ctrl_creds(cl) : 1;
+ rets = first_chunk ? mei_cl_flow_ctrl_creds(cl, cb->fp) : 1;
if (rets < 0)
return rets;
@@ -1556,7 +1583,7 @@
return 0;
}
- cl_dbg(dev, cl, "buf: size = %d idx = %lu\n",
+ cl_dbg(dev, cl, "buf: size = %zu idx = %zu\n",
cb->buf.size, cb->buf_idx);
rets = mei_write_message(dev, &mei_hdr, buf->data + cb->buf_idx);
@@ -1618,7 +1645,7 @@
if (rets < 0 && rets != -EINPROGRESS) {
pm_runtime_put_noidle(dev->dev);
cl_err(dev, cl, "rpm: get failed %d\n", rets);
- return rets;
+ goto free;
}
cb->buf_idx = 0;
@@ -1630,7 +1657,7 @@
mei_hdr.msg_complete = 0;
mei_hdr.internal = cb->internal;
- rets = mei_cl_flow_ctrl_creds(cl);
+ rets = mei_cl_flow_ctrl_creds(cl, cb->fp);
if (rets < 0)
goto err;
@@ -1677,7 +1704,8 @@
mutex_unlock(&dev->device_lock);
rets = wait_event_interruptible(cl->tx_wait,
- cl->writing_state == MEI_WRITE_COMPLETE);
+ cl->writing_state == MEI_WRITE_COMPLETE ||
+ (!mei_cl_is_connected(cl)));
mutex_lock(&dev->device_lock);
/* wait_event_interruptible returns -ERESTARTSYS */
if (rets) {
@@ -1685,6 +1713,10 @@
rets = -EINTR;
goto err;
}
+ if (cl->writing_state != MEI_WRITE_COMPLETE) {
+ rets = -EFAULT;
+ goto err;
+ }
}
rets = size;
@@ -1692,6 +1724,8 @@
cl_dbg(dev, cl, "rpm: autosuspend\n");
pm_runtime_mark_last_busy(dev->dev);
pm_runtime_put_autosuspend(dev->dev);
+free:
+ mei_io_cb_free(cb);
return rets;
}
@@ -1721,10 +1755,8 @@
case MEI_FOP_READ:
list_add_tail(&cb->list, &cl->rd_completed);
- if (waitqueue_active(&cl->rx_wait))
- wake_up_interruptible_all(&cl->rx_wait);
- else
- mei_cl_bus_rx_event(cl);
+ if (!mei_cl_bus_rx_event(cl))
+ wake_up_interruptible(&cl->rx_wait);
break;
case MEI_FOP_CONNECT:
@@ -1753,44 +1785,3 @@
list_for_each_entry(cl, &dev->file_list, link)
mei_cl_set_disconnected(cl);
}
-
-
-/**
- * mei_cl_all_wakeup - wake up all readers and writers they can be interrupted
- *
- * @dev: mei device
- */
-void mei_cl_all_wakeup(struct mei_device *dev)
-{
- struct mei_cl *cl;
-
- list_for_each_entry(cl, &dev->file_list, link) {
- if (waitqueue_active(&cl->rx_wait)) {
- cl_dbg(dev, cl, "Waking up reading client!\n");
- wake_up_interruptible(&cl->rx_wait);
- }
- if (waitqueue_active(&cl->tx_wait)) {
- cl_dbg(dev, cl, "Waking up writing client!\n");
- wake_up_interruptible(&cl->tx_wait);
- }
-
- /* synchronized under device mutex */
- if (waitqueue_active(&cl->ev_wait)) {
- cl_dbg(dev, cl, "Waking up waiting for event clients!\n");
- wake_up_interruptible(&cl->ev_wait);
- }
- }
-}
-
-/**
- * mei_cl_all_write_clear - clear all pending writes
- *
- * @dev: mei device
- */
-void mei_cl_all_write_clear(struct mei_device *dev)
-{
- mei_io_list_free(&dev->write_list, NULL);
- mei_io_list_free(&dev->write_waiting_list, NULL);
-}
-
-
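
The client.c changes above wake blocked writers on disconnect (mei_cl_set_disconnected() now frees pending writes and calls mei_cl_wake_all()), and mei_cl_write() re-checks writing_state after the wait so a writer on a dead link fails with -EFAULT instead of sleeping forever. A small pthreads sketch of that wake-on-disconnect pattern, with hypothetical names as user-space stand-ins for the kernel wait queue:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <errno.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool write_complete;
static bool connected = true;

static int blocking_write(void)
{
        pthread_mutex_lock(&lock);
        while (!write_complete && connected)
                pthread_cond_wait(&cond, &lock);   /* ~ wait_event_interruptible() */
        int ret = write_complete ? 0 : -EFAULT;    /* link dropped while waiting */
        pthread_mutex_unlock(&lock);
        return ret;
}

static void *disconnector(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        connected = false;                 /* ~ mei_cl_set_disconnected() */
        pthread_cond_broadcast(&cond);     /* ~ mei_cl_wake_all() */
        pthread_mutex_unlock(&lock);
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, disconnector, NULL);
        printf("write returned %d\n", blocking_write());
        pthread_join(t, NULL);
        return 0;
}
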
diff --git a/drivers/misc/mei/client.h b/drivers/misc/mei/client.h
index 04e1aa3..0d7a3a1 100644
--- a/drivers/misc/mei/client.h
+++ b/drivers/misc/mei/client.h
@@ -18,7 +18,6 @@
#define _MEI_CLIENT_H_
#include <linux/types.h>
-#include <linux/watchdog.h>
#include <linux/poll.h>
#include <linux/mei.h>
@@ -84,7 +83,7 @@
* MEI IO Functions
*/
struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, enum mei_cb_file_ops type,
- struct file *fp);
+ const struct file *fp);
void mei_io_cb_free(struct mei_cl_cb *priv_cb);
int mei_io_cb_alloc_buf(struct mei_cl_cb *cb, size_t length);
@@ -108,21 +107,19 @@
void mei_cl_init(struct mei_cl *cl, struct mei_device *dev);
-int mei_cl_link(struct mei_cl *cl, int id);
+int mei_cl_link(struct mei_cl *cl);
int mei_cl_unlink(struct mei_cl *cl);
-struct mei_cl *mei_cl_alloc_linked(struct mei_device *dev, int id);
+struct mei_cl *mei_cl_alloc_linked(struct mei_device *dev);
struct mei_cl_cb *mei_cl_read_cb(const struct mei_cl *cl,
const struct file *fp);
void mei_cl_read_cb_flush(const struct mei_cl *cl, const struct file *fp);
struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length,
- enum mei_cb_file_ops type, struct file *fp);
+ enum mei_cb_file_ops type,
+ const struct file *fp);
int mei_cl_flush_queues(struct mei_cl *cl, const struct file *fp);
-int mei_cl_flow_ctrl_creds(struct mei_cl *cl);
-
-int mei_cl_flow_ctrl_reduce(struct mei_cl *cl);
/*
* MEI input output function prototype
*/
@@ -217,10 +214,10 @@
int mei_cl_irq_disconnect(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);
int mei_cl_connect(struct mei_cl *cl, struct mei_me_client *me_cl,
- struct file *file);
+ const struct file *file);
int mei_cl_irq_connect(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);
-int mei_cl_read_start(struct mei_cl *cl, size_t length, struct file *fp);
+int mei_cl_read_start(struct mei_cl *cl, size_t length, const struct file *fp);
int mei_cl_irq_read_msg(struct mei_cl *cl, struct mei_msg_hdr *hdr,
struct mei_cl_cb *cmpl_list);
int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking);
@@ -229,19 +226,18 @@
void mei_cl_complete(struct mei_cl *cl, struct mei_cl_cb *cb);
-void mei_host_client_init(struct work_struct *work);
+void mei_host_client_init(struct mei_device *dev);
u8 mei_cl_notify_fop2req(enum mei_cb_file_ops fop);
enum mei_cb_file_ops mei_cl_notify_req2fop(u8 request);
-int mei_cl_notify_request(struct mei_cl *cl, struct file *file, u8 request);
+int mei_cl_notify_request(struct mei_cl *cl,
+ const struct file *file, u8 request);
int mei_cl_irq_notify(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);
int mei_cl_notify_get(struct mei_cl *cl, bool block, bool *notify_ev);
void mei_cl_notify(struct mei_cl *cl);
void mei_cl_all_disconnect(struct mei_device *dev);
-void mei_cl_all_wakeup(struct mei_device *dev);
-void mei_cl_all_write_clear(struct mei_device *dev);
#define MEI_CL_FMT "cl:host=%02d me=%02d "
#define MEI_CL_PRM(cl) (cl)->host_client_id, mei_cl_me_id(cl)
@@ -249,6 +245,9 @@
#define cl_dbg(dev, cl, format, arg...) \
dev_dbg((dev)->dev, MEI_CL_FMT format, MEI_CL_PRM(cl), ##arg)
+#define cl_warn(dev, cl, format, arg...) \
+ dev_warn((dev)->dev, MEI_CL_FMT format, MEI_CL_PRM(cl), ##arg)
+
#define cl_err(dev, cl, format, arg...) \
dev_err((dev)->dev, MEI_CL_FMT format, MEI_CL_PRM(cl), ##arg)
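
The header now threads const struct file * through the I/O callbacks: the file pointer is only an identity tag used to match a cb to its reader, never a handle the driver writes through. A tiny sketch of that usage, assuming nothing beyond pointer comparison:

#include <stdio.h>

struct file { int dummy; };

struct cb {
        const struct file *fp;  /* identity tag only, never mutated through */
};

static int cb_matches(const struct cb *cb, const struct file *fp)
{
        return !fp || fp == cb->fp;  /* NULL matches any, as in mei_cl_read_cb() */
}

int main(void)
{
        struct file f, g;
        struct cb cb = { .fp = &f };

        printf("%d %d %d\n",
               cb_matches(&cb, &f),      /* 1: same file */
               cb_matches(&cb, NULL),    /* 1: wildcard */
               cb_matches(&cb, &g));     /* 0: different file */
        return 0;
}
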
diff --git a/drivers/misc/mei/debugfs.c b/drivers/misc/mei/debugfs.c
index a138d8a..c6c051b 100644
--- a/drivers/misc/mei/debugfs.c
+++ b/drivers/misc/mei/debugfs.c
@@ -50,6 +50,7 @@
}
pos += scnprintf(buf + pos, bufsz - pos, HDR);
+#undef HDR
/* if the driver is not enabled the list won't be consistent */
if (dev->dev_state != MEI_DEV_ENABLED)
@@ -90,24 +91,38 @@
{
struct mei_device *dev = fp->private_data;
struct mei_cl *cl;
- const size_t bufsz = 1024;
+ size_t bufsz = 1;
char *buf;
int i = 0;
int pos = 0;
int ret;
+#define HDR " |me|host|state|rd|wr|\n"
+
if (!dev)
return -ENODEV;
- buf = kzalloc(bufsz, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
-
- pos += scnprintf(buf + pos, bufsz - pos,
- " |me|host|state|rd|wr|\n");
-
mutex_lock(&dev->device_lock);
+ /*
+ * if the driver is not enabled the list won't be consistent,
+ * we output an empty table
+ */
+ if (dev->dev_state == MEI_DEV_ENABLED)
+ list_for_each_entry(cl, &dev->file_list, link)
+ bufsz++;
+
+ bufsz *= sizeof(HDR) + 1;
+
+ buf = kzalloc(bufsz, GFP_KERNEL);
+ if (!buf) {
+ mutex_unlock(&dev->device_lock);
+ return -ENOMEM;
+ }
+
+ pos += scnprintf(buf + pos, bufsz - pos, HDR);
+#undef HDR
+
/* if the driver is not enabled the list won't be consistent */
if (dev->dev_state != MEI_DEV_ENABLED)
goto out;
@@ -115,7 +130,7 @@
list_for_each_entry(cl, &dev->file_list, link) {
pos += scnprintf(buf + pos, bufsz - pos,
- "%2d|%2d|%4d|%5d|%2d|%2d|\n",
+ "%3d|%2d|%4d|%5d|%2d|%2d|\n",
i, mei_cl_me_id(cl), cl->host_client_id, cl->state,
!list_empty(&cl->rd_completed), cl->writing_state);
i++;
@@ -150,16 +165,21 @@
pos += scnprintf(buf + pos, bufsz - pos, "hbm: %s\n",
mei_hbm_state_str(dev->hbm_state));
- if (dev->hbm_state == MEI_HBM_STARTED) {
+ if (dev->hbm_state >= MEI_HBM_ENUM_CLIENTS &&
+ dev->hbm_state <= MEI_HBM_STARTED) {
pos += scnprintf(buf + pos, bufsz - pos, "hbm features:\n");
pos += scnprintf(buf + pos, bufsz - pos, "\tPG: %01d\n",
dev->hbm_f_pg_supported);
pos += scnprintf(buf + pos, bufsz - pos, "\tDC: %01d\n",
dev->hbm_f_dc_supported);
+ pos += scnprintf(buf + pos, bufsz - pos, "\tIE: %01d\n",
+ dev->hbm_f_ie_supported);
pos += scnprintf(buf + pos, bufsz - pos, "\tDOT: %01d\n",
dev->hbm_f_dot_supported);
pos += scnprintf(buf + pos, bufsz - pos, "\tEV: %01d\n",
dev->hbm_f_ev_supported);
+ pos += scnprintf(buf + pos, bufsz - pos, "\tFA: %01d\n",
+ dev->hbm_f_fa_supported);
}
pos += scnprintf(buf + pos, bufsz - pos, "pg: %s, %s\n",
@@ -175,6 +195,30 @@
.llseek = generic_file_llseek,
};
+static ssize_t mei_dbgfs_write_allow_fa(struct file *file,
+ const char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct mei_device *dev;
+ int ret;
+
+ dev = container_of(file->private_data,
+ struct mei_device, allow_fixed_address);
+
+ ret = debugfs_write_file_bool(file, user_buf, count, ppos);
+ if (ret < 0)
+ return ret;
+ dev->override_fixed_address = true;
+ return ret;
+}
+
+static const struct file_operations mei_dbgfs_fops_allow_fa = {
+ .open = simple_open,
+ .read = debugfs_read_file_bool,
+ .write = mei_dbgfs_write_allow_fa,
+ .llseek = generic_file_llseek,
+};
+
/**
* mei_dbgfs_deregister - Remove the debugfs files and directories
*
@@ -224,8 +268,9 @@
dev_err(dev->dev, "devstate: registration failed\n");
goto err;
}
- f = debugfs_create_bool("allow_fixed_address", S_IRUSR | S_IWUSR, dir,
- &dev->allow_fixed_address);
+ f = debugfs_create_file("allow_fixed_address", S_IRUSR | S_IWUSR, dir,
+ &dev->allow_fixed_address,
+ &mei_dbgfs_fops_allow_fa);
if (!f) {
dev_err(dev->dev, "allow_fixed_address: registration failed\n");
goto err;
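
The connections-show fix above sizes the buffer from the live client count, one row of sizeof(HDR) + 1 bytes per client plus the header, instead of a fixed 1024 bytes, and takes device_lock before allocating so the count cannot change underneath. A user-space sketch of the sizing arithmetic (the client count and row values are made up):

#include <stdio.h>
#include <stdlib.h>

#define HDR "  |me|host|state|rd|wr|\n"

int main(void)
{
        int nclients = 40;   /* pretend length of dev->file_list */
        /* one row per client plus the header; each row fits in sizeof(HDR) + 1 */
        size_t bufsz = (size_t)(nclients + 1) * (sizeof(HDR) + 1);
        char *buf = calloc(1, bufsz);
        int pos, i;

        if (!buf)
                return 1;

        pos = snprintf(buf, bufsz, HDR);
        for (i = 0; i < nclients; i++)
                pos += snprintf(buf + pos, bufsz - pos,
                                "%3d|%2d|%4d|%5d|%2d|%2d|\n",
                                i, i, i + 1, 0, 0, 0);
        fputs(buf, stdout);
        free(buf);
        return 0;
}
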
diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
index e7b7aad..5e305d2 100644
--- a/drivers/misc/mei/hbm.c
+++ b/drivers/misc/mei/hbm.c
@@ -301,7 +301,10 @@
enum_req = (struct hbm_host_enum_request *)dev->wr_msg.data;
memset(enum_req, 0, len);
enum_req->hbm_cmd = HOST_ENUM_REQ_CMD;
- enum_req->allow_add = dev->hbm_f_dc_supported;
+ enum_req->flags |= dev->hbm_f_dc_supported ?
+ MEI_HBM_ENUM_F_ALLOW_ADD : 0;
+ enum_req->flags |= dev->hbm_f_ie_supported ?
+ MEI_HBM_ENUM_F_IMMEDIATE_ENUM : 0;
ret = mei_write_message(dev, mei_hdr, dev->wr_msg.data);
if (ret) {
@@ -401,6 +404,9 @@
if (ret)
status = !MEI_HBMS_SUCCESS;
+ if (dev->dev_state == MEI_DEV_ENABLED)
+ schedule_work(&dev->bus_rescan_work);
+
return mei_hbm_add_cl_resp(dev, req->me_addr, status);
}
@@ -543,7 +549,7 @@
/* We got all client properties */
if (next_client_index == MEI_CLIENTS_MAX) {
dev->hbm_state = MEI_HBM_STARTED;
- schedule_work(&dev->init_work);
+ mei_host_client_init(dev);
return 0;
}
@@ -789,8 +795,11 @@
cl->state = MEI_FILE_CONNECTED;
else {
cl->state = MEI_FILE_DISCONNECT_REPLY;
- if (rs->status == MEI_CL_CONN_NOT_FOUND)
+ if (rs->status == MEI_CL_CONN_NOT_FOUND) {
mei_me_cl_del(dev, cl->me_cl);
+ if (dev->dev_state == MEI_DEV_ENABLED)
+ schedule_work(&dev->bus_rescan_work);
+ }
}
cl->status = mei_cl_conn_status_to_errno(rs->status);
}
@@ -866,7 +875,7 @@
cl = mei_hbm_cl_find_by_cmd(dev, disconnect_req);
if (cl) {
- cl_dbg(dev, cl, "fw disconnect request received\n");
+ cl_warn(dev, cl, "fw disconnect request received\n");
cl->state = MEI_FILE_DISCONNECTING;
cl->timer_count = 0;
@@ -972,6 +981,9 @@
if (dev->version.major_version >= HBM_MAJOR_VERSION_DC)
dev->hbm_f_dc_supported = 1;
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_IE)
+ dev->hbm_f_ie_supported = 1;
+
/* disconnect on connect timeout instead of link reset */
if (dev->version.major_version >= HBM_MAJOR_VERSION_DOT)
dev->hbm_f_dot_supported = 1;
@@ -979,6 +991,10 @@
/* Notification Event Support */
if (dev->version.major_version >= HBM_MAJOR_VERSION_EV)
dev->hbm_f_ev_supported = 1;
+
+ /* Fixed Address Client Support */
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_FA)
+ dev->hbm_f_fa_supported = 1;
}
/**
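
mei_hbm_config_features() gains the IE and FA capability bits using the same gate as the existing ones: a feature is enabled once the firmware's advertised HBM major version reaches the version that introduced it. A compressed sketch of that gating (constants mirror hw.h; the firmware version is canned):

#include <stdio.h>

#define HBM_MAJOR_VERSION_DC 2   /* dynamic clients */
#define HBM_MAJOR_VERSION_IE 2   /* immediate enum reply */
#define HBM_MAJOR_VERSION_FA 2   /* fixed address clients */

int main(void)
{
        int fw_major = 2;        /* from the HBM version handshake */
        int dc = fw_major >= HBM_MAJOR_VERSION_DC;
        int ie = fw_major >= HBM_MAJOR_VERSION_IE;
        int fa = fw_major >= HBM_MAJOR_VERSION_FA;

        printf("DC:%d IE:%d FA:%d\n", dc, ie, fa);
        return 0;
}
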
diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
index 25b1997..e2fb44c 100644
--- a/drivers/misc/mei/hw-me.c
+++ b/drivers/misc/mei/hw-me.c
@@ -189,8 +189,11 @@
fw_status->count = fw_src->count;
for (i = 0; i < fw_src->count && i < MEI_FW_STATUS_MAX; i++) {
- ret = pci_read_config_dword(pdev,
- fw_src->status[i], &fw_status->status[i]);
+ ret = pci_read_config_dword(pdev, fw_src->status[i],
+ &fw_status->status[i]);
+ trace_mei_pci_cfg_read(dev->dev, "PCI_CFG_HSF_X",
+ fw_src->status[i],
+ fw_status->status[i]);
if (ret)
return ret;
}
@@ -215,6 +218,7 @@
reg = 0;
pci_read_config_dword(pdev, PCI_CFG_HFS_1, ®);
+ trace_mei_pci_cfg_read(dev->dev, "PCI_CFG_HFS_1", PCI_CFG_HFS_1, reg);
hw->d0i3_supported =
((reg & PCI_CFG_HFS_1_D0I3_MSK) == PCI_CFG_HFS_1_D0I3_MSK);
@@ -1248,6 +1252,7 @@
u32 reg;
pci_read_config_dword(pdev, PCI_CFG_HFS_2, ®);
+ trace_mei_pci_cfg_read(&pdev->dev, "PCI_CFG_HFS_2", PCI_CFG_HFS_2, reg);
/* make sure that bit 9 (NM) is up and bit 10 (DM) is down */
return (reg & 0x600) == 0x200;
}
@@ -1260,6 +1265,7 @@
u32 reg;
/* Read ME FW Status check for SPS Firmware */
pci_read_config_dword(pdev, PCI_CFG_HFS_1, ®);
+ trace_mei_pci_cfg_read(&pdev->dev, "PCI_CFG_HFS_1", PCI_CFG_HFS_1, reg);
/* if bits [19:16] = 15, running SPS Firmware */
return (reg & 0xf0000) == 0xf0000;
}
diff --git a/drivers/misc/mei/hw-txe.c b/drivers/misc/mei/hw-txe.c
index bae680c..4a6c1b8 100644
--- a/drivers/misc/mei/hw-txe.c
+++ b/drivers/misc/mei/hw-txe.c
@@ -28,6 +28,9 @@
#include "client.h"
#include "hbm.h"
+#include "mei-trace.h"
+
+
/**
* mei_txe_reg_read - Reads 32bit data from the txe device
*
@@ -640,8 +643,11 @@
fw_status->count = fw_src->count;
for (i = 0; i < fw_src->count && i < MEI_FW_STATUS_MAX; i++) {
- ret = pci_read_config_dword(pdev,
- fw_src->status[i], &fw_status->status[i]);
+ ret = pci_read_config_dword(pdev, fw_src->status[i],
+ &fw_status->status[i]);
+ trace_mei_pci_cfg_read(dev->dev, "PCI_CFG_HSF_X",
+ fw_src->status[i],
+ fw_status->status[i]);
if (ret)
return ret;
}
diff --git a/drivers/misc/mei/hw.h b/drivers/misc/mei/hw.h
index 4cebde8..9daf3f9 100644
--- a/drivers/misc/mei/hw.h
+++ b/drivers/misc/mei/hw.h
@@ -29,7 +29,6 @@
#define MEI_CLIENTS_INIT_TIMEOUT 15 /* HPS: Clients Enumeration Timeout */
#define MEI_IAMTHIF_STALL_TIMER 12 /* HPS */
-#define MEI_IAMTHIF_READ_TIMER 10 /* HPS */
#define MEI_PGI_TIMEOUT 1 /* PG Isolation time response 1 sec */
#define MEI_D0I3_TIMEOUT 5 /* D0i3 set/unset max response time */
@@ -54,6 +53,12 @@
#define HBM_MAJOR_VERSION_DC 2
/*
+ * MEI version with immediate reply to enum request support
+ */
+#define HBM_MINOR_VERSION_IE 0
+#define HBM_MAJOR_VERSION_IE 2
+
+/*
* MEI version with disconnect on connection timeout support
*/
#define HBM_MINOR_VERSION_DOT 0
@@ -65,6 +70,12 @@
#define HBM_MINOR_VERSION_EV 0
#define HBM_MAJOR_VERSION_EV 2
+/*
+ * MEI version with fixed address client support
+ */
+#define HBM_MINOR_VERSION_FA 0
+#define HBM_MAJOR_VERSION_FA 2
+
/* Host bus message command opcode */
#define MEI_HBM_CMD_OP_MSK 0x7f
/* Host bus message command RESPONSE */
@@ -241,15 +252,26 @@
} __packed;
/**
- * struct hbm_host_enum_request - enumeration request from host to fw
+ * enum hbm_host_enum_flags - enumeration request flags (HBM version >= 2.0)
*
- * @hbm_cmd: bus message command header
- * @allow_add: allow dynamic clients add HBM version >= 2.0
+ * @MEI_HBM_ENUM_F_ALLOW_ADD: allow dynamic clients add
+ * @MEI_HBM_ENUM_F_IMMEDIATE_ENUM: allow FW to send answer immediately
+ */
+enum hbm_host_enum_flags {
+ MEI_HBM_ENUM_F_ALLOW_ADD = BIT(0),
+ MEI_HBM_ENUM_F_IMMEDIATE_ENUM = BIT(1),
+};
+
+/**
+ * struct hbm_host_enum_request - enumeration request from host to fw
+ *
+ * @hbm_cmd : bus message command header
+ * @flags : request flags
* @reserved: reserved
*/
struct hbm_host_enum_request {
u8 hbm_cmd;
- u8 allow_add;
+ u8 flags;
u8 reserved[2];
} __packed;
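
The enum request's former allow_add byte becomes a flags byte built from BIT() masks, so further enumeration options can be added without growing the message. A sketch of composing that byte, with canned feature values:

#include <stdint.h>
#include <stdio.h>

#define BIT(n) (1u << (n))

enum {
        ENUM_F_ALLOW_ADD      = BIT(0),  /* dynamic client add */
        ENUM_F_IMMEDIATE_ENUM = BIT(1),  /* fw may answer immediately */
};

int main(void)
{
        int dc_supported = 1, ie_supported = 0;  /* from the version handshake */
        uint8_t flags = 0;

        flags |= dc_supported ? ENUM_F_ALLOW_ADD : 0;
        flags |= ie_supported ? ENUM_F_IMMEDIATE_ENUM : 0;
        printf("enum request flags = 0x%02x\n", (unsigned)flags);
        return 0;
}
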
diff --git a/drivers/misc/mei/init.c b/drivers/misc/mei/init.c
index 3edafc8..f7c8dfd 100644
--- a/drivers/misc/mei/init.c
+++ b/drivers/misc/mei/init.c
@@ -91,8 +91,8 @@
*/
void mei_cancel_work(struct mei_device *dev)
{
- cancel_work_sync(&dev->init_work);
cancel_work_sync(&dev->reset_work);
+ cancel_work_sync(&dev->bus_rescan_work);
cancel_delayed_work(&dev->timer_work);
}
@@ -148,16 +148,10 @@
state != MEI_DEV_POWER_UP) {
/* remove all waiting requests */
- mei_cl_all_write_clear(dev);
-
mei_cl_all_disconnect(dev);
- /* wake up all readers and writers so they can be interrupted */
- mei_cl_all_wakeup(dev);
-
/* remove entry if already in list */
- dev_dbg(dev->dev, "remove iamthif and wd from the file list.\n");
- mei_cl_unlink(&dev->wd_cl);
+ dev_dbg(dev->dev, "remove iamthif from the file list.\n");
mei_cl_unlink(&dev->iamthif_cl);
mei_amthif_reset_params(dev);
}
@@ -165,7 +159,6 @@
mei_hbm_reset(dev);
dev->rd_msg_hdr = 0;
- dev->wd_pending = false;
if (ret) {
dev_err(dev->dev, "hw_reset failed ret = %d\n", ret);
@@ -335,16 +328,12 @@
mutex_lock(&dev->device_lock);
- mei_wd_stop(dev);
-
dev->dev_state = MEI_DEV_POWER_DOWN;
mei_reset(dev);
/* move device to disabled state unconditionally */
dev->dev_state = MEI_DEV_DISABLED;
mutex_unlock(&dev->device_lock);
-
- mei_watchdog_unregister(dev);
}
EXPORT_SYMBOL_GPL(mei_stop);
@@ -394,7 +383,6 @@
init_waitqueue_head(&dev->wait_hw_ready);
init_waitqueue_head(&dev->wait_pg);
init_waitqueue_head(&dev->wait_hbm_start);
- init_waitqueue_head(&dev->wait_stop_wd);
dev->dev_state = MEI_DEV_INITIALIZING;
dev->reset_count = 0;
@@ -404,13 +392,11 @@
mei_io_list_init(&dev->ctrl_rd_list);
INIT_DELAYED_WORK(&dev->timer_work, mei_timer);
- INIT_WORK(&dev->init_work, mei_host_client_init);
INIT_WORK(&dev->reset_work, mei_reset_work);
+ INIT_WORK(&dev->bus_rescan_work, mei_cl_bus_rescan_work);
- INIT_LIST_HEAD(&dev->wd_cl.link);
INIT_LIST_HEAD(&dev->iamthif_cl.link);
mei_io_list_init(&dev->amthif_cmd_list);
- mei_io_list_init(&dev->amthif_rd_complete_list);
bitmap_zero(dev->host_clients_map, MEI_CLIENTS_MAX);
dev->open_handle_count = 0;
diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
index 64b568a..1e5cb1f 100644
--- a/drivers/misc/mei/interrupt.c
+++ b/drivers/misc/mei/interrupt.c
@@ -48,7 +48,7 @@
dev_dbg(dev->dev, "completing call back.\n");
if (cl == &dev->iamthif_cl)
- mei_amthif_complete(dev, cb);
+ mei_amthif_complete(cl, cb);
else
mei_cl_complete(cl, cb);
}
@@ -104,6 +104,7 @@
struct mei_device *dev = cl->dev;
struct mei_cl_cb *cb;
unsigned char *buffer = NULL;
+ size_t buf_sz;
cb = list_first_entry_or_null(&cl->rd_pending, struct mei_cl_cb, list);
if (!cb) {
@@ -124,11 +125,21 @@
goto out;
}
- if (cb->buf.size < mei_hdr->length + cb->buf_idx) {
- cl_dbg(dev, cl, "message overflow. size %d len %d idx %ld\n",
+ buf_sz = mei_hdr->length + cb->buf_idx;
+ /* catch for integer overflow */
+ if (buf_sz < cb->buf_idx) {
+ cl_err(dev, cl, "message is too big len %d idx %zu\n",
+ mei_hdr->length, cb->buf_idx);
+
+ list_move_tail(&cb->list, &complete_list->list);
+ cb->status = -EMSGSIZE;
+ goto out;
+ }
+
+ if (cb->buf.size < buf_sz) {
+ cl_dbg(dev, cl, "message overflow. size %zu len %d idx %zu\n",
cb->buf.size, mei_hdr->length, cb->buf_idx);
- buffer = krealloc(cb->buf.data, mei_hdr->length + cb->buf_idx,
- GFP_KERNEL);
+ buffer = krealloc(cb->buf.data, buf_sz, GFP_KERNEL);
if (!buffer) {
cb->status = -ENOMEM;
@@ -136,7 +147,7 @@
goto out;
}
cb->buf.data = buffer;
- cb->buf.size = mei_hdr->length + cb->buf_idx;
+ cb->buf.size = buf_sz;
}
buffer = cb->buf.data + cb->buf_idx;
@@ -145,8 +156,7 @@
cb->buf_idx += mei_hdr->length;
if (mei_hdr->msg_complete) {
- cb->read_time = jiffies;
- cl_dbg(dev, cl, "completed read length = %lu\n", cb->buf_idx);
+ cl_dbg(dev, cl, "completed read length = %zu\n", cb->buf_idx);
list_move_tail(&cb->list, &complete_list->list);
} else {
pm_runtime_mark_last_busy(dev->dev);
@@ -229,6 +239,16 @@
return 0;
}
+static inline bool hdr_is_hbm(struct mei_msg_hdr *mei_hdr)
+{
+ return mei_hdr->host_addr == 0 && mei_hdr->me_addr == 0;
+}
+
+static inline bool hdr_is_fixed(struct mei_msg_hdr *mei_hdr)
+{
+ return mei_hdr->host_addr == 0 && mei_hdr->me_addr != 0;
+}
+
/**
* mei_irq_read_handler - bottom half read routine after ISR to
* handle the read processing.
@@ -270,7 +290,7 @@
}
/* HBM message */
- if (mei_hdr->host_addr == 0 && mei_hdr->me_addr == 0) {
+ if (hdr_is_hbm(mei_hdr)) {
ret = mei_hbm_dispatch(dev, mei_hdr);
if (ret) {
dev_dbg(dev->dev, "mei_hbm_dispatch failed ret = %d\n",
@@ -290,6 +310,14 @@
/* if no recipient cl was found we assume corrupted header */
if (&cl->link == &dev->file_list) {
+ /* A message for a fixed address client that is not connected
+ * should be silently discarded
+ */
+ if (hdr_is_fixed(mei_hdr)) {
+ mei_irq_discard_msg(dev, mei_hdr);
+ ret = 0;
+ goto reset_slots;
+ }
dev_err(dev->dev, "no destination client found 0x%08X\n",
dev->rd_msg_hdr);
ret = -EBADMSG;
@@ -360,21 +388,6 @@
list_move_tail(&cb->list, &cmpl_list->list);
}
- if (dev->wd_state == MEI_WD_STOPPING) {
- dev->wd_state = MEI_WD_IDLE;
- wake_up(&dev->wait_stop_wd);
- }
-
- if (mei_cl_is_connected(&dev->wd_cl)) {
- if (dev->wd_pending &&
- mei_cl_flow_ctrl_creds(&dev->wd_cl) > 0) {
- ret = mei_wd_send(dev);
- if (ret)
- return ret;
- dev->wd_pending = false;
- }
- }
-
/* complete control write list CB */
dev_dbg(dev->dev, "complete control write list cb.\n");
list_for_each_entry_safe(cb, next, &dev->ctrl_wr_list.list, list) {
@@ -462,7 +475,6 @@
*/
void mei_timer(struct work_struct *work)
{
- unsigned long timeout;
struct mei_cl *cl;
struct mei_device *dev = container_of(work,
@@ -508,45 +520,15 @@
mei_reset(dev);
dev->iamthif_canceled = false;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
- dev->iamthif_timer = 0;
mei_io_cb_free(dev->iamthif_current_cb);
dev->iamthif_current_cb = NULL;
- dev->iamthif_file_object = NULL;
+ dev->iamthif_fp = NULL;
mei_amthif_run_next_cmd(dev);
}
}
- if (dev->iamthif_timer) {
-
- timeout = dev->iamthif_timer +
- mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
-
- dev_dbg(dev->dev, "dev->iamthif_timer = %ld\n",
- dev->iamthif_timer);
- dev_dbg(dev->dev, "timeout = %ld\n", timeout);
- dev_dbg(dev->dev, "jiffies = %ld\n", jiffies);
- if (time_after(jiffies, timeout)) {
- /*
- * User didn't read the AMTHI data on time (15sec)
- * freeing AMTHI for other requests
- */
-
- dev_dbg(dev->dev, "freeing AMTHI for other requests\n");
-
- mei_io_list_flush(&dev->amthif_rd_complete_list,
- &dev->iamthif_cl);
- mei_io_cb_free(dev->iamthif_current_cb);
- dev->iamthif_current_cb = NULL;
-
- dev->iamthif_file_object->private_data = NULL;
- dev->iamthif_file_object = NULL;
- dev->iamthif_timer = 0;
- mei_amthif_run_next_cmd(dev);
-
- }
- }
out:
if (dev->dev_state != MEI_DEV_DISABLED)
schedule_delayed_work(&dev->timer_work, 2 * HZ);
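
The interrupt.c read path now computes buf_sz = length + buf_idx once and rejects the message with -EMSGSIZE when the unsigned addition wraps, instead of feeding an overflowed size to krealloc(). The check relies on the fact that for unsigned a and b, a + b < a exactly when the sum wrapped. A runnable sketch of that guard:

#include <stddef.h>
#include <stdio.h>

/* Returns 0 and stores the sum, or -1 on size_t overflow. */
static int checked_add(size_t idx, size_t len, size_t *out)
{
        size_t sum = idx + len;

        if (sum < idx)          /* wrapped: idx + len overflowed size_t */
                return -1;
        *out = sum;
        return 0;
}

int main(void)
{
        size_t total;

        if (checked_add((size_t)-4, 16, &total))
                puts("message too big, rejected with -EMSGSIZE");
        else
                printf("new buffer size %zu\n", total);
        return 0;
}
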
diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
index 677d0362..52635b0 100644
--- a/drivers/misc/mei/main.c
+++ b/drivers/misc/mei/main.c
@@ -65,7 +65,7 @@
goto err_unlock;
}
- cl = mei_cl_alloc_linked(dev, MEI_HOST_CLIENT_ID_ANY);
+ cl = mei_cl_alloc_linked(dev);
if (IS_ERR(cl)) {
err = PTR_ERR(cl);
goto err_unlock;
@@ -159,27 +159,22 @@
goto out;
}
+ if (ubuf == NULL) {
+ rets = -EMSGSIZE;
+ goto out;
+ }
+
if (cl == &dev->iamthif_cl) {
rets = mei_amthif_read(dev, file, ubuf, length, offset);
goto out;
}
cb = mei_cl_read_cb(cl, file);
- if (cb) {
- /* read what left */
- if (cb->buf_idx > *offset)
- goto copy_buffer;
- /* offset is beyond buf_idx we have no more data return 0 */
- if (cb->buf_idx > 0 && cb->buf_idx <= *offset) {
- rets = 0;
- goto free;
- }
- /* Offset needs to be cleaned for contiguous reads*/
- if (cb->buf_idx == 0 && *offset > 0)
- *offset = 0;
- } else if (*offset > 0) {
+ if (cb)
+ goto copy_buffer;
+
+ if (*offset > 0)
*offset = 0;
- }
err = mei_cl_read_start(cl, length, file);
if (err && err != -EBUSY) {
@@ -214,11 +209,6 @@
cb = mei_cl_read_cb(cl, file);
if (!cb) {
- if (mei_cl_is_fixed_address(cl) && dev->allow_fixed_address) {
- cb = mei_cl_read_cb(cl, NULL);
- if (cb)
- goto copy_buffer;
- }
rets = 0;
goto out;
}
@@ -231,10 +221,10 @@
goto free;
}
- cl_dbg(dev, cl, "buf.size = %d buf.idx = %ld\n",
- cb->buf.size, cb->buf_idx);
- if (length == 0 || ubuf == NULL || *offset > cb->buf_idx) {
- rets = -EMSGSIZE;
+ cl_dbg(dev, cl, "buf.size = %zu buf.idx = %zu offset = %lld\n",
+ cb->buf.size, cb->buf_idx, *offset);
+ if (*offset >= cb->buf_idx) {
+ rets = 0;
goto free;
}
@@ -250,11 +240,13 @@
rets = length;
*offset += length;
- if ((unsigned long)*offset < cb->buf_idx)
+ /* not all data was read, keep the cb */
+ if (*offset < cb->buf_idx)
goto out;
free:
mei_io_cb_free(cb);
+ *offset = 0;
out:
cl_dbg(dev, cl, "end mei read rets = %d\n", rets);
@@ -275,9 +267,8 @@
size_t length, loff_t *offset)
{
struct mei_cl *cl = file->private_data;
- struct mei_cl_cb *write_cb = NULL;
+ struct mei_cl_cb *cb;
struct mei_device *dev;
- unsigned long timeout = 0;
int rets;
if (WARN_ON(!cl || !cl->dev))
@@ -313,52 +304,31 @@
goto out;
}
- if (cl == &dev->iamthif_cl) {
- write_cb = mei_amthif_find_read_list_entry(dev, file);
-
- if (write_cb) {
- timeout = write_cb->read_time +
- mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
-
- if (time_after(jiffies, timeout)) {
- *offset = 0;
- mei_io_cb_free(write_cb);
- write_cb = NULL;
- }
- }
- }
-
*offset = 0;
- write_cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file);
- if (!write_cb) {
+ cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file);
+ if (!cb) {
rets = -ENOMEM;
goto out;
}
- rets = copy_from_user(write_cb->buf.data, ubuf, length);
+ rets = copy_from_user(cb->buf.data, ubuf, length);
if (rets) {
dev_dbg(dev->dev, "failed to copy data from userland\n");
rets = -EFAULT;
+ mei_io_cb_free(cb);
goto out;
}
if (cl == &dev->iamthif_cl) {
- rets = mei_amthif_write(cl, write_cb);
-
- if (rets) {
- dev_err(dev->dev,
- "amthif write failed with status = %d\n", rets);
- goto out;
- }
- mutex_unlock(&dev->device_lock);
- return length;
+ rets = mei_amthif_write(cl, cb);
+ if (!rets)
+ rets = length;
+ goto out;
}
- rets = mei_cl_write(cl, write_cb, false);
+ rets = mei_cl_write(cl, cb, false);
out:
mutex_unlock(&dev->device_lock);
- if (rets < 0)
- mei_io_cb_free(write_cb);
return rets;
}
@@ -393,12 +363,22 @@
/* find ME client we're trying to connect to */
me_cl = mei_me_cl_by_uuid(dev, &data->in_client_uuid);
- if (!me_cl ||
- (me_cl->props.fixed_address && !dev->allow_fixed_address)) {
+ if (!me_cl) {
dev_dbg(dev->dev, "Cannot connect to FW Client UUID = %pUl\n",
&data->in_client_uuid);
- mei_me_cl_put(me_cl);
- return -ENOTTY;
+ rets = -ENOTTY;
+ goto end;
+ }
+
+ if (me_cl->props.fixed_address) {
+ bool forbidden = dev->override_fixed_address ?
+ !dev->allow_fixed_address : !dev->hbm_f_fa_supported;
+ if (forbidden) {
+ dev_dbg(dev->dev, "Connection forbidden to FW Client UUID = %pUl\n",
+ &data->in_client_uuid);
+ rets = -ENOTTY;
+ goto end;
+ }
}
dev_dbg(dev->dev, "Connect to FW Client ID = %d\n",
@@ -454,11 +434,15 @@
*
* Return: 0 on success, <0 on error
*/
-static int mei_ioctl_client_notify_request(struct file *file, u32 request)
+static int mei_ioctl_client_notify_request(const struct file *file, u32 request)
{
struct mei_cl *cl = file->private_data;
- return mei_cl_notify_request(cl, file, request);
+ if (request != MEI_HBM_NOTIFICATION_START &&
+ request != MEI_HBM_NOTIFICATION_STOP)
+ return -EINVAL;
+
+ return mei_cl_notify_request(cl, file, (u8)request);
}
/**
@@ -469,7 +453,7 @@
*
* Return: 0 on success, <0 on error
*/
-static int mei_ioctl_client_notify_get(struct file *file, u32 *notify_get)
+static int mei_ioctl_client_notify_get(const struct file *file, u32 *notify_get)
{
struct mei_cl *cl = file->private_data;
bool notify_ev;
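
mei_read() is reduced to offset bookkeeping against the completed cb: return 0 at end of buffer, keep the cb while *offset < buf_idx, and free it and reset the offset once everything has been copied out. A user-space model of that loop (the payload and sizes are invented):

#include <stdio.h>
#include <string.h>

static const char payload[] = "0123456789";
static size_t buf_idx = sizeof(payload) - 1;  /* bytes available in the cb */
static int cb_live = 1;

static long do_read(char *ubuf, size_t length, long long *offset)
{
        size_t n;

        if (!cb_live || (size_t)*offset >= buf_idx) {  /* nothing left: EOF */
                cb_live = 0;
                *offset = 0;
                return 0;
        }
        n = buf_idx - (size_t)*offset;
        if (n > length)
                n = length;
        memcpy(ubuf, payload + *offset, n);
        *offset += n;
        if ((size_t)*offset >= buf_idx) {  /* fully consumed: drop the cb */
                cb_live = 0;
                *offset = 0;
        }
        return (long)n;
}

int main(void)
{
        char tmp[4];
        long long off = 0;
        long n;

        while ((n = do_read(tmp, sizeof(tmp), &off)) > 0)
                printf("read %ld bytes\n", n);
        return 0;
}
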
diff --git a/drivers/misc/mei/mei-trace.c b/drivers/misc/mei/mei-trace.c
index 388efb5..e19e6ac 100644
--- a/drivers/misc/mei/mei-trace.c
+++ b/drivers/misc/mei/mei-trace.c
@@ -22,4 +22,6 @@
EXPORT_TRACEPOINT_SYMBOL(mei_reg_read);
EXPORT_TRACEPOINT_SYMBOL(mei_reg_write);
+EXPORT_TRACEPOINT_SYMBOL(mei_pci_cfg_read);
+EXPORT_TRACEPOINT_SYMBOL(mei_pci_cfg_write);
#endif /* __CHECKER__ */
diff --git a/drivers/misc/mei/mei-trace.h b/drivers/misc/mei/mei-trace.h
index 47e1bc6..7d2d5d4 100644
--- a/drivers/misc/mei/mei-trace.h
+++ b/drivers/misc/mei/mei-trace.h
@@ -60,7 +60,45 @@
__entry->offs = offs;
__entry->val = val;
),
- TP_printk("[%s] write %s[%#x] = %#x)",
+ TP_printk("[%s] write %s[%#x] = %#x",
+ __get_str(dev), __entry->reg, __entry->offs, __entry->val)
+);
+
+TRACE_EVENT(mei_pci_cfg_read,
+ TP_PROTO(const struct device *dev, const char *reg, u32 offs, u32 val),
+ TP_ARGS(dev, reg, offs, val),
+ TP_STRUCT__entry(
+ __string(dev, dev_name(dev))
+ __field(const char *, reg)
+ __field(u32, offs)
+ __field(u32, val)
+ ),
+ TP_fast_assign(
+ __assign_str(dev, dev_name(dev))
+ __entry->reg = reg;
+ __entry->offs = offs;
+ __entry->val = val;
+ ),
+ TP_printk("[%s] pci cfg read %s:[%#x] = %#x",
+ __get_str(dev), __entry->reg, __entry->offs, __entry->val)
+);
+
+TRACE_EVENT(mei_pci_cfg_write,
+ TP_PROTO(const struct device *dev, const char *reg, u32 offs, u32 val),
+ TP_ARGS(dev, reg, offs, val),
+ TP_STRUCT__entry(
+ __string(dev, dev_name(dev))
+ __field(const char *, reg)
+ __field(u32, offs)
+ __field(u32, val)
+ ),
+ TP_fast_assign(
+ __assign_str(dev, dev_name(dev))
+ __entry->reg = reg;
+ __entry->offs = offs;
+ __entry->val = val;
+ ),
+ TP_printk("[%s] pci cfg write %s[%#x] = %#x",
__get_str(dev), __entry->reg, __entry->offs, __entry->val)
);
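
The two new tracepoints mirror every PCI config access with the register's name, offset and value, which is what hw-me.c and hw-txe.c now call around pci_read_config_dword(). A user-space approximation, with printf standing in for the tracepoint and a canned register value; the bit test shown is the SPS check from hw-me.c:

#include <stdint.h>
#include <stdio.h>

static void trace_pci_cfg_read(const char *dev, const char *reg,
                               uint32_t offs, uint32_t val)
{
        printf("[%s] pci cfg read %s:[%#x] = %#x\n", dev, reg, offs, val);
}

/* hypothetical stand-in for pci_read_config_dword() */
static int cfg_read(uint32_t offs, uint32_t *val)
{
        (void)offs;
        *val = 0xf0000;  /* canned firmware status */
        return 0;
}

int main(void)
{
        uint32_t reg = 0;

        cfg_read(0x40, &reg);
        trace_pci_cfg_read("0000:00:16.0", "PCI_CFG_HFS_1", 0x40, reg);
        /* bits [19:16] == 15 means SPS firmware is running */
        printf("SPS firmware: %s\n", (reg & 0xf0000) == 0xf0000 ? "yes" : "no");
        return 0;
}
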
diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h
index 4250555..db78e6d 100644
--- a/drivers/misc/mei/mei_dev.h
+++ b/drivers/misc/mei/mei_dev.h
@@ -18,7 +18,7 @@
#define _MEI_DEV_H_
#include <linux/types.h>
-#include <linux/watchdog.h>
+#include <linux/cdev.h>
#include <linux/poll.h>
#include <linux/mei.h>
#include <linux/mei_cl_bus.h>
@@ -26,33 +26,13 @@
#include "hw.h"
#include "hbm.h"
-/*
- * watch dog definition
- */
-#define MEI_WD_HDR_SIZE 4
-#define MEI_WD_STOP_MSG_SIZE MEI_WD_HDR_SIZE
-#define MEI_WD_START_MSG_SIZE (MEI_WD_HDR_SIZE + 16)
-
-#define MEI_WD_DEFAULT_TIMEOUT 120 /* seconds */
-#define MEI_WD_MIN_TIMEOUT 120 /* seconds */
-#define MEI_WD_MAX_TIMEOUT 65535 /* seconds */
-
-#define MEI_WD_STOP_TIMEOUT 10 /* msecs */
-
-#define MEI_WD_STATE_INDEPENDENCE_MSG_SENT (1 << 0)
-
-#define MEI_RD_MSG_BUF_SIZE (128 * sizeof(u32))
-
/*
* AMTHI Client UUID
*/
extern const uuid_le mei_amthif_guid;
-/*
- * Watchdog Client UUID
- */
-extern const uuid_le mei_wd_guid;
+#define MEI_RD_MSG_BUF_SIZE (128 * sizeof(u32))
/*
* Number of Maximum MEI Clients
@@ -73,15 +53,6 @@
*/
#define MEI_MAX_OPEN_HANDLE_COUNT (MEI_CLIENTS_MAX - 1)
-/*
- * Internal Clients Number
- */
-#define MEI_HOST_CLIENT_ID_ANY (-1)
-#define MEI_HBM_HOST_CLIENT_ID 0 /* not used, just for documentation */
-#define MEI_WD_HOST_CLIENT_ID 1
-#define MEI_IAMTHIF_HOST_CLIENT_ID 2
-
-
/* File state */
enum file_state {
MEI_FILE_INITIALIZING = 0,
@@ -123,12 +94,6 @@
MEI_READ_COMPLETE
};
-enum mei_wd_states {
- MEI_WD_IDLE,
- MEI_WD_RUNNING,
- MEI_WD_STOPPING,
-};
-
/**
* enum mei_cb_file_ops - file operation associated with the callback
* @MEI_FOP_READ: read
@@ -153,7 +118,7 @@
* Intel MEI message data struct
*/
struct mei_msg_data {
- u32 size;
+ size_t size;
unsigned char *data;
};
@@ -206,8 +171,7 @@
* @fop_type: file operation type
* @buf: buffer for data associated with the callback
* @buf_idx: last read index
- * @read_time: last read operation time stamp (iamthif)
- * @file_object: pointer to file structure
+ * @fp: pointer to file structure
* @status: io status of the cb
* @internal: communication between driver and FW flag
* @completed: the transfer or reception has completed
@@ -217,9 +181,8 @@
struct mei_cl *cl;
enum mei_cb_file_ops fop_type;
struct mei_msg_data buf;
- unsigned long buf_idx;
- unsigned long read_time;
- struct file *file_object;
+ size_t buf_idx;
+ const struct file *fp;
int status;
u32 internal:1;
u32 completed:1;
@@ -341,12 +304,13 @@
/* MEI bus API*/
void mei_cl_bus_rescan(struct mei_device *bus);
+void mei_cl_bus_rescan_work(struct work_struct *work);
void mei_cl_bus_dev_fixup(struct mei_cl_device *dev);
ssize_t __mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length,
bool blocking);
ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length);
-void mei_cl_bus_rx_event(struct mei_cl *cl);
-void mei_cl_bus_notify_event(struct mei_cl *cl);
+bool mei_cl_bus_rx_event(struct mei_cl *cl);
+bool mei_cl_bus_notify_event(struct mei_cl *cl);
void mei_cl_bus_remove_devices(struct mei_device *bus);
int mei_cl_bus_init(void);
void mei_cl_bus_exit(void);
@@ -404,7 +368,6 @@
* @wait_hw_ready : wait queue for receive HW ready message form FW
* @wait_pg : wait queue for receive PG message from FW
* @wait_hbm_start : wait queue for receive HBM start message from FW
- * @wait_stop_wd : wait queue for receive WD stop message from FW
*
* @reset_count : number of consecutive resets
* @dev_state : device state
@@ -426,6 +389,8 @@
* @hbm_f_dc_supported : hbm feature dynamic clients
* @hbm_f_dot_supported : hbm feature disconnect on timeout
* @hbm_f_ev_supported : hbm feature event notification
+ * @hbm_f_fa_supported : hbm feature fixed address client
+ * @hbm_f_ie_supported : hbm feature immediate reply to enum request
*
* @me_clients_rwsem: rw lock over me_clients list
* @me_clients : list of FW clients
@@ -434,26 +399,19 @@
* @me_client_index : last FW client index in enumeration
*
* @allow_fixed_address: allow user space to connect a fixed client
- *
- * @wd_cl : watchdog client
- * @wd_state : watchdog client state
- * @wd_pending : watchdog command is pending
- * @wd_timeout : watchdog expiration timeout
- * @wd_data : watchdog message buffer
+ * @override_fixed_address: force allow fixed address behavior
*
* @amthif_cmd_list : amthif list for cmd waiting
- * @amthif_rd_complete_list : amthif list for reading completed cmd data
- * @iamthif_file_object : file for current amthif operation
+ * @iamthif_fp : file for current amthif operation
* @iamthif_cl : amthif host client
* @iamthif_current_cb : amthif current operation callback
* @iamthif_open_count : number of opened amthif connections
- * @iamthif_timer : time stamp of current amthif command completion
* @iamthif_stall_timer : timer to detect amthif hang
* @iamthif_state : amthif processor state
* @iamthif_canceled : current amthif command is canceled
*
- * @init_work : work item for the device init
* @reset_work : work item for the device reset
+ * @bus_rescan_work : work item for the bus rescan
*
* @device_list : mei client bus list
* @cl_bus_lock : client bus list lock
@@ -486,7 +444,6 @@
wait_queue_head_t wait_hw_ready;
wait_queue_head_t wait_pg;
wait_queue_head_t wait_hbm_start;
- wait_queue_head_t wait_stop_wd;
/*
* mei device states
@@ -522,6 +479,8 @@
unsigned int hbm_f_dc_supported:1;
unsigned int hbm_f_dot_supported:1;
unsigned int hbm_f_ev_supported:1;
+ unsigned int hbm_f_fa_supported:1;
+ unsigned int hbm_f_ie_supported:1;
struct rw_semaphore me_clients_rwsem;
struct list_head me_clients;
@@ -530,29 +489,21 @@
unsigned long me_client_index;
bool allow_fixed_address;
-
- struct mei_cl wd_cl;
- enum mei_wd_states wd_state;
- bool wd_pending;
- u16 wd_timeout;
- unsigned char wd_data[MEI_WD_START_MSG_SIZE];
-
+ bool override_fixed_address;
/* amthif list for cmd waiting */
struct mei_cl_cb amthif_cmd_list;
/* driver managed amthif list for reading completed amthif cmd data */
- struct mei_cl_cb amthif_rd_complete_list;
- struct file *iamthif_file_object;
+ const struct file *iamthif_fp;
struct mei_cl iamthif_cl;
struct mei_cl_cb *iamthif_current_cb;
long iamthif_open_count;
- unsigned long iamthif_timer;
u32 iamthif_stall_timer;
enum iamthif_states iamthif_state;
bool iamthif_canceled;
- struct work_struct init_work;
struct work_struct reset_work;
+ struct work_struct bus_rescan_work;
/* List of bus devices */
struct list_head device_list;
@@ -635,47 +586,18 @@
int mei_amthif_release(struct mei_device *dev, struct file *file);
-struct mei_cl_cb *mei_amthif_find_read_list_entry(struct mei_device *dev,
- struct file *file);
-
int mei_amthif_write(struct mei_cl *cl, struct mei_cl_cb *cb);
int mei_amthif_run_next_cmd(struct mei_device *dev);
int mei_amthif_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);
-void mei_amthif_complete(struct mei_device *dev, struct mei_cl_cb *cb);
+void mei_amthif_complete(struct mei_cl *cl, struct mei_cl_cb *cb);
int mei_amthif_irq_read_msg(struct mei_cl *cl,
struct mei_msg_hdr *mei_hdr,
struct mei_cl_cb *complete_list);
int mei_amthif_irq_read(struct mei_device *dev, s32 *slots);
/*
- * NFC functions
- */
-int mei_nfc_host_init(struct mei_device *dev, struct mei_me_client *me_cl);
-void mei_nfc_host_exit(struct mei_device *dev);
-
-/*
- * NFC Client UUID
- */
-extern const uuid_le mei_nfc_guid;
-
-int mei_wd_send(struct mei_device *dev);
-int mei_wd_stop(struct mei_device *dev);
-int mei_wd_host_init(struct mei_device *dev, struct mei_me_client *me_cl);
-/*
- * mei_watchdog_register - Registering watchdog interface
- * once we got connection to the WD Client
- * @dev: mei device
- */
-int mei_watchdog_register(struct mei_device *dev);
-/*
- * mei_watchdog_unregister - Unregistering watchdog interface
- * @dev: mei device
- */
-void mei_watchdog_unregister(struct mei_device *dev);
-
-/*
* Register Access Function
*/
diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
index 75fc9c6..996344f 100644
--- a/drivers/misc/mei/pci-me.c
+++ b/drivers/misc/mei/pci-me.c
@@ -210,7 +210,7 @@
err = mei_register(dev, &pdev->dev);
if (err)
- goto release_irq;
+ goto stop;
pci_set_drvdata(pdev, dev);
@@ -231,6 +231,8 @@
return 0;
+stop:
+ mei_stop(dev);
release_irq:
mei_cancel_work(dev);
mei_disable_interrupts(dev);
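
Both probe fixes (here and in pci-txe.c below) address the same unwind bug: once mei_start() has run, a mei_register() failure must pass through mei_stop() before the IRQ teardown, not jump straight to release_irq. A minimal sketch of that labelled-cleanup ordering (functions are stand-ins):

#include <stdio.h>

static int dev_start(void)     { puts("start device"); return 0; }
static int dev_register(void)  { puts("register");     return -1; } /* force failure */
static void dev_stop(void)     { puts("stop device"); }
static void irq_release(void)  { puts("release irq"); }

int main(void)
{
        int err = dev_start();

        if (err)
                goto release_irq;

        err = dev_register();
        if (err)
                goto stop;      /* previously jumped straight to release_irq */

        return 0;

stop:
        dev_stop();             /* undo dev_start() before releasing the irq */
release_irq:
        irq_release();
        return err;
}
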
diff --git a/drivers/misc/mei/pci-txe.c b/drivers/misc/mei/pci-txe.c
index 71f8a74..30cc306 100644
--- a/drivers/misc/mei/pci-txe.c
+++ b/drivers/misc/mei/pci-txe.c
@@ -154,7 +154,7 @@
err = mei_register(dev, &pdev->dev);
if (err)
- goto release_irq;
+ goto stop;
pci_set_drvdata(pdev, dev);
@@ -170,6 +170,8 @@
return 0;
+stop:
+ mei_stop(dev);
release_irq:
mei_cancel_work(dev);
diff --git a/drivers/misc/mei/wd.c b/drivers/misc/mei/wd.c
deleted file mode 100644
index b346638..0000000
--- a/drivers/misc/mei/wd.c
+++ /dev/null
@@ -1,391 +0,0 @@
-/*
- *
- * Intel Management Engine Interface (Intel MEI) Linux driver
- * Copyright (c) 2003-2012, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- */
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/device.h>
-#include <linux/sched.h>
-#include <linux/watchdog.h>
-
-#include <linux/mei.h>
-
-#include "mei_dev.h"
-#include "hbm.h"
-#include "client.h"
-
-static const u8 mei_start_wd_params[] = { 0x02, 0x12, 0x13, 0x10 };
-static const u8 mei_stop_wd_params[] = { 0x02, 0x02, 0x14, 0x10 };
-
-/*
- * AMT Watchdog Device
- */
-#define INTEL_AMT_WATCHDOG_ID "INTCAMT"
-
-/* UUIDs for AMT F/W clients */
-const uuid_le mei_wd_guid = UUID_LE(0x05B79A6F, 0x4628, 0x4D7F, 0x89,
- 0x9D, 0xA9, 0x15, 0x14, 0xCB,
- 0x32, 0xAB);
-
-static void mei_wd_set_start_timeout(struct mei_device *dev, u16 timeout)
-{
- dev_dbg(dev->dev, "wd: set timeout=%d.\n", timeout);
- memcpy(dev->wd_data, mei_start_wd_params, MEI_WD_HDR_SIZE);
- memcpy(dev->wd_data + MEI_WD_HDR_SIZE, &timeout, sizeof(u16));
-}
-
-/**
- * mei_wd_host_init - connect to the watchdog client
- *
- * @dev: the device structure
- * @me_cl: me client
- *
- * Return: -ENOTTY if wd client cannot be found
- * -EIO if write has failed
- * 0 on success
- */
-int mei_wd_host_init(struct mei_device *dev, struct mei_me_client *me_cl)
-{
- struct mei_cl *cl = &dev->wd_cl;
- int ret;
-
- mei_cl_init(cl, dev);
-
- dev->wd_timeout = MEI_WD_DEFAULT_TIMEOUT;
- dev->wd_state = MEI_WD_IDLE;
-
- ret = mei_cl_link(cl, MEI_WD_HOST_CLIENT_ID);
- if (ret < 0) {
- dev_info(dev->dev, "wd: failed link client\n");
- return ret;
- }
-
- ret = mei_cl_connect(cl, me_cl, NULL);
- if (ret) {
- dev_err(dev->dev, "wd: failed to connect = %d\n", ret);
- mei_cl_unlink(cl);
- return ret;
- }
-
- ret = mei_watchdog_register(dev);
- if (ret) {
- mei_cl_disconnect(cl);
- mei_cl_unlink(cl);
- }
- return ret;
-}
-
-/**
- * mei_wd_send - sends watch dog message to fw.
- *
- * @dev: the device structure
- *
- * Return: 0 if success,
- * -EIO when message send fails
- * -EINVAL when invalid message is to be sent
- * -ENODEV on flow control failure
- */
-int mei_wd_send(struct mei_device *dev)
-{
- struct mei_cl *cl = &dev->wd_cl;
- struct mei_msg_hdr hdr;
- int ret;
-
- hdr.host_addr = cl->host_client_id;
- hdr.me_addr = mei_cl_me_id(cl);
- hdr.msg_complete = 1;
- hdr.reserved = 0;
- hdr.internal = 0;
-
- if (!memcmp(dev->wd_data, mei_start_wd_params, MEI_WD_HDR_SIZE))
- hdr.length = MEI_WD_START_MSG_SIZE;
- else if (!memcmp(dev->wd_data, mei_stop_wd_params, MEI_WD_HDR_SIZE))
- hdr.length = MEI_WD_STOP_MSG_SIZE;
- else {
- dev_err(dev->dev, "wd: invalid message is to be sent, aborting\n");
- return -EINVAL;
- }
-
- ret = mei_write_message(dev, &hdr, dev->wd_data);
- if (ret) {
- dev_err(dev->dev, "wd: write message failed\n");
- return ret;
- }
-
- ret = mei_cl_flow_ctrl_reduce(cl);
- if (ret) {
- dev_err(dev->dev, "wd: flow_ctrl_reduce failed.\n");
- return ret;
- }
-
- return 0;
-}
-
-/**
- * mei_wd_stop - sends watchdog stop message to fw.
- *
- * @dev: the device structure
- *
- * Return: 0 if success
- * on error:
- * -EIO when message send fails
- * -EINVAL when invalid message is to be sent
- * -ETIME on message timeout
- */
-int mei_wd_stop(struct mei_device *dev)
-{
- struct mei_cl *cl = &dev->wd_cl;
- int ret;
-
- if (!mei_cl_is_connected(cl) ||
- dev->wd_state != MEI_WD_RUNNING)
- return 0;
-
- memcpy(dev->wd_data, mei_stop_wd_params, MEI_WD_STOP_MSG_SIZE);
-
- dev->wd_state = MEI_WD_STOPPING;
-
- ret = mei_cl_flow_ctrl_creds(cl);
- if (ret < 0)
- goto err;
-
- if (ret && mei_hbuf_acquire(dev)) {
- ret = mei_wd_send(dev);
- if (ret)
- goto err;
- dev->wd_pending = false;
- } else {
- dev->wd_pending = true;
- }
-
- mutex_unlock(&dev->device_lock);
-
- ret = wait_event_timeout(dev->wait_stop_wd,
- dev->wd_state == MEI_WD_IDLE,
- msecs_to_jiffies(MEI_WD_STOP_TIMEOUT));
- mutex_lock(&dev->device_lock);
- if (dev->wd_state != MEI_WD_IDLE) {
- /* timeout */
- ret = -ETIME;
- dev_warn(dev->dev, "wd: stop failed to complete ret=%d\n", ret);
- goto err;
- }
- dev_dbg(dev->dev, "wd: stop completed after %u msec\n",
- MEI_WD_STOP_TIMEOUT - jiffies_to_msecs(ret));
- return 0;
-err:
- return ret;
-}
-
-/**
- * mei_wd_ops_start - wd start command from the watchdog core.
- *
- * @wd_dev: watchdog device struct
- *
- * Return: 0 if success, negative errno code for failure
- */
-static int mei_wd_ops_start(struct watchdog_device *wd_dev)
-{
- struct mei_device *dev;
- struct mei_cl *cl;
- int err = -ENODEV;
-
- dev = watchdog_get_drvdata(wd_dev);
- if (!dev)
- return -ENODEV;
-
- cl = &dev->wd_cl;
-
- mutex_lock(&dev->device_lock);
-
- if (dev->dev_state != MEI_DEV_ENABLED) {
- dev_dbg(dev->dev, "wd: dev_state != MEI_DEV_ENABLED dev_state = %s\n",
- mei_dev_state_str(dev->dev_state));
- goto end_unlock;
- }
-
- if (!mei_cl_is_connected(cl)) {
- cl_dbg(dev, cl, "MEI Driver is not connected to Watchdog Client\n");
- goto end_unlock;
- }
-
- mei_wd_set_start_timeout(dev, dev->wd_timeout);
-
- err = 0;
-end_unlock:
- mutex_unlock(&dev->device_lock);
- return err;
-}
-
-/**
- * mei_wd_ops_stop - wd stop command from the watchdog core.
- *
- * @wd_dev: watchdog device struct
- *
- * Return: 0 if success, negative errno code for failure
- */
-static int mei_wd_ops_stop(struct watchdog_device *wd_dev)
-{
- struct mei_device *dev;
-
- dev = watchdog_get_drvdata(wd_dev);
- if (!dev)
- return -ENODEV;
-
- mutex_lock(&dev->device_lock);
- mei_wd_stop(dev);
- mutex_unlock(&dev->device_lock);
-
- return 0;
-}
-
-/**
- * mei_wd_ops_ping - wd ping command from the watchdog core.
- *
- * @wd_dev: watchdog device struct
- *
- * Return: 0 if success, negative errno code for failure
- */
-static int mei_wd_ops_ping(struct watchdog_device *wd_dev)
-{
- struct mei_device *dev;
- struct mei_cl *cl;
- int ret;
-
- dev = watchdog_get_drvdata(wd_dev);
- if (!dev)
- return -ENODEV;
-
- cl = &dev->wd_cl;
-
- mutex_lock(&dev->device_lock);
-
- if (!mei_cl_is_connected(cl)) {
- cl_err(dev, cl, "wd: not connected.\n");
- ret = -ENODEV;
- goto end;
- }
-
- dev->wd_state = MEI_WD_RUNNING;
-
- ret = mei_cl_flow_ctrl_creds(cl);
- if (ret < 0)
- goto end;
-
- /* Check if we can send the ping to HW*/
- if (ret && mei_hbuf_acquire(dev)) {
- dev_dbg(dev->dev, "wd: sending ping\n");
-
- ret = mei_wd_send(dev);
- if (ret)
- goto end;
- dev->wd_pending = false;
- } else {
- dev->wd_pending = true;
- }
-
-end:
- mutex_unlock(&dev->device_lock);
- return ret;
-}
-
-/**
- * mei_wd_ops_set_timeout - wd set timeout command from the watchdog core.
- *
- * @wd_dev: watchdog device struct
- * @timeout: timeout value to set
- *
- * Return: 0 if success, negative errno code for failure
- */
-static int mei_wd_ops_set_timeout(struct watchdog_device *wd_dev,
- unsigned int timeout)
-{
- struct mei_device *dev;
-
- dev = watchdog_get_drvdata(wd_dev);
- if (!dev)
- return -ENODEV;
-
- /* Check Timeout value */
- if (timeout < MEI_WD_MIN_TIMEOUT || timeout > MEI_WD_MAX_TIMEOUT)
- return -EINVAL;
-
- mutex_lock(&dev->device_lock);
-
- dev->wd_timeout = timeout;
- wd_dev->timeout = timeout;
- mei_wd_set_start_timeout(dev, dev->wd_timeout);
-
- mutex_unlock(&dev->device_lock);
-
- return 0;
-}
-
-/*
- * Watchdog Device structs
- */
-static const struct watchdog_ops wd_ops = {
- .owner = THIS_MODULE,
- .start = mei_wd_ops_start,
- .stop = mei_wd_ops_stop,
- .ping = mei_wd_ops_ping,
- .set_timeout = mei_wd_ops_set_timeout,
-};
-static const struct watchdog_info wd_info = {
- .identity = INTEL_AMT_WATCHDOG_ID,
- .options = WDIOF_KEEPALIVEPING |
- WDIOF_SETTIMEOUT |
- WDIOF_ALARMONLY,
-};
-
-static struct watchdog_device amt_wd_dev = {
- .info = &wd_info,
- .ops = &wd_ops,
- .timeout = MEI_WD_DEFAULT_TIMEOUT,
- .min_timeout = MEI_WD_MIN_TIMEOUT,
- .max_timeout = MEI_WD_MAX_TIMEOUT,
-};
-
-
-int mei_watchdog_register(struct mei_device *dev)
-{
-
- int ret;
-
- amt_wd_dev.parent = dev->dev;
- /* unlock to preserve correct locking order */
- mutex_unlock(&dev->device_lock);
- ret = watchdog_register_device(&amt_wd_dev);
- mutex_lock(&dev->device_lock);
- if (ret) {
- dev_err(dev->dev, "wd: unable to register watchdog device = %d.\n",
- ret);
- return ret;
- }
-
- dev_dbg(dev->dev, "wd: successfully register watchdog interface.\n");
- watchdog_set_drvdata(&amt_wd_dev, dev);
- return 0;
-}
-
-void mei_watchdog_unregister(struct mei_device *dev)
-{
- if (watchdog_get_drvdata(&amt_wd_dev) == NULL)
- return;
-
- watchdog_set_drvdata(&amt_wd_dev, NULL);
- watchdog_unregister_device(&amt_wd_dev);
-}
-
diff --git a/drivers/misc/mic/Kconfig b/drivers/misc/mic/Kconfig
index 40677df..2e4f3ba 100644
--- a/drivers/misc/mic/Kconfig
+++ b/drivers/misc/mic/Kconfig
@@ -32,12 +32,29 @@
OS and tools for MIC to use with this driver are available from
<http://software.intel.com/en-us/mic-developer>.
+comment "VOP Bus Driver"
+
+config VOP_BUS
+ tristate "VOP Bus Driver"
+ depends on 64BIT && PCI && X86 && X86_DEV_DMA_OPS
+ help
+ This option is selected by any driver which registers a
+ device or driver on the VOP Bus, such as CONFIG_INTEL_MIC_HOST
+ and CONFIG_INTEL_MIC_CARD.
+
+ If you are building a host/card kernel with an Intel MIC device
+ then say M (recommended) or Y, else say N. If unsure say N.
+
+ More information about the Intel MIC family as well as the Linux
+ OS and tools for MIC to use with this driver are available from
+ <http://software.intel.com/en-us/mic-developer>.
+
comment "Intel MIC Host Driver"
config INTEL_MIC_HOST
tristate "Intel MIC Host Driver"
- depends on 64BIT && PCI && X86 && INTEL_MIC_BUS && SCIF_BUS && MIC_COSM
- select VHOST_RING
+ depends on 64BIT && PCI && X86
+ depends on INTEL_MIC_BUS && SCIF_BUS && MIC_COSM && VOP_BUS
help
This enables Host Driver support for the Intel Many Integrated
Core (MIC) family of PCIe form factor coprocessor devices that
@@ -56,7 +73,8 @@
config INTEL_MIC_CARD
tristate "Intel MIC Card Driver"
- depends on 64BIT && X86 && INTEL_MIC_BUS && SCIF_BUS && MIC_COSM
+ depends on 64BIT && X86
+ depends on INTEL_MIC_BUS && SCIF_BUS && MIC_COSM && VOP_BUS
select VIRTIO
help
This enables card driver support for the Intel Many Integrated
@@ -107,3 +125,23 @@
More information about the Intel MIC family as well as the Linux
OS and tools for MIC to use with this driver are available from
<http://software.intel.com/en-us/mic-developer>.
+
+comment "VOP Driver"
+
+config VOP
+ tristate "VOP Driver"
+ depends on 64BIT && PCI && X86 && VOP_BUS
+ select VHOST_RING
+ help
+ This enables VOP (Virtio over PCIe) Driver support for the Intel
+ Many Integrated Core (MIC) family of PCIe form factor coprocessor
+ devices. The VOP driver allows virtio drivers, e.g. net, console
+ and block drivers, on the card to connect to user space virtio
+ devices on the host.
+
+ If you are building a host kernel with an Intel MIC device then
+ say M (recommended) or Y, else say N. If unsure say N.
+
+ More information about the Intel MIC family as well as the Linux
+ OS and tools for MIC to use with this driver are available from
+ <http://software.intel.com/en-us/mic-developer>.
diff --git a/drivers/misc/mic/Makefile b/drivers/misc/mic/Makefile
index e288a11..f2b1323 100644
--- a/drivers/misc/mic/Makefile
+++ b/drivers/misc/mic/Makefile
@@ -8,3 +8,4 @@
obj-$(CONFIG_SCIF) += scif/
obj-$(CONFIG_MIC_COSM) += cosm/
obj-$(CONFIG_MIC_COSM) += cosm_client/
+obj-$(CONFIG_VOP) += vop/
diff --git a/drivers/misc/mic/bus/Makefile b/drivers/misc/mic/bus/Makefile
index 761842b..8758a7d 100644
--- a/drivers/misc/mic/bus/Makefile
+++ b/drivers/misc/mic/bus/Makefile
@@ -5,3 +5,4 @@
obj-$(CONFIG_INTEL_MIC_BUS) += mic_bus.o
obj-$(CONFIG_SCIF_BUS) += scif_bus.o
obj-$(CONFIG_MIC_COSM) += cosm_bus.o
+obj-$(CONFIG_VOP_BUS) += vop_bus.o
diff --git a/drivers/misc/mic/bus/cosm_bus.h b/drivers/misc/mic/bus/cosm_bus.h
index f7c57f2..8b63418 100644
--- a/drivers/misc/mic/bus/cosm_bus.h
+++ b/drivers/misc/mic/bus/cosm_bus.h
@@ -30,6 +30,7 @@
* @attr_group: Pointer to list of sysfs attribute groups.
* @sdev: Device for sysfs entries.
* @state: MIC state.
+ * @prev_state: MIC state previous to MIC_RESETTING
* @shutdown_status: MIC status reported by card for shutdown/crashes.
* @shutdown_status_int: Internal shutdown status maintained by the driver
* @cosm_mutex: Mutex for synchronizing access to data structures.
@@ -55,6 +56,7 @@
const struct attribute_group **attr_group;
struct device *sdev;
u8 state;
+ u8 prev_state;
u8 shutdown_status;
u8 shutdown_status_int;
struct mutex cosm_mutex;
diff --git a/drivers/misc/mic/bus/vop_bus.c b/drivers/misc/mic/bus/vop_bus.c
new file mode 100644
index 0000000..303da22
--- /dev/null
+++ b/drivers/misc/mic/bus/vop_bus.c
@@ -0,0 +1,203 @@
+/*
+ * Intel MIC Platform Software Stack (MPSS)
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Intel Virtio Over PCIe (VOP) Bus driver.
+ */
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/idr.h>
+#include <linux/dma-mapping.h>
+
+#include "vop_bus.h"
+
+static ssize_t device_show(struct device *d,
+ struct device_attribute *attr, char *buf)
+{
+ struct vop_device *dev = dev_to_vop(d);
+
+ return sprintf(buf, "0x%04x\n", dev->id.device);
+}
+static DEVICE_ATTR_RO(device);
+
+static ssize_t vendor_show(struct device *d,
+ struct device_attribute *attr, char *buf)
+{
+ struct vop_device *dev = dev_to_vop(d);
+
+ return sprintf(buf, "0x%04x\n", dev->id.vendor);
+}
+static DEVICE_ATTR_RO(vendor);
+
+static ssize_t modalias_show(struct device *d,
+ struct device_attribute *attr, char *buf)
+{
+ struct vop_device *dev = dev_to_vop(d);
+
+ return sprintf(buf, "vop:d%08Xv%08X\n",
+ dev->id.device, dev->id.vendor);
+}
+static DEVICE_ATTR_RO(modalias);
+
+static struct attribute *vop_dev_attrs[] = {
+ &dev_attr_device.attr,
+ &dev_attr_vendor.attr,
+ &dev_attr_modalias.attr,
+ NULL,
+};
+ATTRIBUTE_GROUPS(vop_dev);
+
+static inline int vop_id_match(const struct vop_device *dev,
+ const struct vop_device_id *id)
+{
+ if (id->device != dev->id.device && id->device != VOP_DEV_ANY_ID)
+ return 0;
+
+ return id->vendor == VOP_DEV_ANY_ID || id->vendor == dev->id.vendor;
+}
+
+/*
+ * This looks through all the IDs a driver claims to support. If any of them
+ * match, we return 1 and the kernel will call vop_dev_probe().
+ */
+static int vop_dev_match(struct device *dv, struct device_driver *dr)
+{
+ unsigned int i;
+ struct vop_device *dev = dev_to_vop(dv);
+ const struct vop_device_id *ids;
+
+ ids = drv_to_vop(dr)->id_table;
+ for (i = 0; ids[i].device; i++)
+ if (vop_id_match(dev, &ids[i]))
+ return 1;
+ return 0;
+}
+
+static int vop_uevent(struct device *dv, struct kobj_uevent_env *env)
+{
+ struct vop_device *dev = dev_to_vop(dv);
+
+ return add_uevent_var(env, "MODALIAS=vop:d%08Xv%08X",
+ dev->id.device, dev->id.vendor);
+}
+
+static int vop_dev_probe(struct device *d)
+{
+ struct vop_device *dev = dev_to_vop(d);
+ struct vop_driver *drv = drv_to_vop(dev->dev.driver);
+
+ return drv->probe(dev);
+}
+
+static int vop_dev_remove(struct device *d)
+{
+ struct vop_device *dev = dev_to_vop(d);
+ struct vop_driver *drv = drv_to_vop(dev->dev.driver);
+
+ drv->remove(dev);
+ return 0;
+}
+
+static struct bus_type vop_bus = {
+ .name = "vop_bus",
+ .match = vop_dev_match,
+ .dev_groups = vop_dev_groups,
+ .uevent = vop_uevent,
+ .probe = vop_dev_probe,
+ .remove = vop_dev_remove,
+};
+
+int vop_register_driver(struct vop_driver *driver)
+{
+ driver->driver.bus = &vop_bus;
+ return driver_register(&driver->driver);
+}
+EXPORT_SYMBOL_GPL(vop_register_driver);
+
+void vop_unregister_driver(struct vop_driver *driver)
+{
+ driver_unregister(&driver->driver);
+}
+EXPORT_SYMBOL_GPL(vop_unregister_driver);
+
+static void vop_release_dev(struct device *d)
+{
+ put_device(d);
+}
+
+struct vop_device *
+vop_register_device(struct device *pdev, int id,
+ const struct dma_map_ops *dma_ops,
+ struct vop_hw_ops *hw_ops, u8 dnode, struct mic_mw *aper,
+ struct dma_chan *chan)
+{
+ int ret;
+ struct vop_device *vdev;
+
+ vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+ if (!vdev)
+ return ERR_PTR(-ENOMEM);
+
+ vdev->dev.parent = pdev;
+ vdev->id.device = id;
+ vdev->id.vendor = VOP_DEV_ANY_ID;
+ vdev->dev.archdata.dma_ops = (struct dma_map_ops *)dma_ops;
+ vdev->dev.dma_mask = &vdev->dev.coherent_dma_mask;
+ dma_set_mask(&vdev->dev, DMA_BIT_MASK(64));
+ vdev->dev.release = vop_release_dev;
+ vdev->hw_ops = hw_ops;
+ vdev->dev.bus = &vop_bus;
+ vdev->dnode = dnode;
+ vdev->aper = aper;
+ vdev->dma_ch = chan;
+ vdev->index = dnode - 1;
+ dev_set_name(&vdev->dev, "vop-dev%u", vdev->index);
+ /*
+ * device_register() causes the bus infrastructure to look for a
+ * matching driver.
+ */
+ ret = device_register(&vdev->dev);
+ if (ret)
+ goto free_vdev;
+ return vdev;
+free_vdev:
+ kfree(vdev);
+ return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(vop_register_device);
+
+void vop_unregister_device(struct vop_device *dev)
+{
+ device_unregister(&dev->dev);
+}
+EXPORT_SYMBOL_GPL(vop_unregister_device);
+
+static int __init vop_init(void)
+{
+ return bus_register(&vop_bus);
+}
+
+static void __exit vop_exit(void)
+{
+ bus_unregister(&vop_bus);
+}
+
+core_initcall(vop_init);
+module_exit(vop_exit);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("Intel(R) VOP Bus driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/mic/bus/vop_bus.h b/drivers/misc/mic/bus/vop_bus.h
new file mode 100644
index 0000000..fff7a86
--- /dev/null
+++ b/drivers/misc/mic/bus/vop_bus.h
@@ -0,0 +1,140 @@
+/*
+ * Intel MIC Platform Software Stack (MPSS)
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Intel Virtio over PCIe Bus driver.
+ */
+#ifndef _VOP_BUS_H_
+#define _VOP_BUS_H_
+/*
+ * Everything a vop driver needs to work with any particular vop
+ * implementation.
+ */
+#include <linux/dmaengine.h>
+#include <linux/interrupt.h>
+
+#include "../common/mic_dev.h"
+
+struct vop_device_id {
+ u32 device;
+ u32 vendor;
+};
+
+#define VOP_DEV_TRNSP 1
+#define VOP_DEV_ANY_ID 0xffffffff
+/*
+ * Size of the internal buffer used during DMAs as an intermediate buffer
+ * for copy to/from user. Must be an integral number of pages.
+ */
+#define VOP_INT_DMA_BUF_SIZE PAGE_ALIGN(64 * 1024ULL)
+
+/**
+ * vop_device - representation of a device using vop
+ * @hw_ops: the hardware ops supported by this device.
+ * @id: the device type identification (used to match it with a driver).
+ * @dev: underlying device.
+ * @dnode: The destination node which this device will communicate with.
+ * @aper: Aperture memory window
+ * @dma_ch: DMA channel
+ * @index: unique position on the vop bus
+ */
+struct vop_device {
+ struct vop_hw_ops *hw_ops;
+ struct vop_device_id id;
+ struct device dev;
+ u8 dnode;
+ struct mic_mw *aper;
+ struct dma_chan *dma_ch;
+ int index;
+};
+
+/**
+ * vop_driver - operations for a vop I/O driver
+ * @driver: underlying device driver (populate name and owner).
+ * @id_table: the ids serviced by this driver.
+ * @probe: the function to call when a device is found. Returns 0 or -errno.
+ * @remove: the function to call when a device is removed.
+ */
+struct vop_driver {
+ struct device_driver driver;
+ const struct vop_device_id *id_table;
+ int (*probe)(struct vop_device *dev);
+ void (*remove)(struct vop_device *dev);
+};
+
+/**
+ * vop_hw_ops - Hardware operations for accessing a VOP device on the VOP bus.
+ *
+ * @next_db: Obtain the next available doorbell.
+ * @request_irq: Request an interrupt on a particular doorbell.
+ * @free_irq: Free an interrupt requested previously.
+ * @ack_interrupt: acknowledge an interrupt in the ISR.
+ * @get_remote_dp: Get access to the virtio device page used by the remote
+ * node to add/remove/configure virtio devices.
+ * @get_dp: Get access to the virtio device page used by the self
+ * node to add/remove/configure virtio devices.
+ * @send_intr: Send an interrupt to the peer node on a specified doorbell.
+ * @ioremap: Map a buffer with the specified DMA address and length.
+ * @iounmap: Unmap a buffer previously mapped.
+ * @dma_filter: The DMA filter function to use for obtaining access to
+ * a DMA channel on the peer node.
+ */
+struct vop_hw_ops {
+ int (*next_db)(struct vop_device *vpdev);
+ struct mic_irq *(*request_irq)(struct vop_device *vpdev,
+ irqreturn_t (*func)(int irq, void *data),
+ const char *name, void *data,
+ int intr_src);
+ void (*free_irq)(struct vop_device *vpdev,
+ struct mic_irq *cookie, void *data);
+ void (*ack_interrupt)(struct vop_device *vpdev, int num);
+ void __iomem * (*get_remote_dp)(struct vop_device *vpdev);
+ void * (*get_dp)(struct vop_device *vpdev);
+ void (*send_intr)(struct vop_device *vpdev, int db);
+ void __iomem * (*ioremap)(struct vop_device *vpdev,
+ dma_addr_t pa, size_t len);
+ void (*iounmap)(struct vop_device *vpdev, void __iomem *va);
+};
+
+struct vop_device *
+vop_register_device(struct device *pdev, int id,
+ const struct dma_map_ops *dma_ops,
+ struct vop_hw_ops *hw_ops, u8 dnode, struct mic_mw *aper,
+ struct dma_chan *chan);
+void vop_unregister_device(struct vop_device *dev);
+int vop_register_driver(struct vop_driver *drv);
+void vop_unregister_driver(struct vop_driver *drv);
+
+/*
+ * module_vop_driver() - Helper macro for drivers that don't do
+ * anything special in module init/exit. This eliminates a lot of
+ * boilerplate. Each module may only use this macro once, and
+ * calling it replaces module_init() and module_exit()
+ */
+#define module_vop_driver(__vop_driver) \
+ module_driver(__vop_driver, vop_register_driver, \
+ vop_unregister_driver)
+
+static inline struct vop_device *dev_to_vop(struct device *dev)
+{
+ return container_of(dev, struct vop_device, dev);
+}
+
+static inline struct vop_driver *drv_to_vop(struct device_driver *drv)
+{
+ return container_of(drv, struct vop_driver, driver);
+}
+#endif /* _VOP_BUS_H_ */
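
To make the bus API above concrete, here is a minimal sketch of a client driver; the my_vop_* names are hypothetical and a real transport driver would do its virtio/vringh setup in probe():

    static const struct vop_device_id my_vop_id_table[] = {
    	{ VOP_DEV_TRNSP, VOP_DEV_ANY_ID },
    	{ 0 },
    };

    static int my_vop_probe(struct vop_device *vpdev)
    {
    	/* e.g. pick up the local device page via vpdev->hw_ops->get_dp() */
    	return 0;
    }

    static void my_vop_remove(struct vop_device *vpdev)
    {
    }

    static struct vop_driver my_vop_driver = {
    	.driver.name	= KBUILD_MODNAME,
    	.driver.owner	= THIS_MODULE,
    	.id_table	= my_vop_id_table,
    	.probe		= my_vop_probe,
    	.remove		= my_vop_remove,
    };
    module_vop_driver(my_vop_driver);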
diff --git a/drivers/misc/mic/card/Makefile b/drivers/misc/mic/card/Makefile
index 69d58be..6e9675e 100644
--- a/drivers/misc/mic/card/Makefile
+++ b/drivers/misc/mic/card/Makefile
@@ -8,4 +8,3 @@
mic_card-y += mic_x100.o
mic_card-y += mic_device.o
mic_card-y += mic_debugfs.o
-mic_card-y += mic_virtio.o
diff --git a/drivers/misc/mic/card/mic_device.c b/drivers/misc/mic/card/mic_device.c
index d0edaf7..e749af48 100644
--- a/drivers/misc/mic/card/mic_device.c
+++ b/drivers/misc/mic/card/mic_device.c
@@ -34,7 +34,6 @@
#include <linux/mic_common.h>
#include "../common/mic_dev.h"
#include "mic_device.h"
-#include "mic_virtio.h"
static struct mic_driver *g_drv;
@@ -250,12 +249,82 @@
.iounmap = ___mic_iounmap,
};
+static inline struct mic_driver *vpdev_to_mdrv(struct vop_device *vpdev)
+{
+ return dev_get_drvdata(vpdev->dev.parent);
+}
+
+static struct mic_irq *
+__mic_request_irq(struct vop_device *vpdev,
+ irqreturn_t (*func)(int irq, void *data),
+ const char *name, void *data, int intr_src)
+{
+ return mic_request_card_irq(func, NULL, name, data, intr_src);
+}
+
+static void __mic_free_irq(struct vop_device *vpdev,
+ struct mic_irq *cookie, void *data)
+{
+ return mic_free_card_irq(cookie, data);
+}
+
+static void __mic_ack_interrupt(struct vop_device *vpdev, int num)
+{
+ struct mic_driver *mdrv = vpdev_to_mdrv(vpdev);
+
+ mic_ack_interrupt(&mdrv->mdev);
+}
+
+static int __mic_next_db(struct vop_device *vpdev)
+{
+ return mic_next_card_db();
+}
+
+static void __iomem *__mic_get_remote_dp(struct vop_device *vpdev)
+{
+ struct mic_driver *mdrv = vpdev_to_mdrv(vpdev);
+
+ return mdrv->dp;
+}
+
+static void __mic_send_intr(struct vop_device *vpdev, int db)
+{
+ struct mic_driver *mdrv = vpdev_to_mdrv(vpdev);
+
+ mic_send_intr(&mdrv->mdev, db);
+}
+
+static void __iomem *__mic_ioremap(struct vop_device *vpdev,
+ dma_addr_t pa, size_t len)
+{
+ struct mic_driver *mdrv = vpdev_to_mdrv(vpdev);
+
+ return mic_card_map(&mdrv->mdev, pa, len);
+}
+
+static void __mic_iounmap(struct vop_device *vpdev, void __iomem *va)
+{
+ struct mic_driver *mdrv = vpdev_to_mdrv(vpdev);
+
+ mic_card_unmap(&mdrv->mdev, va);
+}
+
+static struct vop_hw_ops vop_hw_ops = {
+ .request_irq = __mic_request_irq,
+ .free_irq = __mic_free_irq,
+ .ack_interrupt = __mic_ack_interrupt,
+ .next_db = __mic_next_db,
+ .get_remote_dp = __mic_get_remote_dp,
+ .send_intr = __mic_send_intr,
+ .ioremap = __mic_ioremap,
+ .iounmap = __mic_iounmap,
+};
+
static int mic_request_dma_chans(struct mic_driver *mdrv)
{
dma_cap_mask_t mask;
struct dma_chan *chan;
- request_module("mic_x100_dma");
dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
@@ -309,9 +378,13 @@
rc = -ENODEV;
goto irq_uninit;
}
- rc = mic_devices_init(mdrv);
- if (rc)
+ mdrv->vpdev = vop_register_device(mdrv->dev, VOP_DEV_TRNSP,
+ NULL, &vop_hw_ops, 0,
+ NULL, mdrv->dma_ch[0]);
+ if (IS_ERR(mdrv->vpdev)) {
+ rc = PTR_ERR(mdrv->vpdev);
goto dma_free;
+ }
bootparam = mdrv->dp;
node_id = ioread8(&bootparam->node_id);
mdrv->scdev = scif_register_device(mdrv->dev, MIC_SCIF_DEV,
@@ -321,13 +394,13 @@
mdrv->num_dma_ch, true);
if (IS_ERR(mdrv->scdev)) {
rc = PTR_ERR(mdrv->scdev);
- goto device_uninit;
+ goto vop_remove;
}
mic_create_card_debug_dir(mdrv);
done:
return rc;
-device_uninit:
- mic_devices_uninit(mdrv);
+vop_remove:
+ vop_unregister_device(mdrv->vpdev);
dma_free:
mic_free_dma_chans(mdrv);
irq_uninit:
@@ -348,7 +421,7 @@
{
mic_delete_card_debug_dir(mdrv);
scif_unregister_device(mdrv->scdev);
- mic_devices_uninit(mdrv);
+ vop_unregister_device(mdrv->vpdev);
mic_free_dma_chans(mdrv);
mic_uninit_irq();
mic_dp_uninit();
diff --git a/drivers/misc/mic/card/mic_device.h b/drivers/misc/mic/card/mic_device.h
index 1dbf83c..333dbed 100644
--- a/drivers/misc/mic/card/mic_device.h
+++ b/drivers/misc/mic/card/mic_device.h
@@ -32,6 +32,7 @@
#include <linux/interrupt.h>
#include <linux/mic_bus.h>
#include "../bus/scif_bus.h"
+#include "../bus/vop_bus.h"
/**
* struct mic_intr_info - Contains h/w specific interrupt sources info
@@ -76,6 +77,7 @@
* @dma_ch - Array of DMA channels
* @num_dma_ch - Number of DMA channels available
* @scdev: SCIF device on the SCIF virtual bus.
+ * @vpdev: Virtio over PCIe device on the VOP virtual bus.
*/
struct mic_driver {
char name[20];
@@ -90,6 +92,7 @@
struct dma_chan *dma_ch[MIC_MAX_DMA_CHAN];
int num_dma_ch;
struct scif_hw_dev *scdev;
+ struct vop_device *vpdev;
};
/**
diff --git a/drivers/misc/mic/card/mic_virtio.c b/drivers/misc/mic/card/mic_virtio.c
deleted file mode 100644
index f6ed57d..0000000
--- a/drivers/misc/mic/card/mic_virtio.c
+++ /dev/null
@@ -1,634 +0,0 @@
-/*
- * Intel MIC Platform Software Stack (MPSS)
- *
- * Copyright(c) 2013 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * The full GNU General Public License is included in this distribution in
- * the file called "COPYING".
- *
- * Disclaimer: The codes contained in these modules may be specific to
- * the Intel Software Development Platform codenamed: Knights Ferry, and
- * the Intel product codenamed: Knights Corner, and are not backward
- * compatible with other Intel products. Additionally, Intel will NOT
- * support the codes or instruction set in future products.
- *
- * Adapted from:
- *
- * virtio for kvm on s390
- *
- * Copyright IBM Corp. 2008
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License (version 2 only)
- * as published by the Free Software Foundation.
- *
- * Author(s): Christian Borntraeger <borntraeger@de.ibm.com>
- *
- * Intel MIC Card driver.
- *
- */
-#include <linux/delay.h>
-#include <linux/slab.h>
-#include <linux/virtio_config.h>
-
-#include "../common/mic_dev.h"
-#include "mic_virtio.h"
-
-#define VIRTIO_SUBCODE_64 0x0D00
-
-#define MIC_MAX_VRINGS 4
-struct mic_vdev {
- struct virtio_device vdev;
- struct mic_device_desc __iomem *desc;
- struct mic_device_ctrl __iomem *dc;
- struct mic_device *mdev;
- void __iomem *vr[MIC_MAX_VRINGS];
- int used_size[MIC_MAX_VRINGS];
- struct completion reset_done;
- struct mic_irq *virtio_cookie;
- int c2h_vdev_db;
-};
-
-static struct mic_irq *virtio_config_cookie;
-#define to_micvdev(vd) container_of(vd, struct mic_vdev, vdev)
-
-/* Helper API to obtain the parent of the virtio device */
-static inline struct device *mic_dev(struct mic_vdev *mvdev)
-{
- return mvdev->vdev.dev.parent;
-}
-
-/* This gets the device's feature bits. */
-static u64 mic_get_features(struct virtio_device *vdev)
-{
- unsigned int i, bits;
- u32 features = 0;
- struct mic_device_desc __iomem *desc = to_micvdev(vdev)->desc;
- u8 __iomem *in_features = mic_vq_features(desc);
- int feature_len = ioread8(&desc->feature_len);
-
- bits = min_t(unsigned, feature_len, sizeof(features)) * 8;
- for (i = 0; i < bits; i++)
- if (ioread8(&in_features[i / 8]) & (BIT(i % 8)))
- features |= BIT(i);
-
- return features;
-}
-
-static int mic_finalize_features(struct virtio_device *vdev)
-{
- unsigned int i, bits;
- struct mic_device_desc __iomem *desc = to_micvdev(vdev)->desc;
- u8 feature_len = ioread8(&desc->feature_len);
- /* Second half of bitmap is features we accept. */
- u8 __iomem *out_features =
- mic_vq_features(desc) + feature_len;
-
- /* Give virtio_ring a chance to accept features. */
- vring_transport_features(vdev);
-
- /* Make sure we don't have any features > 32 bits! */
- BUG_ON((u32)vdev->features != vdev->features);
-
- memset_io(out_features, 0, feature_len);
- bits = min_t(unsigned, feature_len,
- sizeof(vdev->features)) * 8;
- for (i = 0; i < bits; i++) {
- if (__virtio_test_bit(vdev, i))
- iowrite8(ioread8(&out_features[i / 8]) | (1 << (i % 8)),
- &out_features[i / 8]);
- }
-
- return 0;
-}
-
-/*
- * Reading and writing elements in config space
- */
-static void mic_get(struct virtio_device *vdev, unsigned int offset,
- void *buf, unsigned len)
-{
- struct mic_device_desc __iomem *desc = to_micvdev(vdev)->desc;
-
- if (offset + len > ioread8(&desc->config_len))
- return;
- memcpy_fromio(buf, mic_vq_configspace(desc) + offset, len);
-}
-
-static void mic_set(struct virtio_device *vdev, unsigned int offset,
- const void *buf, unsigned len)
-{
- struct mic_device_desc __iomem *desc = to_micvdev(vdev)->desc;
-
- if (offset + len > ioread8(&desc->config_len))
- return;
- memcpy_toio(mic_vq_configspace(desc) + offset, buf, len);
-}
-
-/*
- * The operations to get and set the status word just access the status
- * field of the device descriptor. set_status also interrupts the host
- * to tell about status changes.
- */
-static u8 mic_get_status(struct virtio_device *vdev)
-{
- return ioread8(&to_micvdev(vdev)->desc->status);
-}
-
-static void mic_set_status(struct virtio_device *vdev, u8 status)
-{
- struct mic_vdev *mvdev = to_micvdev(vdev);
- if (!status)
- return;
- iowrite8(status, &mvdev->desc->status);
- mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
-}
-
-/* Inform host on a virtio device reset and wait for ack from host */
-static void mic_reset_inform_host(struct virtio_device *vdev)
-{
- struct mic_vdev *mvdev = to_micvdev(vdev);
- struct mic_device_ctrl __iomem *dc = mvdev->dc;
- int retry;
-
- iowrite8(0, &dc->host_ack);
- iowrite8(1, &dc->vdev_reset);
- mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
-
- /* Wait till host completes all card accesses and acks the reset */
- for (retry = 100; retry--;) {
- if (ioread8(&dc->host_ack))
- break;
- msleep(100);
- };
-
- dev_dbg(mic_dev(mvdev), "%s: retry: %d\n", __func__, retry);
-
- /* Reset status to 0 in case we timed out */
- iowrite8(0, &mvdev->desc->status);
-}
-
-static void mic_reset(struct virtio_device *vdev)
-{
- struct mic_vdev *mvdev = to_micvdev(vdev);
-
- dev_dbg(mic_dev(mvdev), "%s: virtio id %d\n",
- __func__, vdev->id.device);
-
- mic_reset_inform_host(vdev);
- complete_all(&mvdev->reset_done);
-}
-
-/*
- * The virtio_ring code calls this API when it wants to notify the Host.
- */
-static bool mic_notify(struct virtqueue *vq)
-{
- struct mic_vdev *mvdev = vq->priv;
-
- mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
- return true;
-}
-
-static void mic_del_vq(struct virtqueue *vq, int n)
-{
- struct mic_vdev *mvdev = to_micvdev(vq->vdev);
- struct vring *vr = (struct vring *)(vq + 1);
-
- free_pages((unsigned long) vr->used, get_order(mvdev->used_size[n]));
- vring_del_virtqueue(vq);
- mic_card_unmap(mvdev->mdev, mvdev->vr[n]);
- mvdev->vr[n] = NULL;
-}
-
-static void mic_del_vqs(struct virtio_device *vdev)
-{
- struct mic_vdev *mvdev = to_micvdev(vdev);
- struct virtqueue *vq, *n;
- int idx = 0;
-
- dev_dbg(mic_dev(mvdev), "%s\n", __func__);
-
- list_for_each_entry_safe(vq, n, &vdev->vqs, list)
- mic_del_vq(vq, idx++);
-}
-
-/*
- * This routine will assign vring's allocated in host/io memory. Code in
- * virtio_ring.c however continues to access this io memory as if it were local
- * memory without io accessors.
- */
-static struct virtqueue *mic_find_vq(struct virtio_device *vdev,
- unsigned index,
- void (*callback)(struct virtqueue *vq),
- const char *name)
-{
- struct mic_vdev *mvdev = to_micvdev(vdev);
- struct mic_vqconfig __iomem *vqconfig;
- struct mic_vqconfig config;
- struct virtqueue *vq;
- void __iomem *va;
- struct _mic_vring_info __iomem *info;
- void *used;
- int vr_size, _vr_size, err, magic;
- struct vring *vr;
- u8 type = ioread8(&mvdev->desc->type);
-
- if (index >= ioread8(&mvdev->desc->num_vq))
- return ERR_PTR(-ENOENT);
-
- if (!name)
- return ERR_PTR(-ENOENT);
-
- /* First assign the vring's allocated in host memory */
- vqconfig = mic_vq_config(mvdev->desc) + index;
- memcpy_fromio(&config, vqconfig, sizeof(config));
- _vr_size = vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN);
- vr_size = PAGE_ALIGN(_vr_size + sizeof(struct _mic_vring_info));
- va = mic_card_map(mvdev->mdev, le64_to_cpu(config.address), vr_size);
- if (!va)
- return ERR_PTR(-ENOMEM);
- mvdev->vr[index] = va;
- memset_io(va, 0x0, _vr_size);
- vq = vring_new_virtqueue(index, le16_to_cpu(config.num),
- MIC_VIRTIO_RING_ALIGN, vdev, false,
- (void __force *)va, mic_notify, callback,
- name);
- if (!vq) {
- err = -ENOMEM;
- goto unmap;
- }
- info = va + _vr_size;
- magic = ioread32(&info->magic);
-
- if (WARN(magic != MIC_MAGIC + type + index, "magic mismatch")) {
- err = -EIO;
- goto unmap;
- }
-
- /* Allocate and reassign used ring now */
- mvdev->used_size[index] = PAGE_ALIGN(sizeof(__u16) * 3 +
- sizeof(struct vring_used_elem) *
- le16_to_cpu(config.num));
- used = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
- get_order(mvdev->used_size[index]));
- if (!used) {
- err = -ENOMEM;
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, err);
- goto del_vq;
- }
- iowrite64(virt_to_phys(used), &vqconfig->used_address);
-
- /*
- * To reassign the used ring here we are directly accessing
- * struct vring_virtqueue which is a private data structure
- * in virtio_ring.c. At the minimum, a BUILD_BUG_ON() in
- * vring_new_virtqueue() would ensure that
- * (&vq->vring == (struct vring *) (&vq->vq + 1));
- */
- vr = (struct vring *)(vq + 1);
- vr->used = used;
-
- vq->priv = mvdev;
- return vq;
-del_vq:
- vring_del_virtqueue(vq);
-unmap:
- mic_card_unmap(mvdev->mdev, mvdev->vr[index]);
- return ERR_PTR(err);
-}
-
-static int mic_find_vqs(struct virtio_device *vdev, unsigned nvqs,
- struct virtqueue *vqs[],
- vq_callback_t *callbacks[],
- const char * const names[])
-{
- struct mic_vdev *mvdev = to_micvdev(vdev);
- struct mic_device_ctrl __iomem *dc = mvdev->dc;
- int i, err, retry;
-
- /* We must have this many virtqueues. */
- if (nvqs > ioread8(&mvdev->desc->num_vq))
- return -ENOENT;
-
- for (i = 0; i < nvqs; ++i) {
- dev_dbg(mic_dev(mvdev), "%s: %d: %s\n",
- __func__, i, names[i]);
- vqs[i] = mic_find_vq(vdev, i, callbacks[i], names[i]);
- if (IS_ERR(vqs[i])) {
- err = PTR_ERR(vqs[i]);
- goto error;
- }
- }
-
- iowrite8(1, &dc->used_address_updated);
- /*
- * Send an interrupt to the host to inform it that used
- * rings have been re-assigned.
- */
- mic_send_intr(mvdev->mdev, mvdev->c2h_vdev_db);
- for (retry = 100; retry--;) {
- if (!ioread8(&dc->used_address_updated))
- break;
- msleep(100);
- };
-
- dev_dbg(mic_dev(mvdev), "%s: retry: %d\n", __func__, retry);
- if (!retry) {
- err = -ENODEV;
- goto error;
- }
-
- return 0;
-error:
- mic_del_vqs(vdev);
- return err;
-}
-
-/*
- * The config ops structure as defined by virtio config
- */
-static struct virtio_config_ops mic_vq_config_ops = {
- .get_features = mic_get_features,
- .finalize_features = mic_finalize_features,
- .get = mic_get,
- .set = mic_set,
- .get_status = mic_get_status,
- .set_status = mic_set_status,
- .reset = mic_reset,
- .find_vqs = mic_find_vqs,
- .del_vqs = mic_del_vqs,
-};
-
-static irqreturn_t
-mic_virtio_intr_handler(int irq, void *data)
-{
- struct mic_vdev *mvdev = data;
- struct virtqueue *vq;
-
- mic_ack_interrupt(mvdev->mdev);
- list_for_each_entry(vq, &mvdev->vdev.vqs, list)
- vring_interrupt(0, vq);
-
- return IRQ_HANDLED;
-}
-
-static void mic_virtio_release_dev(struct device *_d)
-{
- /*
- * No need for a release method similar to virtio PCI.
- * Provide an empty one to avoid getting a warning from core.
- */
-}
-
-/*
- * adds a new device and register it with virtio
- * appropriate drivers are loaded by the device model
- */
-static int mic_add_device(struct mic_device_desc __iomem *d,
- unsigned int offset, struct mic_driver *mdrv)
-{
- struct mic_vdev *mvdev;
- int ret;
- int virtio_db;
- u8 type = ioread8(&d->type);
-
- mvdev = kzalloc(sizeof(*mvdev), GFP_KERNEL);
- if (!mvdev) {
- dev_err(mdrv->dev, "Cannot allocate mic dev %u type %u\n",
- offset, type);
- return -ENOMEM;
- }
-
- mvdev->mdev = &mdrv->mdev;
- mvdev->vdev.dev.parent = mdrv->dev;
- mvdev->vdev.dev.release = mic_virtio_release_dev;
- mvdev->vdev.id.device = type;
- mvdev->vdev.config = &mic_vq_config_ops;
- mvdev->desc = d;
- mvdev->dc = (void __iomem *)d + mic_aligned_desc_size(d);
- init_completion(&mvdev->reset_done);
-
- virtio_db = mic_next_card_db();
- mvdev->virtio_cookie = mic_request_card_irq(mic_virtio_intr_handler,
- NULL, "virtio intr", mvdev, virtio_db);
- if (IS_ERR(mvdev->virtio_cookie)) {
- ret = PTR_ERR(mvdev->virtio_cookie);
- goto kfree;
- }
- iowrite8((u8)virtio_db, &mvdev->dc->h2c_vdev_db);
- mvdev->c2h_vdev_db = ioread8(&mvdev->dc->c2h_vdev_db);
-
- ret = register_virtio_device(&mvdev->vdev);
- if (ret) {
- dev_err(mic_dev(mvdev),
- "Failed to register mic device %u type %u\n",
- offset, type);
- goto free_irq;
- }
- iowrite64((u64)mvdev, &mvdev->dc->vdev);
- dev_dbg(mic_dev(mvdev), "%s: registered mic device %u type %u mvdev %p\n",
- __func__, offset, type, mvdev);
-
- return 0;
-
-free_irq:
- mic_free_card_irq(mvdev->virtio_cookie, mvdev);
-kfree:
- kfree(mvdev);
- return ret;
-}
-
-/*
- * match for a mic device with a specific desc pointer
- */
-static int mic_match_desc(struct device *dev, void *data)
-{
- struct virtio_device *vdev = dev_to_virtio(dev);
- struct mic_vdev *mvdev = to_micvdev(vdev);
-
- return mvdev->desc == (void __iomem *)data;
-}
-
-static void mic_handle_config_change(struct mic_device_desc __iomem *d,
- unsigned int offset, struct mic_driver *mdrv)
-{
- struct mic_device_ctrl __iomem *dc
- = (void __iomem *)d + mic_aligned_desc_size(d);
- struct mic_vdev *mvdev = (struct mic_vdev *)ioread64(&dc->vdev);
-
- if (ioread8(&dc->config_change) != MIC_VIRTIO_PARAM_CONFIG_CHANGED)
- return;
-
- dev_dbg(mdrv->dev, "%s %d\n", __func__, __LINE__);
- virtio_config_changed(&mvdev->vdev);
- iowrite8(1, &dc->guest_ack);
-}
-
-/*
- * removes a virtio device if a hot remove event has been
- * requested by the host.
- */
-static int mic_remove_device(struct mic_device_desc __iomem *d,
- unsigned int offset, struct mic_driver *mdrv)
-{
- struct mic_device_ctrl __iomem *dc
- = (void __iomem *)d + mic_aligned_desc_size(d);
- struct mic_vdev *mvdev = (struct mic_vdev *)ioread64(&dc->vdev);
- u8 status;
- int ret = -1;
-
- if (ioread8(&dc->config_change) == MIC_VIRTIO_PARAM_DEV_REMOVE) {
- dev_dbg(mdrv->dev,
- "%s %d config_change %d type %d mvdev %p\n",
- __func__, __LINE__,
- ioread8(&dc->config_change), ioread8(&d->type), mvdev);
-
- status = ioread8(&d->status);
- reinit_completion(&mvdev->reset_done);
- unregister_virtio_device(&mvdev->vdev);
- mic_free_card_irq(mvdev->virtio_cookie, mvdev);
- if (status & VIRTIO_CONFIG_S_DRIVER_OK)
- wait_for_completion(&mvdev->reset_done);
- kfree(mvdev);
- iowrite8(1, &dc->guest_ack);
- dev_dbg(mdrv->dev, "%s %d guest_ack %d\n",
- __func__, __LINE__, ioread8(&dc->guest_ack));
- ret = 0;
- }
-
- return ret;
-}
-
-#define REMOVE_DEVICES true
-
-static void mic_scan_devices(struct mic_driver *mdrv, bool remove)
-{
- s8 type;
- unsigned int i;
- struct mic_device_desc __iomem *d;
- struct mic_device_ctrl __iomem *dc;
- struct device *dev;
- int ret;
-
- for (i = sizeof(struct mic_bootparam); i < MIC_DP_SIZE;
- i += mic_total_desc_size(d)) {
- d = mdrv->dp + i;
- dc = (void __iomem *)d + mic_aligned_desc_size(d);
- /*
- * This read barrier is paired with the corresponding write
- * barrier on the host which is inserted before adding or
- * removing a virtio device descriptor, by updating the type.
- */
- rmb();
- type = ioread8(&d->type);
-
- /* end of list */
- if (type == 0)
- break;
-
- if (type == -1)
- continue;
-
- /* device already exists */
- dev = device_find_child(mdrv->dev, (void __force *)d,
- mic_match_desc);
- if (dev) {
- if (remove)
- iowrite8(MIC_VIRTIO_PARAM_DEV_REMOVE,
- &dc->config_change);
- put_device(dev);
- mic_handle_config_change(d, i, mdrv);
- ret = mic_remove_device(d, i, mdrv);
- if (!ret && !remove)
- iowrite8(-1, &d->type);
- if (remove) {
- iowrite8(0, &dc->config_change);
- iowrite8(0, &dc->guest_ack);
- }
- continue;
- }
-
- /* new device */
- dev_dbg(mdrv->dev, "%s %d Adding new virtio device %p\n",
- __func__, __LINE__, d);
- if (!remove)
- mic_add_device(d, i, mdrv);
- }
-}
-
-/*
- * mic_hotplug_device tries to find changes in the device page.
- */
-static void mic_hotplug_devices(struct work_struct *work)
-{
- struct mic_driver *mdrv = container_of(work,
- struct mic_driver, hotplug_work);
-
- mic_scan_devices(mdrv, !REMOVE_DEVICES);
-}
-
-/*
- * Interrupt handler for hot plug/config changes etc.
- */
-static irqreturn_t
-mic_extint_handler(int irq, void *data)
-{
- struct mic_driver *mdrv = (struct mic_driver *)data;
-
- dev_dbg(mdrv->dev, "%s %d hotplug work\n",
- __func__, __LINE__);
- mic_ack_interrupt(&mdrv->mdev);
- schedule_work(&mdrv->hotplug_work);
- return IRQ_HANDLED;
-}
-
-/*
- * Init function for virtio
- */
-int mic_devices_init(struct mic_driver *mdrv)
-{
- int rc;
- struct mic_bootparam __iomem *bootparam;
- int config_db;
-
- INIT_WORK(&mdrv->hotplug_work, mic_hotplug_devices);
- mic_scan_devices(mdrv, !REMOVE_DEVICES);
-
- config_db = mic_next_card_db();
- virtio_config_cookie = mic_request_card_irq(mic_extint_handler, NULL,
- "virtio_config_intr", mdrv,
- config_db);
- if (IS_ERR(virtio_config_cookie)) {
- rc = PTR_ERR(virtio_config_cookie);
- goto exit;
- }
-
- bootparam = mdrv->dp;
- iowrite8(config_db, &bootparam->h2c_config_db);
- return 0;
-exit:
- return rc;
-}
-
-/*
- * Uninit function for virtio
- */
-void mic_devices_uninit(struct mic_driver *mdrv)
-{
- struct mic_bootparam __iomem *bootparam = mdrv->dp;
- iowrite8(-1, &bootparam->h2c_config_db);
- mic_free_card_irq(virtio_config_cookie, mdrv);
- flush_work(&mdrv->hotplug_work);
- mic_scan_devices(mdrv, REMOVE_DEVICES);
-}
diff --git a/drivers/misc/mic/card/mic_virtio.h b/drivers/misc/mic/card/mic_virtio.h
deleted file mode 100644
index d0407ba..0000000
--- a/drivers/misc/mic/card/mic_virtio.h
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Intel MIC Platform Software Stack (MPSS)
- *
- * Copyright(c) 2013 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * The full GNU General Public License is included in this distribution in
- * the file called "COPYING".
- *
- * Disclaimer: The codes contained in these modules may be specific to
- * the Intel Software Development Platform codenamed: Knights Ferry, and
- * the Intel product codenamed: Knights Corner, and are not backward
- * compatible with other Intel products. Additionally, Intel will NOT
- * support the codes or instruction set in future products.
- *
- * Intel MIC Card driver.
- *
- */
-#ifndef __MIC_CARD_VIRTIO_H
-#define __MIC_CARD_VIRTIO_H
-
-#include <linux/mic_common.h>
-#include "mic_device.h"
-
-/*
- * 64 bit I/O access
- */
-#ifndef ioread64
-#define ioread64 readq
-#endif
-#ifndef iowrite64
-#define iowrite64 writeq
-#endif
-
-static inline unsigned mic_desc_size(struct mic_device_desc __iomem *desc)
-{
- return sizeof(*desc)
- + ioread8(&desc->num_vq) * sizeof(struct mic_vqconfig)
- + ioread8(&desc->feature_len) * 2
- + ioread8(&desc->config_len);
-}
-
-static inline struct mic_vqconfig __iomem *
-mic_vq_config(struct mic_device_desc __iomem *desc)
-{
- return (struct mic_vqconfig __iomem *)(desc + 1);
-}
-
-static inline __u8 __iomem *
-mic_vq_features(struct mic_device_desc __iomem *desc)
-{
- return (__u8 __iomem *)(mic_vq_config(desc) + ioread8(&desc->num_vq));
-}
-
-static inline __u8 __iomem *
-mic_vq_configspace(struct mic_device_desc __iomem *desc)
-{
- return mic_vq_features(desc) + ioread8(&desc->feature_len) * 2;
-}
-static inline unsigned mic_total_desc_size(struct mic_device_desc __iomem *desc)
-{
- return mic_aligned_desc_size(desc) + sizeof(struct mic_device_ctrl);
-}
-
-int mic_devices_init(struct mic_driver *mdrv);
-void mic_devices_uninit(struct mic_driver *mdrv);
-
-#endif
diff --git a/drivers/misc/mic/card/mic_x100.c b/drivers/misc/mic/card/mic_x100.c
index b2958ce..b9f0710 100644
--- a/drivers/misc/mic/card/mic_x100.c
+++ b/drivers/misc/mic/card/mic_x100.c
@@ -326,6 +326,7 @@
goto done;
}
+ request_module("mic_x100_dma");
mic_init_card_debugfs();
ret = platform_device_register(&mic_platform_dev);
if (ret) {
diff --git a/drivers/misc/mic/cosm/cosm_main.c b/drivers/misc/mic/cosm/cosm_main.c
index 4b4b356..7005cb1 100644
--- a/drivers/misc/mic/cosm/cosm_main.c
+++ b/drivers/misc/mic/cosm/cosm_main.c
@@ -153,8 +153,10 @@
* stop(..) calls device_unregister and will crash the system if
* called multiple times.
*/
- bool call_hw_ops = cdev->state != MIC_RESET_FAILED &&
- cdev->state != MIC_READY;
+ u8 state = cdev->state == MIC_RESETTING ?
+ cdev->prev_state : cdev->state;
+ bool call_hw_ops = state != MIC_RESET_FAILED &&
+ state != MIC_READY;
if (cdev->state != MIC_RESETTING)
cosm_set_state(cdev, MIC_RESETTING);
@@ -195,8 +197,11 @@
mutex_lock(&cdev->cosm_mutex);
if (cdev->state != MIC_READY) {
- cosm_set_state(cdev, MIC_RESETTING);
- schedule_work(&cdev->reset_trigger_work);
+ if (cdev->state != MIC_RESETTING) {
+ cdev->prev_state = cdev->state;
+ cosm_set_state(cdev, MIC_RESETTING);
+ schedule_work(&cdev->reset_trigger_work);
+ }
} else {
dev_err(&cdev->dev, "%s %d MIC is READY\n", __func__, __LINE__);
rc = -EINVAL;
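
Together, the two hunks above remember the state a card is leaving, so the reset worker can still decide whether the hardware stop ops are required even though cdev->state has already been overwritten with MIC_RESETTING by the time it runs. A condensed sketch of the intended flow (the MIC_ONLINE starting state is only an example):

    /* reset entry point: record the state being left */
    cdev->prev_state = cdev->state;		/* e.g. MIC_ONLINE */
    cosm_set_state(cdev, MIC_RESETTING);
    schedule_work(&cdev->reset_trigger_work);

    /* reset worker: judge the hw ops on the remembered state */
    u8 state = cdev->state == MIC_RESETTING ?
    		cdev->prev_state : cdev->state;
    bool call_hw_ops = state != MIC_RESET_FAILED &&
    		   state != MIC_READY;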
diff --git a/drivers/misc/mic/host/Makefile b/drivers/misc/mic/host/Makefile
index 004d3db..f3b5023 100644
--- a/drivers/misc/mic/host/Makefile
+++ b/drivers/misc/mic/host/Makefile
@@ -9,5 +9,3 @@
mic_host-objs += mic_intr.o
mic_host-objs += mic_boot.o
mic_host-objs += mic_debugfs.o
-mic_host-objs += mic_fops.o
-mic_host-objs += mic_virtio.o
diff --git a/drivers/misc/mic/host/mic_boot.c b/drivers/misc/mic/host/mic_boot.c
index 7845564..8c91c99 100644
--- a/drivers/misc/mic/host/mic_boot.c
+++ b/drivers/misc/mic/host/mic_boot.c
@@ -25,10 +25,117 @@
#include <linux/mic_common.h>
#include <linux/mic_bus.h>
#include "../bus/scif_bus.h"
+#include "../bus/vop_bus.h"
#include "../common/mic_dev.h"
#include "mic_device.h"
#include "mic_smpt.h"
-#include "mic_virtio.h"
+
+static inline struct mic_device *vpdev_to_mdev(struct device *dev)
+{
+ return dev_get_drvdata(dev->parent);
+}
+
+static dma_addr_t
+_mic_dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+ void *va = phys_to_virt(page_to_phys(page)) + offset;
+ struct mic_device *mdev = vpdev_to_mdev(dev);
+
+ return mic_map_single(mdev, va, size);
+}
+
+static void _mic_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
+ size_t size, enum dma_data_direction dir,
+ struct dma_attrs *attrs)
+{
+ struct mic_device *mdev = vpdev_to_mdev(dev);
+
+ mic_unmap_single(mdev, dma_addr, size);
+}
+
+static const struct dma_map_ops _mic_dma_ops = {
+ .map_page = _mic_dma_map_page,
+ .unmap_page = _mic_dma_unmap_page,
+};
+
+static struct mic_irq *
+__mic_request_irq(struct vop_device *vpdev,
+ irqreturn_t (*func)(int irq, void *data),
+ const char *name, void *data, int intr_src)
+{
+ struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
+
+ return mic_request_threaded_irq(mdev, func, NULL, name, data,
+ intr_src, MIC_INTR_DB);
+}
+
+static void __mic_free_irq(struct vop_device *vpdev,
+ struct mic_irq *cookie, void *data)
+{
+ struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
+
+ return mic_free_irq(mdev, cookie, data);
+}
+
+static void __mic_ack_interrupt(struct vop_device *vpdev, int num)
+{
+ struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
+
+ mdev->ops->intr_workarounds(mdev);
+}
+
+static int __mic_next_db(struct vop_device *vpdev)
+{
+ struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
+
+ return mic_next_db(mdev);
+}
+
+static void *__mic_get_dp(struct vop_device *vpdev)
+{
+ struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
+
+ return mdev->dp;
+}
+
+static void __iomem *__mic_get_remote_dp(struct vop_device *vpdev)
+{
+ return NULL;
+}
+
+static void __mic_send_intr(struct vop_device *vpdev, int db)
+{
+ struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
+
+ mdev->ops->send_intr(mdev, db);
+}
+
+static void __iomem *__mic_ioremap(struct vop_device *vpdev,
+ dma_addr_t pa, size_t len)
+{
+ struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
+
+ return mdev->aper.va + pa;
+}
+
+static void __mic_iounmap(struct vop_device *vpdev, void __iomem *va)
+{
+ /* nothing to do */
+}
+
+static struct vop_hw_ops vop_hw_ops = {
+ .request_irq = __mic_request_irq,
+ .free_irq = __mic_free_irq,
+ .ack_interrupt = __mic_ack_interrupt,
+ .next_db = __mic_next_db,
+ .get_dp = __mic_get_dp,
+ .get_remote_dp = __mic_get_remote_dp,
+ .send_intr = __mic_send_intr,
+ .ioremap = __mic_ioremap,
+ .iounmap = __mic_iounmap,
+};
static inline struct mic_device *scdev_to_mdev(struct scif_hw_dev *scdev)
{
@@ -315,7 +422,6 @@
dma_cap_mask_t mask;
struct dma_chan *chan;
- request_module("mic_x100_dma");
dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
@@ -387,9 +493,18 @@
goto dma_free;
}
+ mdev->vpdev = vop_register_device(&mdev->pdev->dev,
+ VOP_DEV_TRNSP, &_mic_dma_ops,
+ &vop_hw_ops, id + 1, &mdev->aper,
+ mdev->dma_ch[0]);
+ if (IS_ERR(mdev->vpdev)) {
+ rc = PTR_ERR(mdev->vpdev);
+ goto scif_remove;
+ }
+
rc = mdev->ops->load_mic_fw(mdev, NULL);
if (rc)
- goto scif_remove;
+ goto vop_remove;
mic_smpt_restore(mdev);
mic_intr_restore(mdev);
mdev->intr_ops->enable_interrupts(mdev);
@@ -397,6 +512,8 @@
mdev->ops->write_spad(mdev, MIC_DPHI_SPAD, mdev->dp_dma_addr >> 32);
mdev->ops->send_firmware_intr(mdev);
goto unlock_ret;
+vop_remove:
+ vop_unregister_device(mdev->vpdev);
scif_remove:
scif_unregister_device(mdev->scdev);
dma_free:
@@ -423,7 +540,7 @@
* will be the first to be registered and the last to be
* unregistered.
*/
- mic_virtio_reset_devices(mdev);
+ vop_unregister_device(mdev->vpdev);
scif_unregister_device(mdev->scdev);
mic_free_dma_chans(mdev);
mbus_unregister_device(mdev->dma_mbdev);
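
The host ops above and the card ops added earlier in this series implement the same vop_hw_ops contract, which is what will let the VOP driver itself stay transport-agnostic. A sketch of a caller mapping a ring whose bus address came from the device page (pa and len are hypothetical locals; on the host ioremap reduces to an aperture offset, on the card it goes through mic_card_map()):

    void __iomem *va = vpdev->hw_ops->ioremap(vpdev, pa, len);
    if (va) {
    	/* ... access the ring through va ... */
    	vpdev->hw_ops->iounmap(vpdev, va);
    }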
diff --git a/drivers/misc/mic/host/mic_debugfs.c b/drivers/misc/mic/host/mic_debugfs.c
index 1058160..0a9daba 100644
--- a/drivers/misc/mic/host/mic_debugfs.c
+++ b/drivers/misc/mic/host/mic_debugfs.c
@@ -26,7 +26,6 @@
#include "../common/mic_dev.h"
#include "mic_device.h"
#include "mic_smpt.h"
-#include "mic_virtio.h"
/* Debugfs parent dir */
static struct dentry *mic_dbg;
@@ -100,190 +99,6 @@
.release = mic_post_code_debug_release
};
-static int mic_dp_show(struct seq_file *s, void *pos)
-{
- struct mic_device *mdev = s->private;
- struct mic_device_desc *d;
- struct mic_device_ctrl *dc;
- struct mic_vqconfig *vqconfig;
- __u32 *features;
- __u8 *config;
- struct mic_bootparam *bootparam = mdev->dp;
- int i, j;
-
- seq_printf(s, "Bootparam: magic 0x%x\n",
- bootparam->magic);
- seq_printf(s, "Bootparam: h2c_config_db %d\n",
- bootparam->h2c_config_db);
- seq_printf(s, "Bootparam: node_id %d\n",
- bootparam->node_id);
- seq_printf(s, "Bootparam: c2h_scif_db %d\n",
- bootparam->c2h_scif_db);
- seq_printf(s, "Bootparam: h2c_scif_db %d\n",
- bootparam->h2c_scif_db);
- seq_printf(s, "Bootparam: scif_host_dma_addr 0x%llx\n",
- bootparam->scif_host_dma_addr);
- seq_printf(s, "Bootparam: scif_card_dma_addr 0x%llx\n",
- bootparam->scif_card_dma_addr);
-
-
- for (i = sizeof(*bootparam); i < MIC_DP_SIZE;
- i += mic_total_desc_size(d)) {
- d = mdev->dp + i;
- dc = (void *)d + mic_aligned_desc_size(d);
-
- /* end of list */
- if (d->type == 0)
- break;
-
- if (d->type == -1)
- continue;
-
- seq_printf(s, "Type %d ", d->type);
- seq_printf(s, "Num VQ %d ", d->num_vq);
- seq_printf(s, "Feature Len %d\n", d->feature_len);
- seq_printf(s, "Config Len %d ", d->config_len);
- seq_printf(s, "Shutdown Status %d\n", d->status);
-
- for (j = 0; j < d->num_vq; j++) {
- vqconfig = mic_vq_config(d) + j;
- seq_printf(s, "vqconfig[%d]: ", j);
- seq_printf(s, "address 0x%llx ", vqconfig->address);
- seq_printf(s, "num %d ", vqconfig->num);
- seq_printf(s, "used address 0x%llx\n",
- vqconfig->used_address);
- }
-
- features = (__u32 *)mic_vq_features(d);
- seq_printf(s, "Features: Host 0x%x ", features[0]);
- seq_printf(s, "Guest 0x%x\n", features[1]);
-
- config = mic_vq_configspace(d);
- for (j = 0; j < d->config_len; j++)
- seq_printf(s, "config[%d]=%d\n", j, config[j]);
-
- seq_puts(s, "Device control:\n");
- seq_printf(s, "Config Change %d ", dc->config_change);
- seq_printf(s, "Vdev reset %d\n", dc->vdev_reset);
- seq_printf(s, "Guest Ack %d ", dc->guest_ack);
- seq_printf(s, "Host ack %d\n", dc->host_ack);
- seq_printf(s, "Used address updated %d ",
- dc->used_address_updated);
- seq_printf(s, "Vdev 0x%llx\n", dc->vdev);
- seq_printf(s, "c2h doorbell %d ", dc->c2h_vdev_db);
- seq_printf(s, "h2c doorbell %d\n", dc->h2c_vdev_db);
- }
-
- return 0;
-}
-
-static int mic_dp_debug_open(struct inode *inode, struct file *file)
-{
- return single_open(file, mic_dp_show, inode->i_private);
-}
-
-static int mic_dp_debug_release(struct inode *inode, struct file *file)
-{
- return single_release(inode, file);
-}
-
-static const struct file_operations dp_ops = {
- .owner = THIS_MODULE,
- .open = mic_dp_debug_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = mic_dp_debug_release
-};
-
-static int mic_vdev_info_show(struct seq_file *s, void *unused)
-{
- struct mic_device *mdev = s->private;
- struct list_head *pos, *tmp;
- struct mic_vdev *mvdev;
- int i, j;
-
- mutex_lock(&mdev->mic_mutex);
- list_for_each_safe(pos, tmp, &mdev->vdev_list) {
- mvdev = list_entry(pos, struct mic_vdev, list);
- seq_printf(s, "VDEV type %d state %s in %ld out %ld\n",
- mvdev->virtio_id,
- mic_vdevup(mvdev) ? "UP" : "DOWN",
- mvdev->in_bytes,
- mvdev->out_bytes);
- for (i = 0; i < MIC_MAX_VRINGS; i++) {
- struct vring_desc *desc;
- struct vring_avail *avail;
- struct vring_used *used;
- struct mic_vringh *mvr = &mvdev->mvr[i];
- struct vringh *vrh = &mvr->vrh;
- int num = vrh->vring.num;
- if (!num)
- continue;
- desc = vrh->vring.desc;
- seq_printf(s, "vring i %d avail_idx %d",
- i, mvr->vring.info->avail_idx & (num - 1));
- seq_printf(s, " vring i %d avail_idx %d\n",
- i, mvr->vring.info->avail_idx);
- seq_printf(s, "vrh i %d weak_barriers %d",
- i, vrh->weak_barriers);
- seq_printf(s, " last_avail_idx %d last_used_idx %d",
- vrh->last_avail_idx, vrh->last_used_idx);
- seq_printf(s, " completed %d\n", vrh->completed);
- for (j = 0; j < num; j++) {
- seq_printf(s, "desc[%d] addr 0x%llx len %d",
- j, desc->addr, desc->len);
- seq_printf(s, " flags 0x%x next %d\n",
- desc->flags, desc->next);
- desc++;
- }
- avail = vrh->vring.avail;
- seq_printf(s, "avail flags 0x%x idx %d\n",
- vringh16_to_cpu(vrh, avail->flags),
- vringh16_to_cpu(vrh, avail->idx) & (num - 1));
- seq_printf(s, "avail flags 0x%x idx %d\n",
- vringh16_to_cpu(vrh, avail->flags),
- vringh16_to_cpu(vrh, avail->idx));
- for (j = 0; j < num; j++)
- seq_printf(s, "avail ring[%d] %d\n",
- j, avail->ring[j]);
- used = vrh->vring.used;
- seq_printf(s, "used flags 0x%x idx %d\n",
- vringh16_to_cpu(vrh, used->flags),
- vringh16_to_cpu(vrh, used->idx) & (num - 1));
- seq_printf(s, "used flags 0x%x idx %d\n",
- vringh16_to_cpu(vrh, used->flags),
- vringh16_to_cpu(vrh, used->idx));
- for (j = 0; j < num; j++)
- seq_printf(s, "used ring[%d] id %d len %d\n",
- j, vringh32_to_cpu(vrh,
- used->ring[j].id),
- vringh32_to_cpu(vrh,
- used->ring[j].len));
- }
- }
- mutex_unlock(&mdev->mic_mutex);
-
- return 0;
-}
-
-static int mic_vdev_info_debug_open(struct inode *inode, struct file *file)
-{
- return single_open(file, mic_vdev_info_show, inode->i_private);
-}
-
-static int mic_vdev_info_debug_release(struct inode *inode, struct file *file)
-{
- return single_release(inode, file);
-}
-
-static const struct file_operations vdev_info_ops = {
- .owner = THIS_MODULE,
- .open = mic_vdev_info_debug_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = mic_vdev_info_debug_release
-};
-
static int mic_msi_irq_info_show(struct seq_file *s, void *pos)
{
struct mic_device *mdev = s->private;
@@ -367,11 +182,6 @@
debugfs_create_file("post_code", 0444, mdev->dbg_dir, mdev,
&post_code_ops);
- debugfs_create_file("dp", 0444, mdev->dbg_dir, mdev, &dp_ops);
-
- debugfs_create_file("vdev_info", 0444, mdev->dbg_dir, mdev,
- &vdev_info_ops);
-
debugfs_create_file("msi_irq_info", 0444, mdev->dbg_dir, mdev,
&msi_irq_info_ops);
}
diff --git a/drivers/misc/mic/host/mic_device.h b/drivers/misc/mic/host/mic_device.h
index 461184a..52b12b2 100644
--- a/drivers/misc/mic/host/mic_device.h
+++ b/drivers/misc/mic/host/mic_device.h
@@ -29,6 +29,7 @@
#include <linux/miscdevice.h>
#include <linux/mic_bus.h>
#include "../bus/scif_bus.h"
+#include "../bus/vop_bus.h"
#include "../bus/cosm_bus.h"
#include "mic_intr.h"
@@ -64,13 +65,11 @@
* @bootaddr: MIC boot address.
* @dp: virtio device page
* @dp_dma_addr: virtio device page DMA address.
- * @name: name for the misc char device
- * @miscdev: registered misc char device
- * @vdev_list: list of virtio devices.
* @dma_mbdev: MIC BUS DMA device.
* @dma_ch - Array of DMA channels
* @num_dma_ch - Number of DMA channels available
* @scdev: SCIF device on the SCIF virtual bus.
+ * @vpdev: Virtio over PCIe device on the VOP virtual bus.
* @cosm_dev: COSM device
*/
struct mic_device {
@@ -91,13 +90,11 @@
u32 bootaddr;
void *dp;
dma_addr_t dp_dma_addr;
- char name[16];
- struct miscdevice miscdev;
- struct list_head vdev_list;
struct mbus_device *dma_mbdev;
struct dma_chan *dma_ch[MIC_MAX_DMA_CHAN];
int num_dma_ch;
struct scif_hw_dev *scdev;
+ struct vop_device *vpdev;
struct cosm_device *cosm_dev;
};
diff --git a/drivers/misc/mic/host/mic_fops.c b/drivers/misc/mic/host/mic_fops.c
deleted file mode 100644
index 8cc1d90..0000000
--- a/drivers/misc/mic/host/mic_fops.c
+++ /dev/null
@@ -1,222 +0,0 @@
-/*
- * Intel MIC Platform Software Stack (MPSS)
- *
- * Copyright(c) 2013 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * The full GNU General Public License is included in this distribution in
- * the file called "COPYING".
- *
- * Intel MIC Host driver.
- *
- */
-#include <linux/poll.h>
-#include <linux/pci.h>
-
-#include <linux/mic_common.h>
-#include "../common/mic_dev.h"
-#include "mic_device.h"
-#include "mic_fops.h"
-#include "mic_virtio.h"
-
-int mic_open(struct inode *inode, struct file *f)
-{
- struct mic_vdev *mvdev;
- struct mic_device *mdev = container_of(f->private_data,
- struct mic_device, miscdev);
-
- mvdev = kzalloc(sizeof(*mvdev), GFP_KERNEL);
- if (!mvdev)
- return -ENOMEM;
-
- init_waitqueue_head(&mvdev->waitq);
- INIT_LIST_HEAD(&mvdev->list);
- mvdev->mdev = mdev;
- mvdev->virtio_id = -1;
-
- f->private_data = mvdev;
- return 0;
-}
-
-int mic_release(struct inode *inode, struct file *f)
-{
- struct mic_vdev *mvdev = (struct mic_vdev *)f->private_data;
-
- if (-1 != mvdev->virtio_id)
- mic_virtio_del_device(mvdev);
- f->private_data = NULL;
- kfree(mvdev);
- return 0;
-}
-
-long mic_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
-{
- struct mic_vdev *mvdev = (struct mic_vdev *)f->private_data;
- void __user *argp = (void __user *)arg;
- int ret;
-
- switch (cmd) {
- case MIC_VIRTIO_ADD_DEVICE:
- {
- ret = mic_virtio_add_device(mvdev, argp);
- if (ret < 0) {
- dev_err(mic_dev(mvdev),
- "%s %d errno ret %d\n",
- __func__, __LINE__, ret);
- return ret;
- }
- break;
- }
- case MIC_VIRTIO_COPY_DESC:
- {
- struct mic_copy_desc copy;
-
- ret = mic_vdev_inited(mvdev);
- if (ret)
- return ret;
-
- if (copy_from_user(&copy, argp, sizeof(copy)))
- return -EFAULT;
-
- dev_dbg(mic_dev(mvdev),
- "%s %d === iovcnt 0x%x vr_idx 0x%x update_used %d\n",
- __func__, __LINE__, copy.iovcnt, copy.vr_idx,
- copy.update_used);
-
- ret = mic_virtio_copy_desc(mvdev, &copy);
- if (ret < 0) {
- dev_err(mic_dev(mvdev),
- "%s %d errno ret %d\n",
- __func__, __LINE__, ret);
- return ret;
- }
- if (copy_to_user(
- &((struct mic_copy_desc __user *)argp)->out_len,
- &copy.out_len, sizeof(copy.out_len))) {
- dev_err(mic_dev(mvdev), "%s %d errno ret %d\n",
- __func__, __LINE__, -EFAULT);
- return -EFAULT;
- }
- break;
- }
- case MIC_VIRTIO_CONFIG_CHANGE:
- {
- ret = mic_vdev_inited(mvdev);
- if (ret)
- return ret;
-
- ret = mic_virtio_config_change(mvdev, argp);
- if (ret < 0) {
- dev_err(mic_dev(mvdev),
- "%s %d errno ret %d\n",
- __func__, __LINE__, ret);
- return ret;
- }
- break;
- }
- default:
- return -ENOIOCTLCMD;
- };
- return 0;
-}
-
-/*
- * We return POLLIN | POLLOUT from poll when new buffers are enqueued, and
- * not when previously enqueued buffers may be available. This means that
- * in the card->host (TX) path, when userspace is unblocked by poll it
- * must drain all available descriptors or it can stall.
- */
-unsigned int mic_poll(struct file *f, poll_table *wait)
-{
- struct mic_vdev *mvdev = (struct mic_vdev *)f->private_data;
- int mask = 0;
-
- poll_wait(f, &mvdev->waitq, wait);
-
- if (mic_vdev_inited(mvdev)) {
- mask = POLLERR;
- } else if (mvdev->poll_wake) {
- mvdev->poll_wake = 0;
- mask = POLLIN | POLLOUT;
- }
-
- return mask;
-}
-
-static inline int
-mic_query_offset(struct mic_vdev *mvdev, unsigned long offset,
- unsigned long *size, unsigned long *pa)
-{
- struct mic_device *mdev = mvdev->mdev;
- unsigned long start = MIC_DP_SIZE;
- int i;
-
- /*
- * MMAP interface is as follows:
- * offset region
- * 0x0 virtio device_page
- * 0x1000 first vring
- * 0x1000 + size of 1st vring second vring
- * ....
- */
- if (!offset) {
- *pa = virt_to_phys(mdev->dp);
- *size = MIC_DP_SIZE;
- return 0;
- }
-
- for (i = 0; i < mvdev->dd->num_vq; i++) {
- struct mic_vringh *mvr = &mvdev->mvr[i];
- if (offset == start) {
- *pa = virt_to_phys(mvr->vring.va);
- *size = mvr->vring.len;
- return 0;
- }
- start += mvr->vring.len;
- }
- return -1;
-}
-
-/*
- * Maps the device page and virtio rings to user space for readonly access.
- */
-int
-mic_mmap(struct file *f, struct vm_area_struct *vma)
-{
- struct mic_vdev *mvdev = (struct mic_vdev *)f->private_data;
- unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
- unsigned long pa, size = vma->vm_end - vma->vm_start, size_rem = size;
- int i, err;
-
- err = mic_vdev_inited(mvdev);
- if (err)
- return err;
-
- if (vma->vm_flags & VM_WRITE)
- return -EACCES;
-
- while (size_rem) {
- i = mic_query_offset(mvdev, offset, &size, &pa);
- if (i < 0)
- return -EINVAL;
- err = remap_pfn_range(vma, vma->vm_start + offset,
- pa >> PAGE_SHIFT, size, vma->vm_page_prot);
- if (err)
- return err;
- dev_dbg(mic_dev(mvdev),
- "%s %d type %d size 0x%lx off 0x%lx pa 0x%lx vma 0x%lx\n",
- __func__, __LINE__, mvdev->virtio_id, size, offset,
- pa, vma->vm_start + offset);
- size_rem -= size;
- offset += size;
- }
- return 0;
-}
diff --git a/drivers/misc/mic/host/mic_fops.h b/drivers/misc/mic/host/mic_fops.h
deleted file mode 100644
index dc3893d..0000000
--- a/drivers/misc/mic/host/mic_fops.h
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Intel MIC Platform Software Stack (MPSS)
- *
- * Copyright(c) 2013 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * The full GNU General Public License is included in this distribution in
- * the file called "COPYING".
- *
- * Intel MIC Host driver.
- *
- */
-#ifndef _MIC_FOPS_H_
-#define _MIC_FOPS_H_
-
-int mic_open(struct inode *inode, struct file *filp);
-int mic_release(struct inode *inode, struct file *filp);
-ssize_t mic_read(struct file *filp, char __user *buf,
- size_t count, loff_t *pos);
-long mic_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
-int mic_mmap(struct file *f, struct vm_area_struct *vma);
-unsigned int mic_poll(struct file *f, poll_table *wait);
-
-#endif
diff --git a/drivers/misc/mic/host/mic_main.c b/drivers/misc/mic/host/mic_main.c
index 153894e..035be3e 100644
--- a/drivers/misc/mic/host/mic_main.c
+++ b/drivers/misc/mic/host/mic_main.c
@@ -27,8 +27,6 @@
#include "mic_device.h"
#include "mic_x100.h"
#include "mic_smpt.h"
-#include "mic_fops.h"
-#include "mic_virtio.h"
static const char mic_driver_name[] = "mic";
@@ -57,17 +55,6 @@
/* ID allocator for MIC devices */
static struct ida g_mic_ida;
-/* Base device node number for MIC devices */
-static dev_t g_mic_devno;
-
-static const struct file_operations mic_fops = {
- .open = mic_open,
- .release = mic_release,
- .unlocked_ioctl = mic_ioctl,
- .poll = mic_poll,
- .mmap = mic_mmap,
- .owner = THIS_MODULE,
-};
/* Initialize the device page */
static int mic_dp_init(struct mic_device *mdev)
@@ -169,7 +156,6 @@
mic_ops_init(mdev);
mutex_init(&mdev->mic_mutex);
mdev->irq_info.next_avail_src = 0;
- INIT_LIST_HEAD(&mdev->vdev_list);
}
/**
@@ -259,30 +245,15 @@
goto smpt_uninit;
}
mic_bootparam_init(mdev);
-
mic_create_debug_dir(mdev);
- mdev->miscdev.minor = MISC_DYNAMIC_MINOR;
- snprintf(mdev->name, sizeof(mdev->name), "mic%d", mdev->id);
- mdev->miscdev.name = mdev->name;
- mdev->miscdev.fops = &mic_fops;
- mdev->miscdev.parent = &mdev->pdev->dev;
- rc = misc_register(&mdev->miscdev);
- if (rc) {
- dev_err(&pdev->dev, "misc_register err id %d rc %d\n",
- mdev->id, rc);
- goto cleanup_debug_dir;
- }
-
mdev->cosm_dev = cosm_register_device(&mdev->pdev->dev, &cosm_hw_ops);
if (IS_ERR(mdev->cosm_dev)) {
rc = PTR_ERR(mdev->cosm_dev);
dev_err(&pdev->dev, "cosm_add_device failed rc %d\n", rc);
- goto misc_dereg;
+ goto cleanup_debug_dir;
}
return 0;
-misc_dereg:
- misc_deregister(&mdev->miscdev);
cleanup_debug_dir:
mic_delete_debug_dir(mdev);
mic_dp_uninit(mdev);
@@ -323,7 +294,6 @@
return;
cosm_unregister_device(mdev->cosm_dev);
- misc_deregister(&mdev->miscdev);
mic_delete_debug_dir(mdev);
mic_dp_uninit(mdev);
mic_smpt_uninit(mdev);
@@ -347,26 +317,18 @@
{
int ret;
- ret = alloc_chrdev_region(&g_mic_devno, 0,
- MIC_MAX_NUM_DEVS, mic_driver_name);
- if (ret) {
- pr_err("alloc_chrdev_region failed ret %d\n", ret);
- goto error;
- }
-
+ request_module("mic_x100_dma");
mic_init_debugfs();
ida_init(&g_mic_ida);
ret = pci_register_driver(&mic_driver);
if (ret) {
pr_err("pci_register_driver failed ret %d\n", ret);
- goto cleanup_chrdev;
+ goto cleanup_debugfs;
}
- return ret;
-cleanup_chrdev:
+ return 0;
+cleanup_debugfs:
ida_destroy(&g_mic_ida);
mic_exit_debugfs();
- unregister_chrdev_region(g_mic_devno, MIC_MAX_NUM_DEVS);
-error:
return ret;
}
@@ -375,7 +337,6 @@
pci_unregister_driver(&mic_driver);
ida_destroy(&g_mic_ida);
mic_exit_debugfs();
- unregister_chrdev_region(g_mic_devno, MIC_MAX_NUM_DEVS);
}
module_init(mic_init);
diff --git a/drivers/misc/mic/host/mic_virtio.c b/drivers/misc/mic/host/mic_virtio.c
deleted file mode 100644
index 58b107a..0000000
--- a/drivers/misc/mic/host/mic_virtio.c
+++ /dev/null
@@ -1,811 +0,0 @@
-/*
- * Intel MIC Platform Software Stack (MPSS)
- *
- * Copyright(c) 2013 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * The full GNU General Public License is included in this distribution in
- * the file called "COPYING".
- *
- * Intel MIC Host driver.
- *
- */
-#include <linux/pci.h>
-#include <linux/sched.h>
-#include <linux/uaccess.h>
-#include <linux/dmaengine.h>
-#include <linux/mic_common.h>
-#include "../common/mic_dev.h"
-#include "mic_device.h"
-#include "mic_smpt.h"
-#include "mic_virtio.h"
-
-/*
- * Size of the internal buffer used during DMA's as an intermediate buffer
- * for copy to/from user.
- */
-#define MIC_INT_DMA_BUF_SIZE PAGE_ALIGN(64 * 1024ULL)
-
-static int mic_sync_dma(struct mic_device *mdev, dma_addr_t dst,
- dma_addr_t src, size_t len)
-{
- int err = 0;
- struct dma_async_tx_descriptor *tx;
- struct dma_chan *mic_ch = mdev->dma_ch[0];
-
- if (!mic_ch) {
- err = -EBUSY;
- goto error;
- }
-
- tx = mic_ch->device->device_prep_dma_memcpy(mic_ch, dst, src, len,
- DMA_PREP_FENCE);
- if (!tx) {
- err = -ENOMEM;
- goto error;
- } else {
- dma_cookie_t cookie = tx->tx_submit(tx);
-
- err = dma_submit_error(cookie);
- if (err)
- goto error;
- err = dma_sync_wait(mic_ch, cookie);
- }
-error:
- if (err)
- dev_err(&mdev->pdev->dev, "%s %d err %d\n",
- __func__, __LINE__, err);
- return err;
-}
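
mic_sync_dma() above is the stock synchronous dmaengine memcpy idiom: prep a descriptor, submit it to obtain a cookie, then poll with dma_sync_wait() until the engine finishes. The same pattern in isolation (a sketch; 'chan' is assumed to come from dma_request_chan() or an equivalent):

  #include <linux/dmaengine.h>

  static int sync_memcpy(struct dma_chan *chan, dma_addr_t dst,
                         dma_addr_t src, size_t len)
  {
          struct dma_async_tx_descriptor *tx;
          dma_cookie_t cookie;

          tx = chan->device->device_prep_dma_memcpy(chan, dst, src,
                                                    len, 0);
          if (!tx)
                  return -ENOMEM;         /* descriptor exhaustion */
          cookie = tx->tx_submit(tx);     /* queue the transfer */
          if (dma_submit_error(cookie))
                  return -EIO;
          /* busy-waits; returns DMA_COMPLETE (0) on success */
          return dma_sync_wait(chan, cookie);
  }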
-
-/*
- * Initiates the copies across the PCIe bus from card memory to a user
- * space buffer. When transfers are done using DMA, source/destination
- * addresses and transfer length must follow the alignment requirements of
- * the MIC DMA engine.
- */
-static int mic_virtio_copy_to_user(struct mic_vdev *mvdev, void __user *ubuf,
- size_t len, u64 daddr, size_t dlen,
- int vr_idx)
-{
- struct mic_device *mdev = mvdev->mdev;
- void __iomem *dbuf = mdev->aper.va + daddr;
- struct mic_vringh *mvr = &mvdev->mvr[vr_idx];
- size_t dma_alignment = 1 << mdev->dma_ch[0]->device->copy_align;
- size_t dma_offset;
- size_t partlen;
- int err;
-
- dma_offset = daddr - round_down(daddr, dma_alignment);
- daddr -= dma_offset;
- len += dma_offset;
-
- while (len) {
- partlen = min_t(size_t, len, MIC_INT_DMA_BUF_SIZE);
-
- err = mic_sync_dma(mdev, mvr->buf_da, daddr,
- ALIGN(partlen, dma_alignment));
- if (err)
- goto err;
-
- if (copy_to_user(ubuf, mvr->buf + dma_offset,
- partlen - dma_offset)) {
- err = -EFAULT;
- goto err;
- }
- daddr += partlen;
- ubuf += partlen;
- dbuf += partlen;
- mvdev->in_bytes_dma += partlen;
- mvdev->in_bytes += partlen;
- len -= partlen;
- dma_offset = 0;
- }
- return 0;
-err:
- dev_err(mic_dev(mvdev), "%s %d err %d\n", __func__, __LINE__, err);
- return err;
-}
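
The first loop pass above absorbs the card-side misalignment: daddr is rounded down to the engine's copy alignment, the DMA fills the bounce buffer from the padded start, and copy_to_user() skips the leading dma_offset bytes. The arithmetic with concrete numbers, assuming a 64-byte alignment:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t daddr = 0x1234;                /* unaligned card offset */
          size_t align = 64;                      /* assumed copy_align */
          size_t dma_offset = daddr & (align - 1);        /* 0x34 */

          daddr -= dma_offset;                    /* 0x1200, aligned */
          printf("DMA reads from 0x%llx; user copy skips %#zx bytes\n",
                 (unsigned long long)daddr, dma_offset);
          return 0;
  }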
-
-/*
- * Initiates copies across the PCIe bus from a user space buffer to card
- * memory. When transfers are done using DMA, source/destination addresses
- * and transfer length must follow the alignment requirements of the MIC
- * DMA engine.
- */
-static int mic_virtio_copy_from_user(struct mic_vdev *mvdev, void __user *ubuf,
- size_t len, u64 daddr, size_t dlen,
- int vr_idx)
-{
- struct mic_device *mdev = mvdev->mdev;
- void __iomem *dbuf = mdev->aper.va + daddr;
- struct mic_vringh *mvr = &mvdev->mvr[vr_idx];
- size_t dma_alignment = 1 << mdev->dma_ch[0]->device->copy_align;
- size_t partlen;
- int err;
-
- if (daddr & (dma_alignment - 1)) {
- mvdev->tx_dst_unaligned += len;
- goto memcpy;
- } else if (ALIGN(len, dma_alignment) > dlen) {
- mvdev->tx_len_unaligned += len;
- goto memcpy;
- }
-
- while (len) {
- partlen = min_t(size_t, len, MIC_INT_DMA_BUF_SIZE);
-
- if (copy_from_user(mvr->buf, ubuf, partlen)) {
- err = -EFAULT;
- goto err;
- }
- err = mic_sync_dma(mdev, daddr, mvr->buf_da,
- ALIGN(partlen, dma_alignment));
- if (err)
- goto err;
- daddr += partlen;
- ubuf += partlen;
- dbuf += partlen;
- mvdev->out_bytes_dma += partlen;
- mvdev->out_bytes += partlen;
- len -= partlen;
- }
-memcpy:
- /*
- * We are copying to IO below and should ideally use something
- * like copy_from_user_toio(..) if it existed.
- */
- if (copy_from_user((void __force *)dbuf, ubuf, len)) {
- err = -EFAULT;
- goto err;
- }
- mvdev->out_bytes += len;
- return 0;
-err:
- dev_err(mic_dev(mvdev), "%s %d err %d\n", __func__, __LINE__, err);
- return err;
-}
-
-#define MIC_VRINGH_READ true
-
-/* The function to call to notify the card about added buffers */
-static void mic_notify(struct vringh *vrh)
-{
- struct mic_vringh *mvrh = container_of(vrh, struct mic_vringh, vrh);
- struct mic_vdev *mvdev = mvrh->mvdev;
- s8 db = mvdev->dc->h2c_vdev_db;
-
- if (db != -1)
- mvdev->mdev->ops->send_intr(mvdev->mdev, db);
-}
-
-/* Determine the total number of bytes consumed in a VRINGH KIOV */
-static inline u32 mic_vringh_iov_consumed(struct vringh_kiov *iov)
-{
- int i;
- u32 total = iov->consumed;
-
- for (i = 0; i < iov->i; i++)
- total += iov->iov[i].iov_len;
- return total;
-}
-
-/*
- * Traverse the VRINGH KIOV and issue the APIs to trigger the copies.
- * This API is heavily based on the vringh_iov_xfer(..) implementation
- * in vringh.c. The reason we cannot reuse vringh_iov_pull_kern(..)
- * and vringh_iov_push_kern(..) directly is because there is no
- * way to override the VRINGH xfer(..) routines as of v3.10.
- */
-static int mic_vringh_copy(struct mic_vdev *mvdev, struct vringh_kiov *iov,
- void __user *ubuf, size_t len, bool read, int vr_idx,
- size_t *out_len)
-{
- int ret = 0;
- size_t partlen, tot_len = 0;
-
- while (len && iov->i < iov->used) {
- partlen = min(iov->iov[iov->i].iov_len, len);
- if (read)
- ret = mic_virtio_copy_to_user(mvdev, ubuf, partlen,
- (u64)iov->iov[iov->i].iov_base,
- iov->iov[iov->i].iov_len,
- vr_idx);
- else
- ret = mic_virtio_copy_from_user(mvdev, ubuf, partlen,
- (u64)iov->iov[iov->i].iov_base,
- iov->iov[iov->i].iov_len,
- vr_idx);
- if (ret) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- break;
- }
- len -= partlen;
- ubuf += partlen;
- tot_len += partlen;
- iov->consumed += partlen;
- iov->iov[iov->i].iov_len -= partlen;
- iov->iov[iov->i].iov_base += partlen;
- if (!iov->iov[iov->i].iov_len) {
- /* Fix up old iov element then increment. */
- iov->iov[iov->i].iov_len = iov->consumed;
- iov->iov[iov->i].iov_base -= iov->consumed;
-
- iov->consumed = 0;
- iov->i++;
- }
- }
- *out_len = tot_len;
- return ret;
-}
-
-/*
- * Use the standard VRINGH infrastructure in the kernel to fetch new
- * descriptors, initiate the copies and update the used ring.
- */
-static int _mic_virtio_copy(struct mic_vdev *mvdev,
- struct mic_copy_desc *copy)
-{
- int ret = 0;
- u32 iovcnt = copy->iovcnt;
- struct iovec iov;
- struct iovec __user *u_iov = copy->iov;
- void __user *ubuf = NULL;
- struct mic_vringh *mvr = &mvdev->mvr[copy->vr_idx];
- struct vringh_kiov *riov = &mvr->riov;
- struct vringh_kiov *wiov = &mvr->wiov;
- struct vringh *vrh = &mvr->vrh;
- u16 *head = &mvr->head;
- struct mic_vring *vr = &mvr->vring;
- size_t len = 0, out_len;
-
- copy->out_len = 0;
- /* Fetch a new IOVEC if all previous elements have been processed */
- if (riov->i == riov->used && wiov->i == wiov->used) {
- ret = vringh_getdesc_kern(vrh, riov, wiov,
- head, GFP_KERNEL);
- /* Check if there are available descriptors */
- if (ret <= 0)
- return ret;
- }
- while (iovcnt) {
- if (!len) {
- /* Copy over a new iovec from user space. */
- ret = copy_from_user(&iov, u_iov, sizeof(*u_iov));
- if (ret) {
- ret = -EINVAL;
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- break;
- }
- len = iov.iov_len;
- ubuf = iov.iov_base;
- }
- /* Issue all the read descriptors first */
- ret = mic_vringh_copy(mvdev, riov, ubuf, len, MIC_VRINGH_READ,
- copy->vr_idx, &out_len);
- if (ret) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- break;
- }
- len -= out_len;
- ubuf += out_len;
- copy->out_len += out_len;
- /* Issue the write descriptors next */
- ret = mic_vringh_copy(mvdev, wiov, ubuf, len, !MIC_VRINGH_READ,
- copy->vr_idx, &out_len);
- if (ret) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- break;
- }
- len -= out_len;
- ubuf += out_len;
- copy->out_len += out_len;
- if (!len) {
- /* One user space iovec is now completed */
- iovcnt--;
- u_iov++;
- }
- /* Exit loop if all elements in KIOVs have been processed. */
- if (riov->i == riov->used && wiov->i == wiov->used)
- break;
- }
- /*
- * Update the used ring if a descriptor was available and some data was
- * copied in/out and the user asked for a used ring update.
- */
- if (*head != USHRT_MAX && copy->out_len && copy->update_used) {
- u32 total = 0;
-
- /* Determine the total data consumed */
- total += mic_vringh_iov_consumed(riov);
- total += mic_vringh_iov_consumed(wiov);
- vringh_complete_kern(vrh, *head, total);
- *head = USHRT_MAX;
- if (vringh_need_notify_kern(vrh) > 0)
- vringh_notify(vrh);
- vringh_kiov_cleanup(riov);
- vringh_kiov_cleanup(wiov);
- /* Update avail idx for user space */
- vr->info->avail_idx = vrh->last_avail_idx;
- }
- return ret;
-}
-
-static inline int mic_verify_copy_args(struct mic_vdev *mvdev,
- struct mic_copy_desc *copy)
-{
- if (copy->vr_idx >= mvdev->dd->num_vq) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, -EINVAL);
- return -EINVAL;
- }
- return 0;
-}
-
-/* Copy a specified number of virtio descriptors in a chain */
-int mic_virtio_copy_desc(struct mic_vdev *mvdev,
- struct mic_copy_desc *copy)
-{
- int err;
- struct mic_vringh *mvr = &mvdev->mvr[copy->vr_idx];
-
- err = mic_verify_copy_args(mvdev, copy);
- if (err)
- return err;
-
- mutex_lock(&mvr->vr_mutex);
- if (!mic_vdevup(mvdev)) {
- err = -ENODEV;
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, err);
- goto err;
- }
- err = _mic_virtio_copy(mvdev, copy);
- if (err) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, err);
- }
-err:
- mutex_unlock(&mvr->vr_mutex);
- return err;
-}
-
-static void mic_virtio_init_post(struct mic_vdev *mvdev)
-{
- struct mic_vqconfig *vqconfig = mic_vq_config(mvdev->dd);
- int i;
-
- for (i = 0; i < mvdev->dd->num_vq; i++) {
- if (!le64_to_cpu(vqconfig[i].used_address)) {
- dev_warn(mic_dev(mvdev), "used_address zero??\n");
- continue;
- }
- mvdev->mvr[i].vrh.vring.used =
- (void __force *)mvdev->mdev->aper.va +
- le64_to_cpu(vqconfig[i].used_address);
- }
-
- mvdev->dc->used_address_updated = 0;
-
- dev_dbg(mic_dev(mvdev), "%s: device type %d LINKUP\n",
- __func__, mvdev->virtio_id);
-}
-
-static inline void mic_virtio_device_reset(struct mic_vdev *mvdev)
-{
- int i;
-
- dev_dbg(mic_dev(mvdev), "%s: status %d device type %d RESET\n",
- __func__, mvdev->dd->status, mvdev->virtio_id);
-
- for (i = 0; i < mvdev->dd->num_vq; i++)
- /*
- * Avoid lockdep false positive. The + 1 is for the mic
- * mutex which is held in the reset devices code path.
- */
- mutex_lock_nested(&mvdev->mvr[i].vr_mutex, i + 1);
-
- /* 0 status means "reset" */
- mvdev->dd->status = 0;
- mvdev->dc->vdev_reset = 0;
- mvdev->dc->host_ack = 1;
-
- for (i = 0; i < mvdev->dd->num_vq; i++) {
- struct vringh *vrh = &mvdev->mvr[i].vrh;
- mvdev->mvr[i].vring.info->avail_idx = 0;
- vrh->completed = 0;
- vrh->last_avail_idx = 0;
- vrh->last_used_idx = 0;
- }
-
- for (i = 0; i < mvdev->dd->num_vq; i++)
- mutex_unlock(&mvdev->mvr[i].vr_mutex);
-}
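
The mutex_lock_nested() calls above are lockdep annotation, not different locking: the per-vring mutexes share one lock class, so acquiring several of them in a fixed order needs a distinct subclass per position, and the +1 reserves subclass 0 for mic_mutex held further up this path. The general shape (sketch; lockdep allows at most 8 subclasses):

  #include <linux/mutex.h>

  static void lock_all(struct mutex *locks, int n)
  {
          int i;

          /* always the same order; subclass = position in that order */
          for (i = 0; i < n; i++)
                  mutex_lock_nested(&locks[i], i);
  }

  static void unlock_all(struct mutex *locks, int n)
  {
          int i;

          for (i = 0; i < n; i++)
                  mutex_unlock(&locks[i]);
  }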
-
-void mic_virtio_reset_devices(struct mic_device *mdev)
-{
- struct list_head *pos, *tmp;
- struct mic_vdev *mvdev;
-
- dev_dbg(&mdev->pdev->dev, "%s\n", __func__);
-
- list_for_each_safe(pos, tmp, &mdev->vdev_list) {
- mvdev = list_entry(pos, struct mic_vdev, list);
- mic_virtio_device_reset(mvdev);
- mvdev->poll_wake = 1;
- wake_up(&mvdev->waitq);
- }
-}
-
-void mic_bh_handler(struct work_struct *work)
-{
- struct mic_vdev *mvdev = container_of(work, struct mic_vdev,
- virtio_bh_work);
-
- if (mvdev->dc->used_address_updated)
- mic_virtio_init_post(mvdev);
-
- if (mvdev->dc->vdev_reset)
- mic_virtio_device_reset(mvdev);
-
- mvdev->poll_wake = 1;
- wake_up(&mvdev->waitq);
-}
-
-static irqreturn_t mic_virtio_intr_handler(int irq, void *data)
-{
- struct mic_vdev *mvdev = data;
- struct mic_device *mdev = mvdev->mdev;
-
- mdev->ops->intr_workarounds(mdev);
- schedule_work(&mvdev->virtio_bh_work);
- return IRQ_HANDLED;
-}
-
-int mic_virtio_config_change(struct mic_vdev *mvdev,
- void __user *argp)
-{
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wake);
- int ret = 0, retry, i;
- struct mic_bootparam *bootparam = mvdev->mdev->dp;
- s8 db = bootparam->h2c_config_db;
-
- mutex_lock(&mvdev->mdev->mic_mutex);
- for (i = 0; i < mvdev->dd->num_vq; i++)
- mutex_lock_nested(&mvdev->mvr[i].vr_mutex, i + 1);
-
- if (db == -1 || mvdev->dd->type == -1) {
- ret = -EIO;
- goto exit;
- }
-
- if (copy_from_user(mic_vq_configspace(mvdev->dd),
- argp, mvdev->dd->config_len)) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, -EFAULT);
- ret = -EFAULT;
- goto exit;
- }
- mvdev->dc->config_change = MIC_VIRTIO_PARAM_CONFIG_CHANGED;
- mvdev->mdev->ops->send_intr(mvdev->mdev, db);
-
- for (retry = 100; retry--;) {
- ret = wait_event_timeout(wake,
- mvdev->dc->guest_ack, msecs_to_jiffies(100));
- if (ret)
- break;
- }
-
- dev_dbg(mic_dev(mvdev),
- "%s %d retry: %d\n", __func__, __LINE__, retry);
- mvdev->dc->config_change = 0;
- mvdev->dc->guest_ack = 0;
-exit:
- for (i = 0; i < mvdev->dd->num_vq; i++)
- mutex_unlock(&mvdev->mvr[i].vr_mutex);
- mutex_unlock(&mvdev->mdev->mic_mutex);
- return ret;
-}
-
-static int mic_copy_dp_entry(struct mic_vdev *mvdev,
- void __user *argp,
- __u8 *type,
- struct mic_device_desc **devpage)
-{
- struct mic_device *mdev = mvdev->mdev;
- struct mic_device_desc dd, *dd_config, *devp;
- struct mic_vqconfig *vqconfig;
- int ret = 0, i;
- bool slot_found = false;
-
- if (copy_from_user(&dd, argp, sizeof(dd))) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, -EFAULT);
- return -EFAULT;
- }
-
- if (mic_aligned_desc_size(&dd) > MIC_MAX_DESC_BLK_SIZE ||
- dd.num_vq > MIC_MAX_VRINGS) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, -EINVAL);
- return -EINVAL;
- }
-
- dd_config = kmalloc(mic_desc_size(&dd), GFP_KERNEL);
- if (dd_config == NULL) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, -ENOMEM);
- return -ENOMEM;
- }
- if (copy_from_user(dd_config, argp, mic_desc_size(&dd))) {
- ret = -EFAULT;
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- goto exit;
- }
-
- vqconfig = mic_vq_config(dd_config);
- for (i = 0; i < dd.num_vq; i++) {
- if (le16_to_cpu(vqconfig[i].num) > MIC_MAX_VRING_ENTRIES) {
- ret = -EINVAL;
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- goto exit;
- }
- }
-
- /* Find the first free device page entry */
- for (i = sizeof(struct mic_bootparam);
- i < MIC_DP_SIZE - mic_total_desc_size(dd_config);
- i += mic_total_desc_size(devp)) {
- devp = mdev->dp + i;
- if (devp->type == 0 || devp->type == -1) {
- slot_found = true;
- break;
- }
- }
- if (!slot_found) {
- ret = -EINVAL;
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- goto exit;
- }
- /*
- * Save off the type before doing the memcpy. Type will be set in the
- * end after completing all initialization for the new device.
- */
- *type = dd_config->type;
- dd_config->type = 0;
- memcpy(devp, dd_config, mic_desc_size(dd_config));
-
- *devpage = devp;
-exit:
- kfree(dd_config);
- return ret;
-}
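
The slot search above treats the device page as a packed array: variable-size descriptors sit back-to-back after the bootparam header, type 0 terminates the list, and type -1 marks a slot freed by hot remove. The walk on its own (a sketch using the mic_common.h helpers; the real loop also subtracts the new entry's size from the bound so the insertion is guaranteed to fit):

  #include <linux/mic_common.h>

  static struct mic_device_desc *find_free_slot(void *dp, int dp_size)
  {
          struct mic_device_desc *d;
          int i;

          for (i = sizeof(struct mic_bootparam); i < dp_size;
               i += mic_total_desc_size(d)) {
                  d = dp + i;
                  /* type 0 ends the list, -1 is a reusable slot */
                  if (d->type == 0 || d->type == -1)
                          return d;
          }
          return NULL;
  }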
-
-static void mic_init_device_ctrl(struct mic_vdev *mvdev,
- struct mic_device_desc *devpage)
-{
- struct mic_device_ctrl *dc;
-
- dc = (void *)devpage + mic_aligned_desc_size(devpage);
-
- dc->config_change = 0;
- dc->guest_ack = 0;
- dc->vdev_reset = 0;
- dc->host_ack = 0;
- dc->used_address_updated = 0;
- dc->c2h_vdev_db = -1;
- dc->h2c_vdev_db = -1;
- mvdev->dc = dc;
-}
-
-int mic_virtio_add_device(struct mic_vdev *mvdev,
- void __user *argp)
-{
- struct mic_device *mdev = mvdev->mdev;
- struct mic_device_desc *dd = NULL;
- struct mic_vqconfig *vqconfig;
- int vr_size, i, j, ret;
- u8 type = 0;
- s8 db;
- char irqname[10];
- struct mic_bootparam *bootparam = mdev->dp;
- u16 num;
- dma_addr_t vr_addr;
-
- mutex_lock(&mdev->mic_mutex);
-
- ret = mic_copy_dp_entry(mvdev, argp, &type, &dd);
- if (ret) {
- mutex_unlock(&mdev->mic_mutex);
- return ret;
- }
-
- mic_init_device_ctrl(mvdev, dd);
-
- mvdev->dd = dd;
- mvdev->virtio_id = type;
- vqconfig = mic_vq_config(dd);
- INIT_WORK(&mvdev->virtio_bh_work, mic_bh_handler);
-
- for (i = 0; i < dd->num_vq; i++) {
- struct mic_vringh *mvr = &mvdev->mvr[i];
- struct mic_vring *vr = &mvdev->mvr[i].vring;
- num = le16_to_cpu(vqconfig[i].num);
- mutex_init(&mvr->vr_mutex);
- vr_size = PAGE_ALIGN(vring_size(num, MIC_VIRTIO_RING_ALIGN) +
- sizeof(struct _mic_vring_info));
- vr->va = (void *)
- __get_free_pages(GFP_KERNEL | __GFP_ZERO,
- get_order(vr_size));
- if (!vr->va) {
- ret = -ENOMEM;
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- goto err;
- }
- vr->len = vr_size;
- vr->info = vr->va + vring_size(num, MIC_VIRTIO_RING_ALIGN);
- vr->info->magic = cpu_to_le32(MIC_MAGIC + mvdev->virtio_id + i);
- vr_addr = mic_map_single(mdev, vr->va, vr_size);
- if (mic_map_error(vr_addr)) {
- free_pages((unsigned long)vr->va, get_order(vr_size));
- ret = -ENOMEM;
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- goto err;
- }
- vqconfig[i].address = cpu_to_le64(vr_addr);
-
- vring_init(&vr->vr, num, vr->va, MIC_VIRTIO_RING_ALIGN);
- ret = vringh_init_kern(&mvr->vrh,
- *(u32 *)mic_vq_features(mvdev->dd), num, false,
- vr->vr.desc, vr->vr.avail, vr->vr.used);
- if (ret) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, ret);
- goto err;
- }
- vringh_kiov_init(&mvr->riov, NULL, 0);
- vringh_kiov_init(&mvr->wiov, NULL, 0);
- mvr->head = USHRT_MAX;
- mvr->mvdev = mvdev;
- mvr->vrh.notify = mic_notify;
- dev_dbg(&mdev->pdev->dev,
- "%s %d index %d va %p info %p vr_size 0x%x\n",
- __func__, __LINE__, i, vr->va, vr->info, vr_size);
- mvr->buf = (void *)__get_free_pages(GFP_KERNEL,
- get_order(MIC_INT_DMA_BUF_SIZE));
- mvr->buf_da = mic_map_single(mvdev->mdev, mvr->buf,
- MIC_INT_DMA_BUF_SIZE);
- }
-
- snprintf(irqname, sizeof(irqname), "mic%dvirtio%d", mdev->id,
- mvdev->virtio_id);
- mvdev->virtio_db = mic_next_db(mdev);
- mvdev->virtio_cookie = mic_request_threaded_irq(mdev,
- mic_virtio_intr_handler,
- NULL, irqname, mvdev,
- mvdev->virtio_db, MIC_INTR_DB);
- if (IS_ERR(mvdev->virtio_cookie)) {
- ret = PTR_ERR(mvdev->virtio_cookie);
- dev_dbg(&mdev->pdev->dev, "request irq failed\n");
- goto err;
- }
-
- mvdev->dc->c2h_vdev_db = mvdev->virtio_db;
-
- list_add_tail(&mvdev->list, &mdev->vdev_list);
- /*
- * Order the type update with previous stores. This write barrier
- * is paired with the corresponding read barrier before the uncached
- * system memory read of the type, on the card while scanning the
- * device page.
- */
- smp_wmb();
- dd->type = type;
-
- dev_dbg(&mdev->pdev->dev, "Added virtio device id %d\n", dd->type);
-
- db = bootparam->h2c_config_db;
- if (db != -1)
- mdev->ops->send_intr(mdev, db);
- mutex_unlock(&mdev->mic_mutex);
- return 0;
-err:
- vqconfig = mic_vq_config(dd);
- for (j = 0; j < i; j++) {
- struct mic_vringh *mvr = &mvdev->mvr[j];
- mic_unmap_single(mdev, le64_to_cpu(vqconfig[j].address),
- mvr->vring.len);
- free_pages((unsigned long)mvr->vring.va,
- get_order(mvr->vring.len));
- }
- mutex_unlock(&mdev->mic_mutex);
- return ret;
-}
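
The smp_wmb() above implements the usual publish pattern: every descriptor field is stored first, the barrier orders those stores, and only then does type become nonzero, which is the card's cue that the entry is complete. The same idea on a toy structure (sketch; smp_wmb()/smp_rmb() come via asm/barrier.h):

  struct slot {
          int data;               /* payload */
          int type;               /* 0 until published */
  };

  static void publish(struct slot *s, int data, int type)
  {
          s->data = data;         /* payload first */
          smp_wmb();              /* order payload before the flag */
          s->type = type;         /* publish */
  }

  static int consume(struct slot *s)
  {
          if (!s->type)
                  return -1;      /* not yet published */
          smp_rmb();              /* pairs with smp_wmb() in publish() */
          return s->data;
  }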
-
-void mic_virtio_del_device(struct mic_vdev *mvdev)
-{
- struct list_head *pos, *tmp;
- struct mic_vdev *tmp_mvdev;
- struct mic_device *mdev = mvdev->mdev;
- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wake);
- int i, ret, retry;
- struct mic_vqconfig *vqconfig;
- struct mic_bootparam *bootparam = mdev->dp;
- s8 db;
-
- mutex_lock(&mdev->mic_mutex);
- db = bootparam->h2c_config_db;
- if (db == -1)
- goto skip_hot_remove;
- dev_dbg(&mdev->pdev->dev,
- "Requesting hot remove id %d\n", mvdev->virtio_id);
- mvdev->dc->config_change = MIC_VIRTIO_PARAM_DEV_REMOVE;
- mdev->ops->send_intr(mdev, db);
- for (retry = 100; retry--;) {
- ret = wait_event_timeout(wake,
- mvdev->dc->guest_ack, msecs_to_jiffies(100));
- if (ret)
- break;
- }
- dev_dbg(&mdev->pdev->dev,
- "Device id %d config_change %d guest_ack %d retry %d\n",
- mvdev->virtio_id, mvdev->dc->config_change,
- mvdev->dc->guest_ack, retry);
- mvdev->dc->config_change = 0;
- mvdev->dc->guest_ack = 0;
-skip_hot_remove:
- mic_free_irq(mdev, mvdev->virtio_cookie, mvdev);
- flush_work(&mvdev->virtio_bh_work);
- vqconfig = mic_vq_config(mvdev->dd);
- for (i = 0; i < mvdev->dd->num_vq; i++) {
- struct mic_vringh *mvr = &mvdev->mvr[i];
-
- mic_unmap_single(mvdev->mdev, mvr->buf_da,
- MIC_INT_DMA_BUF_SIZE);
- free_pages((unsigned long)mvr->buf,
- get_order(MIC_INT_DMA_BUF_SIZE));
- vringh_kiov_cleanup(&mvr->riov);
- vringh_kiov_cleanup(&mvr->wiov);
- mic_unmap_single(mdev, le64_to_cpu(vqconfig[i].address),
- mvr->vring.len);
- free_pages((unsigned long)mvr->vring.va,
- get_order(mvr->vring.len));
- }
-
- list_for_each_safe(pos, tmp, &mdev->vdev_list) {
- tmp_mvdev = list_entry(pos, struct mic_vdev, list);
- if (tmp_mvdev == mvdev) {
- list_del(pos);
- dev_dbg(&mdev->pdev->dev,
- "Removing virtio device id %d\n",
- mvdev->virtio_id);
- break;
- }
- }
- /*
- * Order the type update with previous stores. This write barrier
- * is paired with the corresponding read barrier before the uncached
- * system memory read of the type, on the card while scanning the
- * device page.
- */
- smp_wmb();
- mvdev->dd->type = -1;
- mutex_unlock(&mdev->mic_mutex);
-}
diff --git a/drivers/misc/mic/host/mic_virtio.h b/drivers/misc/mic/host/mic_virtio.h
deleted file mode 100644
index a80631f..0000000
--- a/drivers/misc/mic/host/mic_virtio.h
+++ /dev/null
@@ -1,155 +0,0 @@
-/*
- * Intel MIC Platform Software Stack (MPSS)
- *
- * Copyright(c) 2013 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License, version 2, as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * General Public License for more details.
- *
- * The full GNU General Public License is included in this distribution in
- * the file called "COPYING".
- *
- * Intel MIC Host driver.
- *
- */
-#ifndef MIC_VIRTIO_H
-#define MIC_VIRTIO_H
-
-#include <linux/virtio_config.h>
-#include <linux/mic_ioctl.h>
-
-/*
- * Note on endianness.
- * 1. Host can be both BE or LE
- * 2. Guest/card is LE. Host uses le_to_cpu to access desc/avail
- * rings and ioreadXX/iowriteXX to access used ring.
- * 3. Device page exposed by host to guest contains LE values. Guest
- * accesses these using ioreadXX/iowriteXX etc. This way in general we
- * obey the virtio spec according to which guest works with native
- * endianness and host is aware of guest endianness and does all
- * required endianness conversion.
- * 4. Data provided from user space to guest (in ADD_DEVICE and
- * CONFIG_CHANGE ioctl's) is not interpreted by the driver and should be
- * in guest endianness.
- */
-
-/**
- * struct mic_vringh - Virtio ring host information.
- *
- * @vring: The MIC vring used for setting up user space mappings.
- * @vrh: The host VRINGH used for accessing the card vrings.
- * @riov: The VRINGH read kernel IOV.
- * @wiov: The VRINGH write kernel IOV.
- * @vr_mutex: Mutex for synchronizing access to the VRING.
- * @buf: Temporary kernel buffer used to copy in/out data
- * from/to the card via DMA.
- * @buf_da: dma address of buf.
- * @mvdev: Back pointer to MIC virtio device for vringh_notify(..).
- * @head: The VRINGH head index address passed to vringh_getdesc_kern(..).
- */
-struct mic_vringh {
- struct mic_vring vring;
- struct vringh vrh;
- struct vringh_kiov riov;
- struct vringh_kiov wiov;
- struct mutex vr_mutex;
- void *buf;
- dma_addr_t buf_da;
- struct mic_vdev *mvdev;
- u16 head;
-};
-
-/**
- * struct mic_vdev - Host information for a card Virtio device.
- *
- * @virtio_id - Virtio device id.
- * @waitq - Waitqueue to allow ring3 apps to poll.
- * @mdev - Back pointer to host MIC device.
- * @poll_wake - Used for waking up threads blocked in poll.
- * @out_bytes - Debug stats for number of bytes copied from host to card.
- * @in_bytes - Debug stats for number of bytes copied from card to host.
- * @out_bytes_dma - Debug stats for number of bytes copied from host to card
- * using DMA.
- * @in_bytes_dma - Debug stats for number of bytes copied from card to host
- * using DMA.
- * @tx_len_unaligned - Debug stats for number of bytes copied to the card where
- * the transfer length did not have the required DMA alignment.
- * @tx_dst_unaligned - Debug stats for number of bytes copied where the
- * destination address on the card did not have the required DMA alignment.
- * @mvr - Store per VRING data structures.
- * @virtio_bh_work - Work struct used to schedule virtio bottom half handling.
- * @dd - Virtio device descriptor.
- * @dc - Virtio device control fields.
- * @list - List of Virtio devices.
- * @virtio_db - The doorbell used by the card to interrupt the host.
- * @virtio_cookie - The cookie returned while requesting interrupts.
- */
-struct mic_vdev {
- int virtio_id;
- wait_queue_head_t waitq;
- struct mic_device *mdev;
- int poll_wake;
- unsigned long out_bytes;
- unsigned long in_bytes;
- unsigned long out_bytes_dma;
- unsigned long in_bytes_dma;
- unsigned long tx_len_unaligned;
- unsigned long tx_dst_unaligned;
- struct mic_vringh mvr[MIC_MAX_VRINGS];
- struct work_struct virtio_bh_work;
- struct mic_device_desc *dd;
- struct mic_device_ctrl *dc;
- struct list_head list;
- int virtio_db;
- struct mic_irq *virtio_cookie;
-};
-
-void mic_virtio_uninit(struct mic_device *mdev);
-int mic_virtio_add_device(struct mic_vdev *mvdev,
- void __user *argp);
-void mic_virtio_del_device(struct mic_vdev *mvdev);
-int mic_virtio_config_change(struct mic_vdev *mvdev,
- void __user *argp);
-int mic_virtio_copy_desc(struct mic_vdev *mvdev,
- struct mic_copy_desc *request);
-void mic_virtio_reset_devices(struct mic_device *mdev);
-void mic_bh_handler(struct work_struct *work);
-
-/* Helper API to obtain the MIC PCIe device */
-static inline struct device *mic_dev(struct mic_vdev *mvdev)
-{
- return &mvdev->mdev->pdev->dev;
-}
-
-/* Helper API to check if a virtio device is initialized */
-static inline int mic_vdev_inited(struct mic_vdev *mvdev)
-{
- /* Device has not been created yet */
- if (!mvdev->dd || !mvdev->dd->type) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, -EINVAL);
- return -EINVAL;
- }
-
- /* Device has been removed/deleted */
- if (mvdev->dd->type == -1) {
- dev_err(mic_dev(mvdev), "%s %d err %d\n",
- __func__, __LINE__, -ENODEV);
- return -ENODEV;
- }
-
- return 0;
-}
-
-/* Helper API to check if a virtio device is running */
-static inline bool mic_vdevup(struct mic_vdev *mvdev)
-{
- return !!mvdev->dd->status;
-}
-#endif
diff --git a/drivers/misc/mic/host/mic_x100.c b/drivers/misc/mic/host/mic_x100.c
index 8118ac4..82a973c 100644
--- a/drivers/misc/mic/host/mic_x100.c
+++ b/drivers/misc/mic/host/mic_x100.c
@@ -450,26 +450,29 @@
rc = mic_x100_get_boot_addr(mdev);
if (rc)
- goto error;
+ return rc;
/* load OS */
rc = request_firmware(&fw, mdev->cosm_dev->firmware, &mdev->pdev->dev);
if (rc < 0) {
dev_err(&mdev->pdev->dev,
"ramdisk request_firmware failed: %d %s\n",
rc, mdev->cosm_dev->firmware);
- goto error;
+ return rc;
}
if (mdev->bootaddr > mdev->aper.len - fw->size) {
rc = -EINVAL;
dev_err(&mdev->pdev->dev, "%s %d rc %d bootaddr 0x%x\n",
__func__, __LINE__, rc, mdev->bootaddr);
- release_firmware(fw);
goto error;
}
memcpy_toio(mdev->aper.va + mdev->bootaddr, fw->data, fw->size);
mdev->ops->write_spad(mdev, MIC_X100_FW_SIZE, fw->size);
- if (!strcmp(mdev->cosm_dev->bootmode, "flash"))
- goto done;
+ if (!strcmp(mdev->cosm_dev->bootmode, "flash")) {
+ rc = -EINVAL;
+ dev_err(&mdev->pdev->dev, "%s %d rc %d\n",
+ __func__, __LINE__, rc);
+ goto error;
+ }
/* load command line */
rc = mic_x100_load_command_line(mdev, fw);
if (rc) {
@@ -481,9 +484,11 @@
/* load ramdisk */
if (mdev->cosm_dev->ramdisk)
rc = mic_x100_load_ramdisk(mdev);
+
+ return rc;
+
error:
- dev_dbg(&mdev->pdev->dev, "%s %d rc %d\n", __func__, __LINE__, rc);
-done:
+ release_firmware(fw);
return rc;
}
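
The mic_x100.c hunks above consolidate the error handling: release_firmware() now lives under a single error label instead of at individual failure sites, failures before the firmware is requested simply return, and the old "flash" bootmode shortcut becomes an explicit -EINVAL. One compact variant of the pattern (a sketch, not this driver's code):

  #include <linux/device.h>
  #include <linux/firmware.h>
  #include <linux/io.h>

  static int load_fw(struct device *dev, const char *name,
                     void __iomem *aper, size_t aper_len)
  {
          const struct firmware *fw;
          int rc;

          rc = request_firmware(&fw, name, dev);
          if (rc)
                  return rc;              /* nothing acquired yet */
          if (fw->size > aper_len) {
                  rc = -EINVAL;
                  goto out;               /* single cleanup path */
          }
          memcpy_toio(aper, fw->data, fw->size);
  out:
          release_firmware(fw);
          return rc;
  }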
diff --git a/drivers/misc/mic/scif/scif_dma.c b/drivers/misc/mic/scif/scif_dma.c
index 95a13c6..cd01a0e 100644
--- a/drivers/misc/mic/scif/scif_dma.c
+++ b/drivers/misc/mic/scif/scif_dma.c
@@ -74,11 +74,6 @@
bool ordered;
};
-#ifndef list_entry_next
-#define list_entry_next(pos, member) \
- list_entry(pos->member.next, typeof(*pos), member)
-#endif
-
/**
* scif_reserve_dma_chan:
* @ep: Endpoint Descriptor.
@@ -276,13 +271,10 @@
scif_find_mmu_notifier(struct mm_struct *mm, struct scif_endpt_rma_info *rma)
{
struct scif_mmu_notif *mmn;
- struct list_head *item;
- list_for_each(item, &rma->mmn_list) {
- mmn = list_entry(item, struct scif_mmu_notif, list);
+ list_for_each_entry(mmn, &rma->mmn_list, list)
if (mmn->mm == mm)
return mmn;
- }
return NULL;
}
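
The scif_dma.c cleanups in this region are list-API modernizations: the open-coded list_for_each() plus list_entry() pair collapses into list_for_each_entry(), and the driver-private list_entry_next() macro gives way to the generic list_next_entry(), which expands to the identical list_entry(pos->member.next, ...). Illustrated on a hypothetical item type:

  #include <linux/list.h>

  struct item {
          int val;
          struct list_head list;
  };

  static struct item *find_val(struct list_head *head, int val)
  {
          struct item *it;

          /* the iterator hands back 'struct item *' directly */
          list_for_each_entry(it, head, list)
                  if (it->val == val)
                          return it;
          return NULL;
  }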
@@ -293,13 +285,12 @@
= kzalloc(sizeof(*mmn), GFP_KERNEL);
if (!mmn)
- return ERR_PTR(ENOMEM);
+ return ERR_PTR(-ENOMEM);
scif_init_mmu_notifier(mmn, current->mm, ep);
- if (mmu_notifier_register(&mmn->ep_mmu_notifier,
- current->mm)) {
+ if (mmu_notifier_register(&mmn->ep_mmu_notifier, current->mm)) {
kfree(mmn);
- return ERR_PTR(EBUSY);
+ return ERR_PTR(-EBUSY);
}
list_add(&mmn->list, &ep->rma_info.mmn_list);
return mmn;
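
The sign fixes above matter because IS_ERR() only recognizes the top 4095 values of the address space, i.e. ERR_PTR() of a negative errno. ERR_PTR(ENOMEM) with a positive value produces a small, apparently valid pointer that IS_ERR() misses, so callers would dereference it. The convention in miniature:

  #include <linux/err.h>
  #include <linux/errno.h>

  static void *alloc_thing(void)
  {
          return ERR_PTR(-ENOMEM);        /* negative errno */
  }

  static int use_thing(void)
  {
          void *p = alloc_thing();

          if (IS_ERR(p))
                  return PTR_ERR(p);      /* recovers -ENOMEM */
          /* ... use p ... */
          return 0;
  }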
@@ -851,7 +842,7 @@
(window->nr_pages << PAGE_SHIFT);
while (rem_len) {
if (offset == end_offset) {
- window = list_entry_next(window, list);
+ window = list_next_entry(window, list);
end_offset = window->offset +
(window->nr_pages << PAGE_SHIFT);
}
@@ -957,7 +948,7 @@
remaining_len -= tail_len;
while (remaining_len) {
if (offset == end_offset) {
- window = list_entry_next(window, list);
+ window = list_next_entry(window, list);
end_offset = window->offset +
(window->nr_pages << PAGE_SHIFT);
}
@@ -1064,7 +1055,7 @@
}
if (tail_len) {
if (offset == end_offset) {
- window = list_entry_next(window, list);
+ window = list_next_entry(window, list);
end_offset = window->offset +
(window->nr_pages << PAGE_SHIFT);
}
@@ -1147,13 +1138,13 @@
(dst_window->nr_pages << PAGE_SHIFT);
while (remaining_len) {
if (src_offset == end_src_offset) {
- src_window = list_entry_next(src_window, list);
+ src_window = list_next_entry(src_window, list);
end_src_offset = src_window->offset +
(src_window->nr_pages << PAGE_SHIFT);
scif_init_window_iter(src_window, &src_win_iter);
}
if (dst_offset == end_dst_offset) {
- dst_window = list_entry_next(dst_window, list);
+ dst_window = list_next_entry(dst_window, list);
end_dst_offset = dst_window->offset +
(dst_window->nr_pages << PAGE_SHIFT);
scif_init_window_iter(dst_window, &dst_win_iter);
@@ -1314,13 +1305,13 @@
remaining_len -= tail_len;
while (remaining_len) {
if (src_offset == end_src_offset) {
- src_window = list_entry_next(src_window, list);
+ src_window = list_next_entry(src_window, list);
end_src_offset = src_window->offset +
(src_window->nr_pages << PAGE_SHIFT);
scif_init_window_iter(src_window, &src_win_iter);
}
if (dst_offset == end_dst_offset) {
- dst_window = list_entry_next(dst_window, list);
+ dst_window = list_next_entry(dst_window, list);
end_dst_offset = dst_window->offset +
(dst_window->nr_pages << PAGE_SHIFT);
scif_init_window_iter(dst_window, &dst_win_iter);
@@ -1405,9 +1396,9 @@
if (remaining_len) {
loop_len = remaining_len;
if (src_offset == end_src_offset)
- src_window = list_entry_next(src_window, list);
+ src_window = list_next_entry(src_window, list);
if (dst_offset == end_dst_offset)
- dst_window = list_entry_next(dst_window, list);
+ dst_window = list_next_entry(dst_window, list);
src_dma_addr = __scif_off_to_dma_addr(src_window, src_offset);
dst_dma_addr = __scif_off_to_dma_addr(dst_window, dst_offset);
@@ -1550,12 +1541,12 @@
end_dst_offset = dst_window->offset +
(dst_window->nr_pages << PAGE_SHIFT);
if (src_offset == end_src_offset) {
- src_window = list_entry_next(src_window, list);
+ src_window = list_next_entry(src_window, list);
scif_init_window_iter(src_window,
&src_win_iter);
}
if (dst_offset == end_dst_offset) {
- dst_window = list_entry_next(dst_window, list);
+ dst_window = list_next_entry(dst_window, list);
scif_init_window_iter(dst_window,
&dst_win_iter);
}
@@ -1730,7 +1721,7 @@
mutex_lock(&ep->rma_info.mmn_lock);
mmn = scif_find_mmu_notifier(current->mm, &ep->rma_info);
if (!mmn)
- scif_add_mmu_notifier(current->mm, ep);
+ mmn = scif_add_mmu_notifier(current->mm, ep);
mutex_unlock(&ep->rma_info.mmn_lock);
if (IS_ERR(mmn)) {
scif_put_peer_dev(spdev);
diff --git a/drivers/misc/mic/scif/scif_rma.c b/drivers/misc/mic/scif/scif_rma.c
index 8310b4d..6a451bd 100644
--- a/drivers/misc/mic/scif/scif_rma.c
+++ b/drivers/misc/mic/scif/scif_rma.c
@@ -1511,7 +1511,7 @@
if ((map_flags & SCIF_MAP_FIXED) &&
((ALIGN(offset, PAGE_SIZE) != offset) ||
(offset < 0) ||
- (offset + (off_t)len < offset)))
+ (len > LONG_MAX - offset)))
return -EINVAL;
might_sleep();
@@ -1614,7 +1614,7 @@
if ((map_flags & SCIF_MAP_FIXED) &&
((ALIGN(offset, PAGE_SIZE) != offset) ||
(offset < 0) ||
- (offset + (off_t)len < offset)))
+ (len > LONG_MAX - offset)))
return -EINVAL;
/* Unsupported protection requested */
@@ -1732,7 +1732,8 @@
/* Offset is not page aligned or offset+len wraps around */
if ((ALIGN(offset, PAGE_SIZE) != offset) ||
- (offset + (off_t)len < offset))
+ (offset < 0) ||
+ (len > LONG_MAX - offset))
return -EINVAL;
err = scif_verify_epd(ep);
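
The old wraparound test "offset + (off_t)len < offset" depended on signed overflow actually wrapping, which is undefined behavior a compiler may legally optimize out; "len > LONG_MAX - offset" checks the same condition without ever overflowing, given the accompanying offset < 0 test. As a standalone predicate (sketch):

  #include <linux/kernel.h>

  /* true if [offset, offset + len) does not fit in a long;
   * assumes a negative offset was already rejected */
  static bool range_wraps(long offset, unsigned long len)
  {
          return len > (unsigned long)(LONG_MAX - offset);
  }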
diff --git a/drivers/misc/mic/vop/Makefile b/drivers/misc/mic/vop/Makefile
new file mode 100644
index 0000000..78819c8
--- /dev/null
+++ b/drivers/misc/mic/vop/Makefile
@@ -0,0 +1,9 @@
+#
+# Makefile - Intel MIC Linux driver.
+# Copyright(c) 2016, Intel Corporation.
+#
+obj-m := vop.o
+
+vop-objs += vop_main.o
+vop-objs += vop_debugfs.o
+vop-objs += vop_vringh.o
diff --git a/drivers/misc/mic/vop/vop_debugfs.c b/drivers/misc/mic/vop/vop_debugfs.c
new file mode 100644
index 0000000..ab43884
--- /dev/null
+++ b/drivers/misc/mic/vop/vop_debugfs.c
@@ -0,0 +1,232 @@
+/*
+ * Intel MIC Platform Software Stack (MPSS)
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Intel Virtio Over PCIe (VOP) driver.
+ *
+ */
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+
+#include "vop_main.h"
+
+static int vop_dp_show(struct seq_file *s, void *pos)
+{
+ struct mic_device_desc *d;
+ struct mic_device_ctrl *dc;
+ struct mic_vqconfig *vqconfig;
+ __u32 *features;
+ __u8 *config;
+ struct vop_info *vi = s->private;
+ struct vop_device *vpdev = vi->vpdev;
+ struct mic_bootparam *bootparam = vpdev->hw_ops->get_dp(vpdev);
+ int j, k;
+
+ seq_printf(s, "Bootparam: magic 0x%x\n",
+ bootparam->magic);
+ seq_printf(s, "Bootparam: h2c_config_db %d\n",
+ bootparam->h2c_config_db);
+ seq_printf(s, "Bootparam: node_id %d\n",
+ bootparam->node_id);
+ seq_printf(s, "Bootparam: c2h_scif_db %d\n",
+ bootparam->c2h_scif_db);
+ seq_printf(s, "Bootparam: h2c_scif_db %d\n",
+ bootparam->h2c_scif_db);
+ seq_printf(s, "Bootparam: scif_host_dma_addr 0x%llx\n",
+ bootparam->scif_host_dma_addr);
+ seq_printf(s, "Bootparam: scif_card_dma_addr 0x%llx\n",
+ bootparam->scif_card_dma_addr);
+
+ for (j = sizeof(*bootparam);
+ j < MIC_DP_SIZE; j += mic_total_desc_size(d)) {
+ d = (void *)bootparam + j;
+ dc = (void *)d + mic_aligned_desc_size(d);
+
+ /* end of list */
+ if (d->type == 0)
+ break;
+
+ if (d->type == -1)
+ continue;
+
+ seq_printf(s, "Type %d ", d->type);
+ seq_printf(s, "Num VQ %d ", d->num_vq);
+ seq_printf(s, "Feature Len %d\n", d->feature_len);
+ seq_printf(s, "Config Len %d ", d->config_len);
+ seq_printf(s, "Shutdown Status %d\n", d->status);
+
+ for (k = 0; k < d->num_vq; k++) {
+ vqconfig = mic_vq_config(d) + k;
+ seq_printf(s, "vqconfig[%d]: ", k);
+ seq_printf(s, "address 0x%llx ",
+ vqconfig->address);
+ seq_printf(s, "num %d ", vqconfig->num);
+ seq_printf(s, "used address 0x%llx\n",
+ vqconfig->used_address);
+ }
+
+ features = (__u32 *)mic_vq_features(d);
+ seq_printf(s, "Features: Host 0x%x ", features[0]);
+ seq_printf(s, "Guest 0x%x\n", features[1]);
+
+ config = mic_vq_configspace(d);
+ for (k = 0; k < d->config_len; k++)
+ seq_printf(s, "config[%d]=%d\n", k, config[k]);
+
+ seq_puts(s, "Device control:\n");
+ seq_printf(s, "Config Change %d ", dc->config_change);
+ seq_printf(s, "Vdev reset %d\n", dc->vdev_reset);
+ seq_printf(s, "Guest Ack %d ", dc->guest_ack);
+ seq_printf(s, "Host ack %d\n", dc->host_ack);
+ seq_printf(s, "Used address updated %d ",
+ dc->used_address_updated);
+ seq_printf(s, "Vdev 0x%llx\n", dc->vdev);
+ seq_printf(s, "c2h doorbell %d ", dc->c2h_vdev_db);
+ seq_printf(s, "h2c doorbell %d\n", dc->h2c_vdev_db);
+ }
+ schedule_work(&vi->hotplug_work);
+ return 0;
+}
+
+static int vop_dp_debug_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, vop_dp_show, inode->i_private);
+}
+
+static int vop_dp_debug_release(struct inode *inode, struct file *file)
+{
+ return single_release(inode, file);
+}
+
+static const struct file_operations dp_ops = {
+ .owner = THIS_MODULE,
+ .open = vop_dp_debug_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = vop_dp_debug_release
+};
+
+static int vop_vdev_info_show(struct seq_file *s, void *unused)
+{
+ struct vop_info *vi = s->private;
+ struct list_head *pos, *tmp;
+ struct vop_vdev *vdev;
+ int i, j;
+
+ mutex_lock(&vi->vop_mutex);
+ list_for_each_safe(pos, tmp, &vi->vdev_list) {
+ vdev = list_entry(pos, struct vop_vdev, list);
+ seq_printf(s, "VDEV type %d state %s in %ld out %ld in_dma %ld out_dma %ld\n",
+ vdev->virtio_id,
+ vop_vdevup(vdev) ? "UP" : "DOWN",
+ vdev->in_bytes,
+ vdev->out_bytes,
+ vdev->in_bytes_dma,
+ vdev->out_bytes_dma);
+ for (i = 0; i < MIC_MAX_VRINGS; i++) {
+ struct vring_desc *desc;
+ struct vring_avail *avail;
+ struct vring_used *used;
+ struct vop_vringh *vvr = &vdev->vvr[i];
+ struct vringh *vrh = &vvr->vrh;
+ int num = vrh->vring.num;
+
+ if (!num)
+ continue;
+ desc = vrh->vring.desc;
+ seq_printf(s, "vring i %d avail_idx %d",
+ i, vvr->vring.info->avail_idx & (num - 1));
+ seq_printf(s, " vring i %d avail_idx %d\n",
+ i, vvr->vring.info->avail_idx);
+ seq_printf(s, "vrh i %d weak_barriers %d",
+ i, vrh->weak_barriers);
+ seq_printf(s, " last_avail_idx %d last_used_idx %d",
+ vrh->last_avail_idx, vrh->last_used_idx);
+ seq_printf(s, " completed %d\n", vrh->completed);
+ for (j = 0; j < num; j++) {
+ seq_printf(s, "desc[%d] addr 0x%llx len %d",
+ j, desc->addr, desc->len);
+ seq_printf(s, " flags 0x%x next %d\n",
+ desc->flags, desc->next);
+ desc++;
+ }
+ avail = vrh->vring.avail;
+ seq_printf(s, "avail flags 0x%x idx %d\n",
+ vringh16_to_cpu(vrh, avail->flags),
+ vringh16_to_cpu(vrh,
+ avail->idx) & (num - 1));
+ seq_printf(s, "avail flags 0x%x idx %d\n",
+ vringh16_to_cpu(vrh, avail->flags),
+ vringh16_to_cpu(vrh, avail->idx));
+ for (j = 0; j < num; j++)
+ seq_printf(s, "avail ring[%d] %d\n",
+ j, avail->ring[j]);
+ used = vrh->vring.used;
+ seq_printf(s, "used flags 0x%x idx %d\n",
+ vringh16_to_cpu(vrh, used->flags),
+ vringh16_to_cpu(vrh, used->idx) & (num - 1));
+ seq_printf(s, "used flags 0x%x idx %d\n",
+ vringh16_to_cpu(vrh, used->flags),
+ vringh16_to_cpu(vrh, used->idx));
+ for (j = 0; j < num; j++)
+ seq_printf(s, "used ring[%d] id %d len %d\n",
+ j, vringh32_to_cpu(vrh,
+ used->ring[j].id),
+ vringh32_to_cpu(vrh,
+ used->ring[j].len));
+ }
+ }
+ mutex_unlock(&vi->vop_mutex);
+
+ return 0;
+}
+
+static int vop_vdev_info_debug_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, vop_vdev_info_show, inode->i_private);
+}
+
+static int vop_vdev_info_debug_release(struct inode *inode, struct file *file)
+{
+ return single_release(inode, file);
+}
+
+static const struct file_operations vdev_info_ops = {
+ .owner = THIS_MODULE,
+ .open = vop_vdev_info_debug_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = vop_vdev_info_debug_release
+};
+
+void vop_init_debugfs(struct vop_info *vi)
+{
+ char name[16];
+
+ snprintf(name, sizeof(name), "%s%d", KBUILD_MODNAME, vi->vpdev->dnode);
+ vi->dbg = debugfs_create_dir(name, NULL);
+ if (!vi->dbg) {
+ pr_err("can't create debugfs dir vop\n");
+ return;
+ }
+ debugfs_create_file("dp", 0444, vi->dbg, vi, &dp_ops);
+ debugfs_create_file("vdev_info", 0444, vi->dbg, vi, &vdev_info_ops);
+}
+
+void vop_exit_debugfs(struct vop_info *vi)
+{
+ debugfs_remove_recursive(vi->dbg);
+}
diff --git a/drivers/misc/mic/vop/vop_main.c b/drivers/misc/mic/vop/vop_main.c
new file mode 100644
index 0000000..1a2b67f3
--- /dev/null
+++ b/drivers/misc/mic/vop/vop_main.c
@@ -0,0 +1,755 @@
+/*
+ * Intel MIC Platform Software Stack (MPSS)
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Adapted from:
+ *
+ * virtio for kvm on s390
+ *
+ * Copyright IBM Corp. 2008
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License (version 2 only)
+ * as published by the Free Software Foundation.
+ *
+ * Author(s): Christian Borntraeger <borntraeger@de.ibm.com>
+ *
+ * Intel Virtio Over PCIe (VOP) driver.
+ *
+ */
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/dma-mapping.h>
+
+#include "vop_main.h"
+
+#define VOP_MAX_VRINGS 4
+
+/*
+ * _vop_vdev - Allocated per virtio device instance injected by the peer.
+ *
+ * @vdev: Virtio device
+ * @desc: Virtio device page descriptor
+ * @dc: Virtio device control
+ * @vpdev: VOP device which is the parent for this virtio device
+ * @vr: Buffer for accessing the VRING
+ * @used: Host-side DMA addresses of the reassigned used rings
+ * @used_size: Size of the used buffer
+ * @reset_done: Track whether VOP reset is complete
+ * @virtio_cookie: Cookie returned upon requesting an interrupt
+ * @c2h_vdev_db: The doorbell used by the guest to interrupt the host
+ * @h2c_vdev_db: The doorbell used by the host to interrupt the guest
+ * @dnode: The destination node
+ */
+struct _vop_vdev {
+ struct virtio_device vdev;
+ struct mic_device_desc __iomem *desc;
+ struct mic_device_ctrl __iomem *dc;
+ struct vop_device *vpdev;
+ void __iomem *vr[VOP_MAX_VRINGS];
+ dma_addr_t used[VOP_MAX_VRINGS];
+ int used_size[VOP_MAX_VRINGS];
+ struct completion reset_done;
+ struct mic_irq *virtio_cookie;
+ int c2h_vdev_db;
+ int h2c_vdev_db;
+ int dnode;
+};
+
+#define to_vopvdev(vd) container_of(vd, struct _vop_vdev, vdev)
+
+#define _vop_aligned_desc_size(d) __mic_align(_vop_desc_size(d), 8)
+
+/* Helper API to obtain the parent of the virtio device */
+static inline struct device *_vop_dev(struct _vop_vdev *vdev)
+{
+ return vdev->vdev.dev.parent;
+}
+
+static inline unsigned _vop_desc_size(struct mic_device_desc __iomem *desc)
+{
+ return sizeof(*desc)
+ + ioread8(&desc->num_vq) * sizeof(struct mic_vqconfig)
+ + ioread8(&desc->feature_len) * 2
+ + ioread8(&desc->config_len);
+}
+
+static inline struct mic_vqconfig __iomem *
+_vop_vq_config(struct mic_device_desc __iomem *desc)
+{
+ return (struct mic_vqconfig __iomem *)(desc + 1);
+}
+
+static inline u8 __iomem *
+_vop_vq_features(struct mic_device_desc __iomem *desc)
+{
+ return (u8 __iomem *)(_vop_vq_config(desc) + ioread8(&desc->num_vq));
+}
+
+static inline u8 __iomem *
+_vop_vq_configspace(struct mic_device_desc __iomem *desc)
+{
+ return _vop_vq_features(desc) + ioread8(&desc->feature_len) * 2;
+}
+
+static inline unsigned
+_vop_total_desc_size(struct mic_device_desc __iomem *desc)
+{
+ return _vop_aligned_desc_size(desc) + sizeof(struct mic_device_ctrl);
+}
+
+/* This gets the device's feature bits. */
+static u64 vop_get_features(struct virtio_device *vdev)
+{
+ unsigned int i, bits;
+ u32 features = 0;
+ struct mic_device_desc __iomem *desc = to_vopvdev(vdev)->desc;
+ u8 __iomem *in_features = _vop_vq_features(desc);
+ int feature_len = ioread8(&desc->feature_len);
+
+ bits = min_t(unsigned, feature_len, sizeof(vdev->features)) * 8;
+ for (i = 0; i < bits; i++)
+ if (ioread8(&in_features[i / 8]) & (BIT(i % 8)))
+ features |= BIT(i);
+
+ return features;
+}
+
+static int vop_finalize_features(struct virtio_device *vdev)
+{
+ unsigned int i, bits;
+ struct mic_device_desc __iomem *desc = to_vopvdev(vdev)->desc;
+ u8 feature_len = ioread8(&desc->feature_len);
+ /* Second half of bitmap is features we accept. */
+ u8 __iomem *out_features =
+ _vop_vq_features(desc) + feature_len;
+
+ /* Give virtio_ring a chance to accept features. */
+ vring_transport_features(vdev);
+
+ memset_io(out_features, 0, feature_len);
+ bits = min_t(unsigned, feature_len,
+ sizeof(vdev->features)) * 8;
+ for (i = 0; i < bits; i++) {
+ if (__virtio_test_bit(vdev, i))
+ iowrite8(ioread8(&out_features[i / 8]) | (1 << (i % 8)),
+ &out_features[i / 8]);
+ }
+ return 0;
+}
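
The feature area of the descriptor holds two bitmaps of feature_len bytes each, back to back: the first half is what the device offers, the second what the driver accepted, and vop_finalize_features() above rewrites only that second half. A query helper in the same style as the accessors above (sketch):

  static bool vop_feature_accepted(struct mic_device_desc __iomem *desc,
                                   unsigned int f)
  {
          u8 len = ioread8(&desc->feature_len);
          u8 __iomem *accepted = _vop_vq_features(desc) + len;

          if (f / 8 >= len)
                  return false;
          return ioread8(&accepted[f / 8]) & BIT(f % 8);
  }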
+
+/*
+ * Reading and writing elements in config space
+ */
+static void vop_get(struct virtio_device *vdev, unsigned int offset,
+ void *buf, unsigned len)
+{
+ struct mic_device_desc __iomem *desc = to_vopvdev(vdev)->desc;
+
+ if (offset + len > ioread8(&desc->config_len))
+ return;
+ memcpy_fromio(buf, _vop_vq_configspace(desc) + offset, len);
+}
+
+static void vop_set(struct virtio_device *vdev, unsigned int offset,
+ const void *buf, unsigned len)
+{
+ struct mic_device_desc __iomem *desc = to_vopvdev(vdev)->desc;
+
+ if (offset + len > ioread8(&desc->config_len))
+ return;
+ memcpy_toio(_vop_vq_configspace(desc) + offset, buf, len);
+}
+
+/*
+ * The operations to get and set the status word just access the status
+ * field of the device descriptor. set_status also interrupts the host
+ * to tell about status changes.
+ */
+static u8 vop_get_status(struct virtio_device *vdev)
+{
+ return ioread8(&to_vopvdev(vdev)->desc->status);
+}
+
+static void vop_set_status(struct virtio_device *dev, u8 status)
+{
+ struct _vop_vdev *vdev = to_vopvdev(dev);
+ struct vop_device *vpdev = vdev->vpdev;
+
+ if (!status)
+ return;
+ iowrite8(status, &vdev->desc->status);
+ vpdev->hw_ops->send_intr(vpdev, vdev->c2h_vdev_db);
+}
+
+/* Inform host on a virtio device reset and wait for ack from host */
+static void vop_reset_inform_host(struct virtio_device *dev)
+{
+ struct _vop_vdev *vdev = to_vopvdev(dev);
+ struct mic_device_ctrl __iomem *dc = vdev->dc;
+ struct vop_device *vpdev = vdev->vpdev;
+ int retry;
+
+ iowrite8(0, &dc->host_ack);
+ iowrite8(1, &dc->vdev_reset);
+ vpdev->hw_ops->send_intr(vpdev, vdev->c2h_vdev_db);
+
+ /* Wait till host completes all card accesses and acks the reset */
+ for (retry = 100; retry--;) {
+ if (ioread8(&dc->host_ack))
+ break;
+ msleep(100);
+ };
+
+ dev_dbg(_vop_dev(vdev), "%s: retry: %d\n", __func__, retry);
+
+ /* Reset status to 0 in case we timed out */
+ iowrite8(0, &vdev->desc->status);
+}
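
vop_reset_inform_host() rings the doorbell and then polls host_ack, bounding the wait at 100 tries of 100 ms, roughly ten seconds; vop_find_vqs() further down reuses the same shape for used_address_updated. The handshake reduced to a generic helper (sketch):

  #include <linux/delay.h>
  #include <linux/io.h>

  /* poll an ioremapped ack byte until the peer sets it or we give up */
  static bool poll_ack(u8 __iomem *flag, int tries, unsigned int ms)
  {
          while (tries--) {
                  if (ioread8(flag))
                          return true;
                  msleep(ms);
          }
          return false;
  }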
+
+static void vop_reset(struct virtio_device *dev)
+{
+ struct _vop_vdev *vdev = to_vopvdev(dev);
+
+ dev_dbg(_vop_dev(vdev), "%s: virtio id %d\n",
+ __func__, dev->id.device);
+
+ vop_reset_inform_host(dev);
+ complete_all(&vdev->reset_done);
+}
+
+/*
+ * The virtio_ring code calls this API when it wants to notify the Host.
+ */
+static bool vop_notify(struct virtqueue *vq)
+{
+ struct _vop_vdev *vdev = vq->priv;
+ struct vop_device *vpdev = vdev->vpdev;
+
+ vpdev->hw_ops->send_intr(vpdev, vdev->c2h_vdev_db);
+ return true;
+}
+
+static void vop_del_vq(struct virtqueue *vq, int n)
+{
+ struct _vop_vdev *vdev = to_vopvdev(vq->vdev);
+ struct vring *vr = (struct vring *)(vq + 1);
+ struct vop_device *vpdev = vdev->vpdev;
+
+ dma_unmap_single(&vpdev->dev, vdev->used[n],
+ vdev->used_size[n], DMA_BIDIRECTIONAL);
+ free_pages((unsigned long)vr->used, get_order(vdev->used_size[n]));
+ vring_del_virtqueue(vq);
+ vpdev->hw_ops->iounmap(vpdev, vdev->vr[n]);
+ vdev->vr[n] = NULL;
+}
+
+static void vop_del_vqs(struct virtio_device *dev)
+{
+ struct _vop_vdev *vdev = to_vopvdev(dev);
+ struct virtqueue *vq, *n;
+ int idx = 0;
+
+ dev_dbg(_vop_dev(vdev), "%s\n", __func__);
+
+ list_for_each_entry_safe(vq, n, &dev->vqs, list)
+ vop_del_vq(vq, idx++);
+}
+
+/*
+ * This routine assigns vrings allocated in host I/O memory. Code in
+ * virtio_ring.c, however, continues to access this I/O memory as if it
+ * were local memory, without I/O accessors.
+ */
+static struct virtqueue *vop_find_vq(struct virtio_device *dev,
+ unsigned index,
+ void (*callback)(struct virtqueue *vq),
+ const char *name)
+{
+ struct _vop_vdev *vdev = to_vopvdev(dev);
+ struct vop_device *vpdev = vdev->vpdev;
+ struct mic_vqconfig __iomem *vqconfig;
+ struct mic_vqconfig config;
+ struct virtqueue *vq;
+ void __iomem *va;
+ struct _mic_vring_info __iomem *info;
+ void *used;
+ int vr_size, _vr_size, err, magic;
+ struct vring *vr;
+ u8 type = ioread8(&vdev->desc->type);
+
+ if (index >= ioread8(&vdev->desc->num_vq))
+ return ERR_PTR(-ENOENT);
+
+ if (!name)
+ return ERR_PTR(-ENOENT);
+
+	/* First assign the vrings allocated in host memory */
+ vqconfig = _vop_vq_config(vdev->desc) + index;
+ memcpy_fromio(&config, vqconfig, sizeof(config));
+ _vr_size = vring_size(le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN);
+ vr_size = PAGE_ALIGN(_vr_size + sizeof(struct _mic_vring_info));
+ va = vpdev->hw_ops->ioremap(vpdev, le64_to_cpu(config.address),
+ vr_size);
+ if (!va)
+ return ERR_PTR(-ENOMEM);
+ vdev->vr[index] = va;
+ memset_io(va, 0x0, _vr_size);
+ vq = vring_new_virtqueue(
+ index,
+ le16_to_cpu(config.num), MIC_VIRTIO_RING_ALIGN,
+ dev,
+ false,
+ (void __force *)va, vop_notify, callback, name);
+ if (!vq) {
+ err = -ENOMEM;
+ goto unmap;
+ }
+ info = va + _vr_size;
+ magic = ioread32(&info->magic);
+
+ if (WARN(magic != MIC_MAGIC + type + index, "magic mismatch")) {
+ err = -EIO;
+ goto unmap;
+ }
+
+ /* Allocate and reassign used ring now */
+ vdev->used_size[index] = PAGE_ALIGN(sizeof(__u16) * 3 +
+ sizeof(struct vring_used_elem) *
+ le16_to_cpu(config.num));
+ used = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ get_order(vdev->used_size[index]));
+ if (!used) {
+ err = -ENOMEM;
+ dev_err(_vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto del_vq;
+ }
+ vdev->used[index] = dma_map_single(&vpdev->dev, used,
+ vdev->used_size[index],
+ DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(&vpdev->dev, vdev->used[index])) {
+ err = -ENOMEM;
+ dev_err(_vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto free_used;
+ }
+ writeq(vdev->used[index], &vqconfig->used_address);
+ /*
+ * To reassign the used ring here we are directly accessing
+ * struct vring_virtqueue which is a private data structure
+ * in virtio_ring.c. At the minimum, a BUILD_BUG_ON() in
+ * vring_new_virtqueue() would ensure that
+ * (&vq->vring == (struct vring *) (&vq->vq + 1));
+ */
+ vr = (struct vring *)(vq + 1);
+ vr->used = used;
+
+ vq->priv = vdev;
+ return vq;
+free_used:
+ free_pages((unsigned long)used,
+ get_order(vdev->used_size[index]));
+del_vq:
+ vring_del_virtqueue(vq);
+unmap:
+ vpdev->hw_ops->iounmap(vpdev, vdev->vr[index]);
+ return ERR_PTR(err);
+}
+
+static int vop_find_vqs(struct virtio_device *dev, unsigned nvqs,
+ struct virtqueue *vqs[],
+ vq_callback_t *callbacks[],
+ const char * const names[])
+{
+ struct _vop_vdev *vdev = to_vopvdev(dev);
+ struct vop_device *vpdev = vdev->vpdev;
+ struct mic_device_ctrl __iomem *dc = vdev->dc;
+ int i, err, retry;
+
+ /* We must have this many virtqueues. */
+ if (nvqs > ioread8(&vdev->desc->num_vq))
+ return -ENOENT;
+
+ for (i = 0; i < nvqs; ++i) {
+ dev_dbg(_vop_dev(vdev), "%s: %d: %s\n",
+ __func__, i, names[i]);
+ vqs[i] = vop_find_vq(dev, i, callbacks[i], names[i]);
+ if (IS_ERR(vqs[i])) {
+ err = PTR_ERR(vqs[i]);
+ goto error;
+ }
+ }
+
+ iowrite8(1, &dc->used_address_updated);
+ /*
+ * Send an interrupt to the host to inform it that used
+ * rings have been re-assigned.
+ */
+ vpdev->hw_ops->send_intr(vpdev, vdev->c2h_vdev_db);
+ for (retry = 100; --retry;) {
+ if (!ioread8(&dc->used_address_updated))
+ break;
+ msleep(100);
+ };
+
+ dev_dbg(_vop_dev(vdev), "%s: retry: %d\n", __func__, retry);
+ if (!retry) {
+ err = -ENODEV;
+ goto error;
+ }
+
+ return 0;
+error:
+ vop_del_vqs(dev);
+ return err;
+}
+
+/*
+ * The config ops structure as defined by virtio config
+ */
+static struct virtio_config_ops vop_vq_config_ops = {
+ .get_features = vop_get_features,
+ .finalize_features = vop_finalize_features,
+ .get = vop_get,
+ .set = vop_set,
+ .get_status = vop_get_status,
+ .set_status = vop_set_status,
+ .reset = vop_reset,
+ .find_vqs = vop_find_vqs,
+ .del_vqs = vop_del_vqs,
+};
+
+static irqreturn_t vop_virtio_intr_handler(int irq, void *data)
+{
+ struct _vop_vdev *vdev = data;
+ struct vop_device *vpdev = vdev->vpdev;
+ struct virtqueue *vq;
+
+ vpdev->hw_ops->ack_interrupt(vpdev, vdev->h2c_vdev_db);
+ list_for_each_entry(vq, &vdev->vdev.vqs, list)
+ vring_interrupt(0, vq);
+
+ return IRQ_HANDLED;
+}
+
+static void vop_virtio_release_dev(struct device *_d)
+{
+ /*
+ * No need for a release method similar to virtio PCI.
+ * Provide an empty one to avoid getting a warning from core.
+ */
+}
+
+/*
+ * adds a new device and registers it with virtio;
+ * the appropriate driver is then loaded by the device model
+ */
+static int _vop_add_device(struct mic_device_desc __iomem *d,
+ unsigned int offset, struct vop_device *vpdev,
+ int dnode)
+{
+ struct _vop_vdev *vdev;
+ int ret;
+ u8 type = ioread8(&d->type);
+
+ vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+ if (!vdev)
+ return -ENOMEM;
+
+ vdev->vpdev = vpdev;
+ vdev->vdev.dev.parent = &vpdev->dev;
+ vdev->vdev.dev.release = vop_virtio_release_dev;
+ vdev->vdev.id.device = type;
+ vdev->vdev.config = &vop_vq_config_ops;
+ vdev->desc = d;
+ vdev->dc = (void __iomem *)d + _vop_aligned_desc_size(d);
+ vdev->dnode = dnode;
+ vdev->vdev.priv = (void *)(u64)dnode;
+ init_completion(&vdev->reset_done);
+
+ vdev->h2c_vdev_db = vpdev->hw_ops->next_db(vpdev);
+ vdev->virtio_cookie = vpdev->hw_ops->request_irq(vpdev,
+ vop_virtio_intr_handler, "virtio intr",
+ vdev, vdev->h2c_vdev_db);
+ if (IS_ERR(vdev->virtio_cookie)) {
+ ret = PTR_ERR(vdev->virtio_cookie);
+ goto kfree;
+ }
+ iowrite8((u8)vdev->h2c_vdev_db, &vdev->dc->h2c_vdev_db);
+ vdev->c2h_vdev_db = ioread8(&vdev->dc->c2h_vdev_db);
+
+ ret = register_virtio_device(&vdev->vdev);
+ if (ret) {
+ dev_err(_vop_dev(vdev),
+ "Failed to register vop device %u type %u\n",
+ offset, type);
+ goto free_irq;
+ }
+ writeq((u64)vdev, &vdev->dc->vdev);
+ dev_dbg(_vop_dev(vdev), "%s: registered vop device %u type %u vdev %p\n",
+ __func__, offset, type, vdev);
+
+ return 0;
+
+free_irq:
+ vpdev->hw_ops->free_irq(vpdev, vdev->virtio_cookie, vdev);
+kfree:
+ kfree(vdev);
+ return ret;
+}
+
+/*
+ * Match a vop device against a specific desc pointer.
+ */
+static int vop_match_desc(struct device *dev, void *data)
+{
+ struct virtio_device *_dev = dev_to_virtio(dev);
+ struct _vop_vdev *vdev = to_vopvdev(_dev);
+
+ return vdev->desc == (void __iomem *)data;
+}
+
+static void _vop_handle_config_change(struct mic_device_desc __iomem *d,
+ unsigned int offset,
+ struct vop_device *vpdev)
+{
+ struct mic_device_ctrl __iomem *dc
+ = (void __iomem *)d + _vop_aligned_desc_size(d);
+ struct _vop_vdev *vdev = (struct _vop_vdev *)readq(&dc->vdev);
+
+ if (ioread8(&dc->config_change) != MIC_VIRTIO_PARAM_CONFIG_CHANGED)
+ return;
+
+ dev_dbg(&vpdev->dev, "%s %d\n", __func__, __LINE__);
+ virtio_config_changed(&vdev->vdev);
+ iowrite8(1, &dc->guest_ack);
+}
+
+/*
+ * Removes a virtio device if a hot-remove event has been
+ * requested by the host.
+ */
+static int _vop_remove_device(struct mic_device_desc __iomem *d,
+ unsigned int offset, struct vop_device *vpdev)
+{
+ struct mic_device_ctrl __iomem *dc
+ = (void __iomem *)d + _vop_aligned_desc_size(d);
+ struct _vop_vdev *vdev = (struct _vop_vdev *)readq(&dc->vdev);
+ u8 status;
+ int ret = -1;
+
+ if (ioread8(&dc->config_change) == MIC_VIRTIO_PARAM_DEV_REMOVE) {
+ dev_dbg(&vpdev->dev,
+ "%s %d config_change %d type %d vdev %p\n",
+ __func__, __LINE__,
+ ioread8(&dc->config_change), ioread8(&d->type), vdev);
+ status = ioread8(&d->status);
+ reinit_completion(&vdev->reset_done);
+ unregister_virtio_device(&vdev->vdev);
+ vpdev->hw_ops->free_irq(vpdev, vdev->virtio_cookie, vdev);
+ iowrite8(-1, &dc->h2c_vdev_db);
+ if (status & VIRTIO_CONFIG_S_DRIVER_OK)
+ wait_for_completion(&vdev->reset_done);
+ kfree(vdev);
+ iowrite8(1, &dc->guest_ack);
+ dev_dbg(&vpdev->dev, "%s %d guest_ack %d\n",
+ __func__, __LINE__, ioread8(&dc->guest_ack));
+ iowrite8(-1, &d->type);
+ ret = 0;
+ }
+ return ret;
+}
+
+#define REMOVE_DEVICES true
+
+static void _vop_scan_devices(void __iomem *dp, struct vop_device *vpdev,
+ bool remove, int dnode)
+{
+ s8 type;
+ unsigned int i;
+ struct mic_device_desc __iomem *d;
+ struct mic_device_ctrl __iomem *dc;
+ struct device *dev;
+ int ret;
+
+ for (i = sizeof(struct mic_bootparam);
+ i < MIC_DP_SIZE; i += _vop_total_desc_size(d)) {
+ d = dp + i;
+ dc = (void __iomem *)d + _vop_aligned_desc_size(d);
+ /*
+ * This read barrier is paired with the corresponding write
+ * barrier on the host which is inserted before adding or
+ * removing a virtio device descriptor, by updating the type.
+ */
+ rmb();
+ type = ioread8(&d->type);
+
+ /* end of list */
+ if (type == 0)
+ break;
+
+ if (type == -1)
+ continue;
+
+ /* device already exists */
+ dev = device_find_child(&vpdev->dev, (void __force *)d,
+ vop_match_desc);
+ if (dev) {
+ if (remove)
+ iowrite8(MIC_VIRTIO_PARAM_DEV_REMOVE,
+ &dc->config_change);
+ put_device(dev);
+ _vop_handle_config_change(d, i, vpdev);
+ ret = _vop_remove_device(d, i, vpdev);
+ if (remove) {
+ iowrite8(0, &dc->config_change);
+ iowrite8(0, &dc->guest_ack);
+ }
+ continue;
+ }
+
+ /* new device */
+ dev_dbg(&vpdev->dev, "%s %d Adding new virtio device %p\n",
+ __func__, __LINE__, d);
+ if (!remove)
+ _vop_add_device(d, i, vpdev, dnode);
+ }
+}
+
+static void vop_scan_devices(struct vop_info *vi,
+ struct vop_device *vpdev, bool remove)
+{
+ void __iomem *dp = vpdev->hw_ops->get_remote_dp(vpdev);
+
+ if (!dp)
+ return;
+ mutex_lock(&vi->vop_mutex);
+ _vop_scan_devices(dp, vpdev, remove, vpdev->dnode);
+ mutex_unlock(&vi->vop_mutex);
+}
+
+/*
+ * vop_hotplug_devices() scans the device page for changes.
+ */
+static void vop_hotplug_devices(struct work_struct *work)
+{
+ struct vop_info *vi = container_of(work, struct vop_info,
+ hotplug_work);
+
+ vop_scan_devices(vi, vi->vpdev, !REMOVE_DEVICES);
+}
+
+/*
+ * Interrupt handler for hot plug/config changes etc.
+ */
+static irqreturn_t vop_extint_handler(int irq, void *data)
+{
+ struct vop_info *vi = data;
+ struct mic_bootparam __iomem *bp;
+ struct vop_device *vpdev = vi->vpdev;
+
+ bp = vpdev->hw_ops->get_remote_dp(vpdev);
+ dev_dbg(&vpdev->dev, "%s %d hotplug work\n",
+ __func__, __LINE__);
+ vpdev->hw_ops->ack_interrupt(vpdev, ioread8(&bp->h2c_config_db));
+ schedule_work(&vi->hotplug_work);
+ return IRQ_HANDLED;
+}
+
+static int vop_driver_probe(struct vop_device *vpdev)
+{
+ struct vop_info *vi;
+ int rc;
+
+ vi = kzalloc(sizeof(*vi), GFP_KERNEL);
+ if (!vi) {
+ rc = -ENOMEM;
+ goto exit;
+ }
+ dev_set_drvdata(&vpdev->dev, vi);
+ vi->vpdev = vpdev;
+
+ mutex_init(&vi->vop_mutex);
+ INIT_WORK(&vi->hotplug_work, vop_hotplug_devices);
+ if (vpdev->dnode) {
+ rc = vop_host_init(vi);
+ if (rc < 0)
+ goto free;
+ } else {
+ struct mic_bootparam __iomem *bootparam;
+
+ vop_scan_devices(vi, vpdev, !REMOVE_DEVICES);
+
+ vi->h2c_config_db = vpdev->hw_ops->next_db(vpdev);
+ vi->cookie = vpdev->hw_ops->request_irq(vpdev,
+ vop_extint_handler,
+ "virtio_config_intr",
+ vi, vi->h2c_config_db);
+ if (IS_ERR(vi->cookie)) {
+ rc = PTR_ERR(vi->cookie);
+ goto free;
+ }
+ bootparam = vpdev->hw_ops->get_remote_dp(vpdev);
+ iowrite8(vi->h2c_config_db, &bootparam->h2c_config_db);
+ }
+ vop_init_debugfs(vi);
+ return 0;
+free:
+ kfree(vi);
+exit:
+ return rc;
+}
+
+static void vop_driver_remove(struct vop_device *vpdev)
+{
+ struct vop_info *vi = dev_get_drvdata(&vpdev->dev);
+
+ if (vpdev->dnode) {
+ vop_host_uninit(vi);
+ } else {
+ struct mic_bootparam __iomem *bootparam =
+ vpdev->hw_ops->get_remote_dp(vpdev);
+ if (bootparam)
+ iowrite8(-1, &bootparam->h2c_config_db);
+ vpdev->hw_ops->free_irq(vpdev, vi->cookie, vi);
+ flush_work(&vi->hotplug_work);
+ vop_scan_devices(vi, vpdev, REMOVE_DEVICES);
+ }
+ vop_exit_debugfs(vi);
+ kfree(vi);
+}
+
+static struct vop_device_id id_table[] = {
+ { VOP_DEV_TRNSP, VOP_DEV_ANY_ID },
+ { 0 },
+};
+
+static struct vop_driver vop_driver = {
+ .driver.name = KBUILD_MODNAME,
+ .driver.owner = THIS_MODULE,
+ .id_table = id_table,
+ .probe = vop_driver_probe,
+ .remove = vop_driver_remove,
+};
+
+module_vop_driver(vop_driver);
+
+MODULE_DEVICE_TABLE(mbus, id_table);
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("Intel(R) Virtio Over PCIe (VOP) driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/mic/vop/vop_main.h b/drivers/misc/mic/vop/vop_main.h
new file mode 100644
index 0000000..ba47ec7
--- /dev/null
+++ b/drivers/misc/mic/vop/vop_main.h
@@ -0,0 +1,170 @@
+/*
+ * Intel MIC Platform Software Stack (MPSS)
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Intel Virtio Over PCIe (VOP) driver.
+ *
+ */
+#ifndef _VOP_MAIN_H_
+#define _VOP_MAIN_H_
+
+#include <linux/vringh.h>
+#include <linux/virtio_config.h>
+#include <linux/virtio.h>
+#include <linux/miscdevice.h>
+
+#include <linux/mic_common.h>
+#include "../common/mic_dev.h"
+
+#include "../bus/vop_bus.h"
+
+/*
+ * Note on endianness.
+ * 1. The host can be either BE or LE.
+ * 2. The guest/card is LE. The host uses leXX_to_cpu to access the
+ * desc/avail rings and ioreadXX/iowriteXX to access the used ring.
+ * 3. The device page exposed by the host to the guest contains LE
+ * values. The guest accesses these using ioreadXX/iowriteXX etc. This
+ * obeys the virtio spec, under which the guest works in native
+ * endianness while the host is aware of the guest's endianness and
+ * performs all required endianness conversions.
+ * 4. Data provided from user space to the guest (in the ADD_DEVICE and
+ * CONFIG_CHANGE ioctls) is not interpreted by the driver and should be
+ * in guest endianness.
+ */
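+
+/*
+ * A minimal sketch of what this convention implies (illustrative only,
+ * not part of the driver logic): the host converts descriptor fields it
+ * reads from the device page, e.g.
+ *
+ * num = le16_to_cpu(vqconfig[i].num);
+ *
+ * while the guest reads the same page through MMIO accessors, e.g.
+ *
+ * type = ioread8(&d->type);
+ */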
+
+/**
+ * struct vop_info - Allocated per invocation of VOP probe
+ *
+ * @vpdev: VOP device
+ * @hotplug_work: Handle virtio device creation, deletion and configuration
+ * @cookie: Cookie received upon requesting a virtio configuration interrupt
+ * @h2c_config_db: The doorbell used by the peer to indicate a config change
+ * @vdev_list: List of "active" virtio devices injected in the peer node
+ * @vop_mutex: Synchronize access to the device page as well as serialize
+ * creation/deletion of virtio devices on the peer node
+ * @dp: Peer device page information
+ * @dbg: Debugfs entry
+ * @dma_ch: The DMA channel used by this transport for data transfers.
+ * @name: Name for this transport used in misc device creation.
+ * @miscdev: The misc device registered.
+ */
+struct vop_info {
+ struct vop_device *vpdev;
+ struct work_struct hotplug_work;
+ struct mic_irq *cookie;
+ int h2c_config_db;
+ struct list_head vdev_list;
+ struct mutex vop_mutex;
+ void __iomem *dp;
+ struct dentry *dbg;
+ struct dma_chan *dma_ch;
+ char name[16];
+ struct miscdevice miscdev;
+};
+
+/**
+ * struct vop_vringh - Virtio ring host information.
+ *
+ * @vring: The VOP vring used for setting up user space mappings.
+ * @vrh: The host VRINGH used for accessing the card vrings.
+ * @riov: The VRINGH read kernel IOV.
+ * @wiov: The VRINGH write kernel IOV.
+ * @head: The VRINGH head index address passed to vringh_getdesc_kern(..).
+ * @vr_mutex: Mutex for synchronizing access to the VRING.
+ * @buf: Temporary kernel buffer used to copy in/out data
+ * from/to the card via DMA.
+ * @buf_da: dma address of buf.
+ * @vdev: Back pointer to VOP virtio device for vringh_notify(..).
+ */
+struct vop_vringh {
+ struct mic_vring vring;
+ struct vringh vrh;
+ struct vringh_kiov riov;
+ struct vringh_kiov wiov;
+ u16 head;
+ struct mutex vr_mutex;
+ void *buf;
+ dma_addr_t buf_da;
+ struct vop_vdev *vdev;
+};
+
+/**
+ * struct vop_vdev - Host information for a card Virtio device.
+ *
+ * @virtio_id: Virtio device id.
+ * @waitq: Waitqueue to allow ring3 apps to poll.
+ * @vpdev: pointer to VOP bus device.
+ * @poll_wake: Used for waking up threads blocked in poll.
+ * @out_bytes: Debug stats for number of bytes copied from host to card.
+ * @in_bytes: Debug stats for number of bytes copied from card to host.
+ * @out_bytes_dma: Debug stats for number of bytes copied from host to card
+ * using DMA.
+ * @in_bytes_dma: Debug stats for number of bytes copied from card to host
+ * using DMA.
+ * @tx_len_unaligned: Debug stats for number of bytes copied to the card where
+ * the transfer length did not have the required DMA alignment.
+ * @tx_dst_unaligned: Debug stats for number of bytes copied where the
+ * destination address on the card did not have the required DMA alignment.
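+ * @rx_dst_unaligned: Debug stats for number of bytes copied where the
+ * destination address on the host did not have the required DMA alignment
+ * (description inferred by symmetry with the tx counters above).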
+ * @vvr: Store per VRING data structures.
+ * @virtio_bh_work: Work struct used to schedule virtio bottom half handling.
+ * @dd: Virtio device descriptor.
+ * @dc: Virtio device control fields.
+ * @list: List of Virtio devices.
+ * @virtio_db: The doorbell used by the card to interrupt the host.
+ * @virtio_cookie: The cookie returned while requesting interrupts.
+ * @vi: Transport information.
+ * @vdev_mutex: Mutex synchronizing virtio device injection,
+ * removal and data transfers.
+ * @destroy: Track if a virtio device is being destroyed.
+ * @deleted: The virtio device has been deleted.
+ */
+struct vop_vdev {
+ int virtio_id;
+ wait_queue_head_t waitq;
+ struct vop_device *vpdev;
+ int poll_wake;
+ unsigned long out_bytes;
+ unsigned long in_bytes;
+ unsigned long out_bytes_dma;
+ unsigned long in_bytes_dma;
+ unsigned long tx_len_unaligned;
+ unsigned long tx_dst_unaligned;
+ unsigned long rx_dst_unaligned;
+ struct vop_vringh vvr[MIC_MAX_VRINGS];
+ struct work_struct virtio_bh_work;
+ struct mic_device_desc *dd;
+ struct mic_device_ctrl *dc;
+ struct list_head list;
+ int virtio_db;
+ struct mic_irq *virtio_cookie;
+ struct vop_info *vi;
+ struct mutex vdev_mutex;
+ struct completion destroy;
+ bool deleted;
+};
+
+/* Helper API to check if a virtio device is running */
+static inline bool vop_vdevup(struct vop_vdev *vdev)
+{
+ return !!vdev->dd->status;
+}
+
+void vop_init_debugfs(struct vop_info *vi);
+void vop_exit_debugfs(struct vop_info *vi);
+int vop_host_init(struct vop_info *vi);
+void vop_host_uninit(struct vop_info *vi);
+#endif
diff --git a/drivers/misc/mic/vop/vop_vringh.c b/drivers/misc/mic/vop/vop_vringh.c
new file mode 100644
index 0000000..e94c7fb
--- /dev/null
+++ b/drivers/misc/mic/vop/vop_vringh.c
@@ -0,0 +1,1165 @@
+/*
+ * Intel MIC Platform Software Stack (MPSS)
+ *
+ * Copyright(c) 2016 Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * Intel Virtio Over PCIe (VOP) driver.
+ *
+ */
+#include <linux/sched.h>
+#include <linux/poll.h>
+#include <linux/dma-mapping.h>
+
+#include <linux/mic_common.h>
+#include "../common/mic_dev.h"
+
+#include <linux/mic_ioctl.h>
+#include "vop_main.h"
+
+/* Helper API to obtain the VOP PCIe device */
+static inline struct device *vop_dev(struct vop_vdev *vdev)
+{
+ return vdev->vpdev->dev.parent;
+}
+
+/* Helper API to check if a virtio device is initialized */
+static inline int vop_vdev_inited(struct vop_vdev *vdev)
+{
+ if (!vdev)
+ return -EINVAL;
+ /* Device has not been created yet */
+ if (!vdev->dd || !vdev->dd->type) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, -EINVAL);
+ return -EINVAL;
+ }
+ /* Device has been removed/deleted */
+ if (vdev->dd->type == -1) {
+ dev_dbg(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, -ENODEV);
+ return -ENODEV;
+ }
+ return 0;
+}
+
+static void _vop_notify(struct vringh *vrh)
+{
+ struct vop_vringh *vvrh = container_of(vrh, struct vop_vringh, vrh);
+ struct vop_vdev *vdev = vvrh->vdev;
+ struct vop_device *vpdev = vdev->vpdev;
+ s8 db = vdev->dc->h2c_vdev_db;
+
+ if (db != -1)
+ vpdev->hw_ops->send_intr(vpdev, db);
+}
+
+static void vop_virtio_init_post(struct vop_vdev *vdev)
+{
+ struct mic_vqconfig *vqconfig = mic_vq_config(vdev->dd);
+ struct vop_device *vpdev = vdev->vpdev;
+ int i, used_size;
+
+ for (i = 0; i < vdev->dd->num_vq; i++) {
+ used_size = PAGE_ALIGN(sizeof(u16) * 3 +
+ sizeof(struct vring_used_elem) *
+ le16_to_cpu(vqconfig[i].num));
+ if (!le64_to_cpu(vqconfig[i].used_address)) {
+ dev_warn(vop_dev(vdev), "used_address zero??\n");
+ continue;
+ }
+ vdev->vvr[i].vrh.vring.used =
+ (void __force *)vpdev->hw_ops->ioremap(
+ vpdev,
+ le64_to_cpu(vqconfig[i].used_address),
+ used_size);
+ }
+
+ vdev->dc->used_address_updated = 0;
+
+ dev_info(vop_dev(vdev), "%s: device type %d LINKUP\n",
+ __func__, vdev->virtio_id);
+}
+
+static inline void vop_virtio_device_reset(struct vop_vdev *vdev)
+{
+ int i;
+
+ dev_dbg(vop_dev(vdev), "%s: status %d device type %d RESET\n",
+ __func__, vdev->dd->status, vdev->virtio_id);
+
+ for (i = 0; i < vdev->dd->num_vq; i++)
+ /*
+ * Avoid lockdep false positive. The + 1 is for the vop
+ * mutex which is held in the reset devices code path.
+ */
+ mutex_lock_nested(&vdev->vvr[i].vr_mutex, i + 1);
+
+ /* 0 status means "reset" */
+ vdev->dd->status = 0;
+ vdev->dc->vdev_reset = 0;
+ vdev->dc->host_ack = 1;
+
+ for (i = 0; i < vdev->dd->num_vq; i++) {
+ struct vringh *vrh = &vdev->vvr[i].vrh;
+
+ vdev->vvr[i].vring.info->avail_idx = 0;
+ vrh->completed = 0;
+ vrh->last_avail_idx = 0;
+ vrh->last_used_idx = 0;
+ }
+
+ for (i = 0; i < vdev->dd->num_vq; i++)
+ mutex_unlock(&vdev->vvr[i].vr_mutex);
+}
+
+static void vop_virtio_reset_devices(struct vop_info *vi)
+{
+ struct list_head *pos, *tmp;
+ struct vop_vdev *vdev;
+
+ list_for_each_safe(pos, tmp, &vi->vdev_list) {
+ vdev = list_entry(pos, struct vop_vdev, list);
+ vop_virtio_device_reset(vdev);
+ vdev->poll_wake = 1;
+ wake_up(&vdev->waitq);
+ }
+}
+
+static void vop_bh_handler(struct work_struct *work)
+{
+ struct vop_vdev *vdev = container_of(work, struct vop_vdev,
+ virtio_bh_work);
+
+ if (vdev->dc->used_address_updated)
+ vop_virtio_init_post(vdev);
+
+ if (vdev->dc->vdev_reset)
+ vop_virtio_device_reset(vdev);
+
+ vdev->poll_wake = 1;
+ wake_up(&vdev->waitq);
+}
+
+static irqreturn_t _vop_virtio_intr_handler(int irq, void *data)
+{
+ struct vop_vdev *vdev = data;
+ struct vop_device *vpdev = vdev->vpdev;
+
+ vpdev->hw_ops->ack_interrupt(vpdev, vdev->virtio_db);
+ schedule_work(&vdev->virtio_bh_work);
+ return IRQ_HANDLED;
+}
+
+static int vop_virtio_config_change(struct vop_vdev *vdev, void *argp)
+{
+ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wake);
+ int ret = 0, retry, i;
+ struct vop_device *vpdev = vdev->vpdev;
+ struct vop_info *vi = dev_get_drvdata(&vpdev->dev);
+ struct mic_bootparam *bootparam = vpdev->hw_ops->get_dp(vpdev);
+ s8 db = bootparam->h2c_config_db;
+
+ mutex_lock(&vi->vop_mutex);
+ for (i = 0; i < vdev->dd->num_vq; i++)
+ mutex_lock_nested(&vdev->vvr[i].vr_mutex, i + 1);
+
+ if (db == -1 || vdev->dd->type == -1) {
+ ret = -EIO;
+ goto exit;
+ }
+
+ memcpy(mic_vq_configspace(vdev->dd), argp, vdev->dd->config_len);
+ vdev->dc->config_change = MIC_VIRTIO_PARAM_CONFIG_CHANGED;
+ vpdev->hw_ops->send_intr(vpdev, db);
+
+ for (retry = 100; retry--;) {
+ ret = wait_event_timeout(wake, vdev->dc->guest_ack,
+ msecs_to_jiffies(100));
+ if (ret)
+ break;
+ }
+
+ dev_dbg(vop_dev(vdev),
+ "%s %d retry: %d\n", __func__, __LINE__, retry);
+ vdev->dc->config_change = 0;
+ vdev->dc->guest_ack = 0;
+exit:
+ for (i = 0; i < vdev->dd->num_vq; i++)
+ mutex_unlock(&vdev->vvr[i].vr_mutex);
+ mutex_unlock(&vi->vop_mutex);
+ return ret;
+}
+
+static int vop_copy_dp_entry(struct vop_vdev *vdev,
+ struct mic_device_desc *argp, __u8 *type,
+ struct mic_device_desc **devpage)
+{
+ struct vop_device *vpdev = vdev->vpdev;
+ struct mic_device_desc *devp;
+ struct mic_vqconfig *vqconfig;
+ int ret = 0, i;
+ bool slot_found = false;
+
+ vqconfig = mic_vq_config(argp);
+ for (i = 0; i < argp->num_vq; i++) {
+ if (le16_to_cpu(vqconfig[i].num) > MIC_MAX_VRING_ENTRIES) {
+ ret = -EINVAL;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ goto exit;
+ }
+ }
+
+ /* Find the first free device page entry */
+ for (i = sizeof(struct mic_bootparam);
+ i < MIC_DP_SIZE - mic_total_desc_size(argp);
+ i += mic_total_desc_size(devp)) {
+ devp = vpdev->hw_ops->get_dp(vpdev) + i;
+ if (devp->type == 0 || devp->type == -1) {
+ slot_found = true;
+ break;
+ }
+ }
+ if (!slot_found) {
+ ret = -EINVAL;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ goto exit;
+ }
+ /*
+ * Save off the type before doing the memcpy. The type will be set at
+ * the end, after completing all initialization for the new device.
+ */
+ *type = argp->type;
+ argp->type = 0;
+ memcpy(devp, argp, mic_desc_size(argp));
+
+ *devpage = devp;
+exit:
+ return ret;
+}
+
+static void vop_init_device_ctrl(struct vop_vdev *vdev,
+ struct mic_device_desc *devpage)
+{
+ struct mic_device_ctrl *dc;
+
+ dc = (void *)devpage + mic_aligned_desc_size(devpage);
+
+ dc->config_change = 0;
+ dc->guest_ack = 0;
+ dc->vdev_reset = 0;
+ dc->host_ack = 0;
+ dc->used_address_updated = 0;
+ dc->c2h_vdev_db = -1;
+ dc->h2c_vdev_db = -1;
+ vdev->dc = dc;
+}
+
+static int vop_virtio_add_device(struct vop_vdev *vdev,
+ struct mic_device_desc *argp)
+{
+ struct vop_info *vi = vdev->vi;
+ struct vop_device *vpdev = vi->vpdev;
+ struct mic_device_desc *dd = NULL;
+ struct mic_vqconfig *vqconfig;
+ int vr_size, i, j, ret;
+ u8 type = 0;
+ s8 db = -1;
+ char irqname[16];
+ struct mic_bootparam *bootparam;
+ u16 num;
+ dma_addr_t vr_addr;
+
+ bootparam = vpdev->hw_ops->get_dp(vpdev);
+ init_waitqueue_head(&vdev->waitq);
+ INIT_LIST_HEAD(&vdev->list);
+ vdev->vpdev = vpdev;
+
+ ret = vop_copy_dp_entry(vdev, argp, &type, &dd);
+ if (ret) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ kfree(vdev);
+ return ret;
+ }
+
+ vop_init_device_ctrl(vdev, dd);
+
+ vdev->dd = dd;
+ vdev->virtio_id = type;
+ vqconfig = mic_vq_config(dd);
+ INIT_WORK(&vdev->virtio_bh_work, vop_bh_handler);
+
+ for (i = 0; i < dd->num_vq; i++) {
+ struct vop_vringh *vvr = &vdev->vvr[i];
+ struct mic_vring *vr = &vdev->vvr[i].vring;
+
+ num = le16_to_cpu(vqconfig[i].num);
+ mutex_init(&vvr->vr_mutex);
+ vr_size = PAGE_ALIGN(vring_size(num, MIC_VIRTIO_RING_ALIGN) +
+ sizeof(struct _mic_vring_info));
+ vr->va = (void *)
+ __get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ get_order(vr_size));
+ if (!vr->va) {
+ ret = -ENOMEM;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ goto err;
+ }
+ vr->len = vr_size;
+ vr->info = vr->va + vring_size(num, MIC_VIRTIO_RING_ALIGN);
+ vr->info->magic = cpu_to_le32(MIC_MAGIC + vdev->virtio_id + i);
+ vr_addr = dma_map_single(&vpdev->dev, vr->va, vr_size,
+ DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(&vpdev->dev, vr_addr)) {
+ free_pages((unsigned long)vr->va, get_order(vr_size));
+ ret = -ENOMEM;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ goto err;
+ }
+ vqconfig[i].address = cpu_to_le64(vr_addr);
+
+ vring_init(&vr->vr, num, vr->va, MIC_VIRTIO_RING_ALIGN);
+ ret = vringh_init_kern(&vvr->vrh,
+ *(u32 *)mic_vq_features(vdev->dd),
+ num, false, vr->vr.desc, vr->vr.avail,
+ vr->vr.used);
+ if (ret) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ goto err;
+ }
+ vringh_kiov_init(&vvr->riov, NULL, 0);
+ vringh_kiov_init(&vvr->wiov, NULL, 0);
+ vvr->head = USHRT_MAX;
+ vvr->vdev = vdev;
+ vvr->vrh.notify = _vop_notify;
+ dev_dbg(&vpdev->dev,
+ "%s %d index %d va %p info %p vr_size 0x%x\n",
+ __func__, __LINE__, i, vr->va, vr->info, vr_size);
+ vvr->buf = (void *)__get_free_pages(GFP_KERNEL,
+ get_order(VOP_INT_DMA_BUF_SIZE));
+ vvr->buf_da = dma_map_single(&vpdev->dev,
+ vvr->buf, VOP_INT_DMA_BUF_SIZE,
+ DMA_BIDIRECTIONAL);
+ }
+
+ snprintf(irqname, sizeof(irqname), "vop%dvirtio%d", vpdev->index,
+ vdev->virtio_id);
+ vdev->virtio_db = vpdev->hw_ops->next_db(vpdev);
+ vdev->virtio_cookie = vpdev->hw_ops->request_irq(vpdev,
+ _vop_virtio_intr_handler, irqname, vdev,
+ vdev->virtio_db);
+ if (IS_ERR(vdev->virtio_cookie)) {
+ ret = PTR_ERR(vdev->virtio_cookie);
+ dev_dbg(&vpdev->dev, "request irq failed\n");
+ goto err;
+ }
+
+ vdev->dc->c2h_vdev_db = vdev->virtio_db;
+
+ /*
+ * Order the type update with previous stores. This write barrier
+ * is paired with the corresponding read barrier before the uncached
+ * system memory read of the type, on the card while scanning the
+ * device page.
+ */
+ smp_wmb();
+ dd->type = type;
+ argp->type = type;
+
+ if (bootparam) {
+ db = bootparam->h2c_config_db;
+ if (db != -1)
+ vpdev->hw_ops->send_intr(vpdev, db);
+ }
+ dev_dbg(&vpdev->dev, "Added virtio id %d db %d\n", dd->type, db);
+ return 0;
+err:
+ vqconfig = mic_vq_config(dd);
+ for (j = 0; j < i; j++) {
+ struct vop_vringh *vvr = &vdev->vvr[j];
+
+ dma_unmap_single(&vpdev->dev, le64_to_cpu(vqconfig[j].address),
+ vvr->vring.len, DMA_BIDIRECTIONAL);
+ free_pages((unsigned long)vvr->vring.va,
+ get_order(vvr->vring.len));
+ }
+ return ret;
+}
+
+static void vop_dev_remove(struct vop_info *pvi, struct mic_device_ctrl *devp,
+ struct vop_device *vpdev)
+{
+ struct mic_bootparam *bootparam = vpdev->hw_ops->get_dp(vpdev);
+ s8 db;
+ int ret, retry;
+ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wake);
+
+ devp->config_change = MIC_VIRTIO_PARAM_DEV_REMOVE;
+ db = bootparam->h2c_config_db;
+ if (db != -1)
+ vpdev->hw_ops->send_intr(vpdev, db);
+ else
+ goto done;
+ for (retry = 15; retry--;) {
+ ret = wait_event_timeout(wake, devp->guest_ack,
+ msecs_to_jiffies(1000));
+ if (ret)
+ break;
+ }
+done:
+ devp->config_change = 0;
+ devp->guest_ack = 0;
+}
+
+static void vop_virtio_del_device(struct vop_vdev *vdev)
+{
+ struct vop_info *vi = vdev->vi;
+ struct vop_device *vpdev = vdev->vpdev;
+ int i;
+ struct mic_vqconfig *vqconfig;
+ struct mic_bootparam *bootparam = vpdev->hw_ops->get_dp(vpdev);
+
+ if (!bootparam)
+ goto skip_hot_remove;
+ vop_dev_remove(vi, vdev->dc, vpdev);
+skip_hot_remove:
+ vpdev->hw_ops->free_irq(vpdev, vdev->virtio_cookie, vdev);
+ flush_work(&vdev->virtio_bh_work);
+ vqconfig = mic_vq_config(vdev->dd);
+ for (i = 0; i < vdev->dd->num_vq; i++) {
+ struct vop_vringh *vvr = &vdev->vvr[i];
+
+ dma_unmap_single(&vpdev->dev,
+ vvr->buf_da, VOP_INT_DMA_BUF_SIZE,
+ DMA_BIDIRECTIONAL);
+ free_pages((unsigned long)vvr->buf,
+ get_order(VOP_INT_DMA_BUF_SIZE));
+ vringh_kiov_cleanup(&vvr->riov);
+ vringh_kiov_cleanup(&vvr->wiov);
+ dma_unmap_single(&vpdev->dev, le64_to_cpu(vqconfig[i].address),
+ vvr->vring.len, DMA_BIDIRECTIONAL);
+ free_pages((unsigned long)vvr->vring.va,
+ get_order(vvr->vring.len));
+ }
+ /*
+ * Order the type update with previous stores. This write barrier
+ * is paired with the corresponding read barrier before the uncached
+ * system memory read of the type, on the card while scanning the
+ * device page.
+ */
+ smp_wmb();
+ vdev->dd->type = -1;
+}
+
+/*
+ * vop_sync_dma - Wrapper for synchronous DMAs.
+ *
+ * @vdev - The VOP virtio device on whose behalf the DMA is performed.
+ * @dst - destination DMA address.
+ * @src - source DMA address.
+ * @len - size of the transfer.
+ *
+ * Return 0 on success.
+ */
+static int vop_sync_dma(struct vop_vdev *vdev, dma_addr_t dst, dma_addr_t src,
+ size_t len)
+{
+ int err = 0;
+ struct dma_device *ddev;
+ struct dma_async_tx_descriptor *tx;
+ struct vop_info *vi = dev_get_drvdata(&vdev->vpdev->dev);
+ struct dma_chan *vop_ch = vi->dma_ch;
+
+ if (!vop_ch) {
+ err = -EBUSY;
+ goto error;
+ }
+ ddev = vop_ch->device;
+ tx = ddev->device_prep_dma_memcpy(vop_ch, dst, src, len,
+ DMA_PREP_FENCE);
+ if (!tx) {
+ err = -ENOMEM;
+ goto error;
+ } else {
+ dma_cookie_t cookie;
+
+ cookie = tx->tx_submit(tx);
+ if (dma_submit_error(cookie)) {
+ err = -ENOMEM;
+ goto error;
+ }
+ dma_async_issue_pending(vop_ch);
+ err = dma_sync_wait(vop_ch, cookie);
+ }
+error:
+ if (err)
+ dev_err(&vi->vpdev->dev, "%s %d err %d\n",
+ __func__, __LINE__, err);
+ return err;
+}
+
+#define VOP_USE_DMA true
+
+/*
+ * Initiates the copies across the PCIe bus from card memory to a user
+ * space buffer. When transfers are done using DMA, source/destination
+ * addresses and transfer length must follow the alignment requirements of
+ * the MIC DMA engine.
+ */
+static int vop_virtio_copy_to_user(struct vop_vdev *vdev, void __user *ubuf,
+ size_t len, u64 daddr, size_t dlen,
+ int vr_idx)
+{
+ struct vop_device *vpdev = vdev->vpdev;
+ void __iomem *dbuf = vpdev->hw_ops->ioremap(vpdev, daddr, len);
+ struct vop_vringh *vvr = &vdev->vvr[vr_idx];
+ struct vop_info *vi = dev_get_drvdata(&vpdev->dev);
+ size_t dma_alignment = 1 << vi->dma_ch->device->copy_align;
+ bool x200 = is_dma_copy_aligned(vi->dma_ch->device, 1, 1, 1);
+ size_t dma_offset, partlen;
+ int err;
+
+ if (!VOP_USE_DMA) {
+ if (copy_to_user(ubuf, (void __force *)dbuf, len)) {
+ err = -EFAULT;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto err;
+ }
+ vdev->in_bytes += len;
+ err = 0;
+ goto err;
+ }
+
+ dma_offset = daddr - round_down(daddr, dma_alignment);
+ daddr -= dma_offset;
+ len += dma_offset;
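+ /*
+ * Worked example (illustrative): with daddr 0x1003 and a 64 byte DMA
+ * alignment, round_down() yields 0x1000, so dma_offset is 3, daddr
+ * becomes 0x1000 and len grows by 3 so the copy still covers the
+ * original range starting from the aligned address.
+ */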
+ /*
+ * X100 uses DMA addresses as seen by the card so adding
+ * the aperture base is not required for DMA. However x200
+ * requires DMA addresses to be an offset into the bar so
+ * add the aperture base for x200.
+ */
+ if (x200)
+ daddr += vpdev->aper->pa;
+ while (len) {
+ partlen = min_t(size_t, len, VOP_INT_DMA_BUF_SIZE);
+ err = vop_sync_dma(vdev, vvr->buf_da, daddr,
+ ALIGN(partlen, dma_alignment));
+ if (err) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto err;
+ }
+ if (copy_to_user(ubuf, vvr->buf + dma_offset,
+ partlen - dma_offset)) {
+ err = -EFAULT;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto err;
+ }
+ daddr += partlen;
+ ubuf += partlen;
+ dbuf += partlen;
+ vdev->in_bytes_dma += partlen;
+ vdev->in_bytes += partlen;
+ len -= partlen;
+ dma_offset = 0;
+ }
+ err = 0;
+err:
+ vpdev->hw_ops->iounmap(vpdev, dbuf);
+ dev_dbg(vop_dev(vdev),
+ "%s: ubuf %p dbuf %p len 0x%lx vr_idx 0x%x\n",
+ __func__, ubuf, dbuf, len, vr_idx);
+ return err;
+}
+
+/*
+ * Initiates copies across the PCIe bus from a user space buffer to card
+ * memory. When transfers are done using DMA, source/destination addresses
+ * and transfer length must follow the alignment requirements of the MIC
+ * DMA engine.
+ */
+static int vop_virtio_copy_from_user(struct vop_vdev *vdev, void __user *ubuf,
+ size_t len, u64 daddr, size_t dlen,
+ int vr_idx)
+{
+ struct vop_device *vpdev = vdev->vpdev;
+ void __iomem *dbuf = vpdev->hw_ops->ioremap(vpdev, daddr, len);
+ struct vop_vringh *vvr = &vdev->vvr[vr_idx];
+ struct vop_info *vi = dev_get_drvdata(&vdev->vpdev->dev);
+ size_t dma_alignment = 1 << vi->dma_ch->device->copy_align;
+ bool x200 = is_dma_copy_aligned(vi->dma_ch->device, 1, 1, 1);
+ size_t partlen;
+ bool dma = VOP_USE_DMA;
+ int err = 0;
+
+ if (daddr & (dma_alignment - 1)) {
+ vdev->tx_dst_unaligned += len;
+ dma = false;
+ } else if (ALIGN(len, dma_alignment) > dlen) {
+ vdev->tx_len_unaligned += len;
+ dma = false;
+ }
+
+ if (!dma)
+ goto memcpy;
+
+ /*
+ * X100 uses DMA addresses as seen by the card so adding
+ * the aperture base is not required for DMA. However x200
+ * requires DMA addresses to be an offset into the bar so
+ * add the aperture base for x200.
+ */
+ if (x200)
+ daddr += vpdev->aper->pa;
+ while (len) {
+ partlen = min_t(size_t, len, VOP_INT_DMA_BUF_SIZE);
+
+ if (copy_from_user(vvr->buf, ubuf, partlen)) {
+ err = -EFAULT;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto err;
+ }
+ err = vop_sync_dma(vdev, daddr, vvr->buf_da,
+ ALIGN(partlen, dma_alignment));
+ if (err) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto err;
+ }
+ daddr += partlen;
+ ubuf += partlen;
+ dbuf += partlen;
+ vdev->out_bytes_dma += partlen;
+ vdev->out_bytes += partlen;
+ len -= partlen;
+ }
+memcpy:
+ /*
+ * We are copying to IO below and should ideally use something
+ * like copy_from_user_toio(..) if it existed.
+ */
+ if (copy_from_user((void __force *)dbuf, ubuf, len)) {
+ err = -EFAULT;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto err;
+ }
+ vdev->out_bytes += len;
+ err = 0;
+err:
+ vpdev->hw_ops->iounmap(vpdev, dbuf);
+ dev_dbg(vop_dev(vdev),
+ "%s: ubuf %p dbuf %p len 0x%lx vr_idx 0x%x\n",
+ __func__, ubuf, dbuf, len, vr_idx);
+ return err;
+}
+
+#define MIC_VRINGH_READ true
+
+/* Determine the total number of bytes consumed in a VRINGH KIOV */
+static inline u32 vop_vringh_iov_consumed(struct vringh_kiov *iov)
+{
+ int i;
+ u32 total = iov->consumed;
+
+ for (i = 0; i < iov->i; i++)
+ total += iov->iov[i].iov_len;
+ return total;
+}
+
+/*
+ * Traverse the VRINGH KIOV and issue the APIs to trigger the copies.
+ * This API is heavily based on the vringh_iov_xfer(..) implementation
+ * in vringh.c. The reason we cannot reuse vringh_iov_pull_kern(..)
+ * and vringh_iov_push_kern(..) directly is that there is no
+ * way to override the VRINGH xfer(..) routines as of v3.10.
+ */
+static int vop_vringh_copy(struct vop_vdev *vdev, struct vringh_kiov *iov,
+ void __user *ubuf, size_t len, bool read, int vr_idx,
+ size_t *out_len)
+{
+ int ret = 0;
+ size_t partlen, tot_len = 0;
+
+ while (len && iov->i < iov->used) {
+ struct kvec *kiov = &iov->iov[iov->i];
+
+ partlen = min(kiov->iov_len, len);
+ if (read)
+ ret = vop_virtio_copy_to_user(vdev, ubuf, partlen,
+ (u64)kiov->iov_base,
+ kiov->iov_len,
+ vr_idx);
+ else
+ ret = vop_virtio_copy_from_user(vdev, ubuf, partlen,
+ (u64)kiov->iov_base,
+ kiov->iov_len,
+ vr_idx);
+ if (ret) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ break;
+ }
+ len -= partlen;
+ ubuf += partlen;
+ tot_len += partlen;
+ iov->consumed += partlen;
+ kiov->iov_len -= partlen;
+ kiov->iov_base += partlen;
+ if (!kiov->iov_len) {
+ /* Fix up old iov element then increment. */
+ kiov->iov_len = iov->consumed;
+ kiov->iov_base -= iov->consumed;
+
+ iov->consumed = 0;
+ iov->i++;
+ }
+ }
+ *out_len = tot_len;
+ return ret;
+}
+
+/*
+ * Use the standard VRINGH infrastructure in the kernel to fetch new
+ * descriptors, initiate the copies and update the used ring.
+ */
+static int _vop_virtio_copy(struct vop_vdev *vdev, struct mic_copy_desc *copy)
+{
+ int ret = 0;
+ u32 iovcnt = copy->iovcnt;
+ struct iovec iov;
+ struct iovec __user *u_iov = copy->iov;
+ void __user *ubuf = NULL;
+ struct vop_vringh *vvr = &vdev->vvr[copy->vr_idx];
+ struct vringh_kiov *riov = &vvr->riov;
+ struct vringh_kiov *wiov = &vvr->wiov;
+ struct vringh *vrh = &vvr->vrh;
+ u16 *head = &vvr->head;
+ struct mic_vring *vr = &vvr->vring;
+ size_t len = 0, out_len;
+
+ copy->out_len = 0;
+ /* Fetch a new IOVEC if all previous elements have been processed */
+ if (riov->i == riov->used && wiov->i == wiov->used) {
+ ret = vringh_getdesc_kern(vrh, riov, wiov,
+ head, GFP_KERNEL);
+ /* Check if there are available descriptors */
+ if (ret <= 0)
+ return ret;
+ }
+ while (iovcnt) {
+ if (!len) {
+ /* Copy over a new iovec from user space. */
+ ret = copy_from_user(&iov, u_iov, sizeof(*u_iov));
+ if (ret) {
+ ret = -EINVAL;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ break;
+ }
+ len = iov.iov_len;
+ ubuf = iov.iov_base;
+ }
+ /* Issue all the read descriptors first */
+ ret = vop_vringh_copy(vdev, riov, ubuf, len,
+ MIC_VRINGH_READ, copy->vr_idx, &out_len);
+ if (ret) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ break;
+ }
+ len -= out_len;
+ ubuf += out_len;
+ copy->out_len += out_len;
+ /* Issue the write descriptors next */
+ ret = vop_vringh_copy(vdev, wiov, ubuf, len,
+ !MIC_VRINGH_READ, copy->vr_idx, &out_len);
+ if (ret) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, ret);
+ break;
+ }
+ len -= out_len;
+ ubuf += out_len;
+ copy->out_len += out_len;
+ if (!len) {
+ /* One user space iovec is now completed */
+ iovcnt--;
+ u_iov++;
+ }
+ /* Exit loop if all elements in KIOVs have been processed. */
+ if (riov->i == riov->used && wiov->i == wiov->used)
+ break;
+ }
+ /*
+ * Update the used ring if a descriptor was available and some data was
+ * copied in/out and the user asked for a used ring update.
+ */
+ if (*head != USHRT_MAX && copy->out_len && copy->update_used) {
+ u32 total = 0;
+
+ /* Determine the total data consumed */
+ total += vop_vringh_iov_consumed(riov);
+ total += vop_vringh_iov_consumed(wiov);
+ vringh_complete_kern(vrh, *head, total);
+ *head = USHRT_MAX;
+ if (vringh_need_notify_kern(vrh) > 0)
+ vringh_notify(vrh);
+ vringh_kiov_cleanup(riov);
+ vringh_kiov_cleanup(wiov);
+ /* Update avail idx for user space */
+ vr->info->avail_idx = vrh->last_avail_idx;
+ }
+ return ret;
+}
+
+static inline int vop_verify_copy_args(struct vop_vdev *vdev,
+ struct mic_copy_desc *copy)
+{
+ if (!vdev || copy->vr_idx >= vdev->dd->num_vq)
+ return -EINVAL;
+ return 0;
+}
+
+/* Copy a specified number of virtio descriptors in a chain */
+static int vop_virtio_copy_desc(struct vop_vdev *vdev,
+ struct mic_copy_desc *copy)
+{
+ int err;
+ struct vop_vringh *vvr;
+
+ err = vop_verify_copy_args(vdev, copy);
+ if (err)
+ return err;
+
+ vvr = &vdev->vvr[copy->vr_idx];
+ mutex_lock(&vvr->vr_mutex);
+ if (!vop_vdevup(vdev)) {
+ err = -ENODEV;
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ goto err;
+ }
+ err = _vop_virtio_copy(vdev, copy);
+ if (err) {
+ dev_err(vop_dev(vdev), "%s %d err %d\n",
+ __func__, __LINE__, err);
+ }
+err:
+ mutex_unlock(&vvr->vr_mutex);
+ return err;
+}
+
+static int vop_open(struct inode *inode, struct file *f)
+{
+ struct vop_vdev *vdev;
+ struct vop_info *vi = container_of(f->private_data,
+ struct vop_info, miscdev);
+
+ vdev = kzalloc(sizeof(*vdev), GFP_KERNEL);
+ if (!vdev)
+ return -ENOMEM;
+ vdev->vi = vi;
+ mutex_init(&vdev->vdev_mutex);
+ f->private_data = vdev;
+ init_completion(&vdev->destroy);
+ complete(&vdev->destroy);
+ return 0;
+}
+
+static int vop_release(struct inode *inode, struct file *f)
+{
+ struct vop_vdev *vdev = f->private_data, *vdev_tmp;
+ struct vop_info *vi = vdev->vi;
+ struct list_head *pos, *tmp;
+ bool found = false;
+
+ mutex_lock(&vdev->vdev_mutex);
+ if (vdev->deleted)
+ goto unlock;
+ mutex_lock(&vi->vop_mutex);
+ list_for_each_safe(pos, tmp, &vi->vdev_list) {
+ vdev_tmp = list_entry(pos, struct vop_vdev, list);
+ if (vdev == vdev_tmp) {
+ vop_virtio_del_device(vdev);
+ list_del(pos);
+ found = true;
+ break;
+ }
+ }
+ mutex_unlock(&vi->vop_mutex);
+unlock:
+ mutex_unlock(&vdev->vdev_mutex);
+ if (!found)
+ wait_for_completion(&vdev->destroy);
+ f->private_data = NULL;
+ kfree(vdev);
+ return 0;
+}
+
+static long vop_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
+{
+ struct vop_vdev *vdev = f->private_data;
+ struct vop_info *vi = vdev->vi;
+ void __user *argp = (void __user *)arg;
+ int ret;
+
+ switch (cmd) {
+ case MIC_VIRTIO_ADD_DEVICE:
+ {
+ struct mic_device_desc dd, *dd_config;
+
+ if (copy_from_user(&dd, argp, sizeof(dd)))
+ return -EFAULT;
+
+ if (mic_aligned_desc_size(&dd) > MIC_MAX_DESC_BLK_SIZE ||
+ dd.num_vq > MIC_MAX_VRINGS)
+ return -EINVAL;
+
+ dd_config = kzalloc(mic_desc_size(&dd), GFP_KERNEL);
+ if (!dd_config)
+ return -ENOMEM;
+ if (copy_from_user(dd_config, argp, mic_desc_size(&dd))) {
+ ret = -EFAULT;
+ goto free_ret;
+ }
+ mutex_lock(&vdev->vdev_mutex);
+ mutex_lock(&vi->vop_mutex);
+ ret = vop_virtio_add_device(vdev, dd_config);
+ if (ret)
+ goto unlock_ret;
+ list_add_tail(&vdev->list, &vi->vdev_list);
+unlock_ret:
+ mutex_unlock(&vi->vop_mutex);
+ mutex_unlock(&vdev->vdev_mutex);
+free_ret:
+ kfree(dd_config);
+ return ret;
+ }
+ case MIC_VIRTIO_COPY_DESC:
+ {
+ struct mic_copy_desc copy;
+
+ mutex_lock(&vdev->vdev_mutex);
+ ret = vop_vdev_inited(vdev);
+ if (ret)
+ goto _unlock_ret;
+
+ if (copy_from_user(&copy, argp, sizeof(copy))) {
+ ret = -EFAULT;
+ goto _unlock_ret;
+ }
+
+ ret = vop_virtio_copy_desc(vdev, &copy);
+ if (ret < 0)
+ goto _unlock_ret;
+ if (copy_to_user(
+ &((struct mic_copy_desc __user *)argp)->out_len,
+ &copy.out_len, sizeof(copy.out_len)))
+ ret = -EFAULT;
+_unlock_ret:
+ mutex_unlock(&vdev->vdev_mutex);
+ return ret;
+ }
+ case MIC_VIRTIO_CONFIG_CHANGE:
+ {
+ void *buf;
+
+ mutex_lock(&vdev->vdev_mutex);
+ ret = vop_vdev_inited(vdev);
+ if (ret)
+ goto __unlock_ret;
+ buf = kzalloc(vdev->dd->config_len, GFP_KERNEL);
+ if (!buf) {
+ ret = -ENOMEM;
+ goto __unlock_ret;
+ }
+ if (copy_from_user(buf, argp, vdev->dd->config_len)) {
+ ret = -EFAULT;
+ goto done;
+ }
+ ret = vop_virtio_config_change(vdev, buf);
+done:
+ kfree(buf);
+__unlock_ret:
+ mutex_unlock(&vdev->vdev_mutex);
+ return ret;
+ }
+ default:
+ return -ENOIOCTLCMD;
+ }
+ return 0;
+}
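+
+/*
+ * A hedged sketch of how a user space daemon might drive the ioctls
+ * above (illustrative only; it assumes "fd" is an open handle on the
+ * vop_virtio%d misc device, "desc" points to a populated struct
+ * mic_device_desc, "copy" is a struct mic_copy_desc and "config_buf"
+ * holds config_len bytes):
+ *
+ * ioctl(fd, MIC_VIRTIO_ADD_DEVICE, desc);
+ * ioctl(fd, MIC_VIRTIO_COPY_DESC, &copy);
+ * ioctl(fd, MIC_VIRTIO_CONFIG_CHANGE, config_buf);
+ */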
+
+/*
+ * We return POLLIN | POLLOUT from poll when new buffers are enqueued, and
+ * not when previously enqueued buffers may be available. This means that
+ * in the card->host (TX) path, when userspace is unblocked by poll it
+ * must drain all available descriptors or it can stall.
+ */
+static unsigned int vop_poll(struct file *f, poll_table *wait)
+{
+ struct vop_vdev *vdev = f->private_data;
+ int mask = 0;
+
+ mutex_lock(&vdev->vdev_mutex);
+ if (vop_vdev_inited(vdev)) {
+ mask = POLLERR;
+ goto done;
+ }
+ poll_wait(f, &vdev->waitq, wait);
+ if (vop_vdev_inited(vdev)) {
+ mask = POLLERR;
+ } else if (vdev->poll_wake) {
+ vdev->poll_wake = 0;
+ mask = POLLIN | POLLOUT;
+ }
+done:
+ mutex_unlock(&vdev->vdev_mutex);
+ return mask;
+}
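+
+/*
+ * Given the semantics above, user space is expected to drain all
+ * available descriptors once poll() unblocks (a sketch, assuming "pfd"
+ * wraps the misc device fd and "copy" is a struct mic_copy_desc):
+ *
+ * while (poll(&pfd, 1, -1) > 0 && (pfd.revents & (POLLIN | POLLOUT)))
+ * do
+ * ioctl(fd, MIC_VIRTIO_COPY_DESC, &copy);
+ * while (copy.out_len);
+ */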
+
+static inline int
+vop_query_offset(struct vop_vdev *vdev, unsigned long offset,
+ unsigned long *size, unsigned long *pa)
+{
+ struct vop_device *vpdev = vdev->vpdev;
+ unsigned long start = MIC_DP_SIZE;
+ int i;
+
+ /*
+ * MMAP interface is as follows:
+ * offset region
+ * 0x0 virtio device_page
+ * 0x1000 first vring
+ * 0x1000 + size of 1st vring second vring
+ * ....
+ */
+ if (!offset) {
+ *pa = virt_to_phys(vpdev->hw_ops->get_dp(vpdev));
+ *size = MIC_DP_SIZE;
+ return 0;
+ }
+
+ for (i = 0; i < vdev->dd->num_vq; i++) {
+ struct vop_vringh *vvr = &vdev->vvr[i];
+
+ if (offset == start) {
+ *pa = virt_to_phys(vvr->vring.va);
+ *size = vvr->vring.len;
+ return 0;
+ }
+ start += vvr->vring.len;
+ }
+ return -1;
+}
+
+/*
+ * Maps the device page and virtio rings to user space for read-only access.
+ */
+static int vop_mmap(struct file *f, struct vm_area_struct *vma)
+{
+ struct vop_vdev *vdev = f->private_data;
+ unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
+ unsigned long pa, size = vma->vm_end - vma->vm_start, size_rem = size;
+ int i, err;
+
+ err = vop_vdev_inited(vdev);
+ if (err)
+ goto ret;
+ if (vma->vm_flags & VM_WRITE) {
+ err = -EACCES;
+ goto ret;
+ }
+ while (size_rem) {
+ i = vop_query_offset(vdev, offset, &size, &pa);
+ if (i < 0) {
+ err = -EINVAL;
+ goto ret;
+ }
+ err = remap_pfn_range(vma, vma->vm_start + offset,
+ pa >> PAGE_SHIFT, size,
+ vma->vm_page_prot);
+ if (err)
+ goto ret;
+ size_rem -= size;
+ offset += size;
+ }
+ret:
+ return err;
+}
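+
+/*
+ * Per the offset layout documented in vop_query_offset(), user space
+ * can map the device page at offset 0 and each vring at the offsets
+ * that follow (a sketch, assuming "fd" is the misc device handle):
+ *
+ * dp = mmap(NULL, MIC_DP_SIZE, PROT_READ, MAP_SHARED, fd, 0);
+ */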
+
+static const struct file_operations vop_fops = {
+ .open = vop_open,
+ .release = vop_release,
+ .unlocked_ioctl = vop_ioctl,
+ .poll = vop_poll,
+ .mmap = vop_mmap,
+ .owner = THIS_MODULE,
+};
+
+int vop_host_init(struct vop_info *vi)
+{
+ int rc;
+ struct miscdevice *mdev;
+ struct vop_device *vpdev = vi->vpdev;
+
+ INIT_LIST_HEAD(&vi->vdev_list);
+ vi->dma_ch = vpdev->dma_ch;
+ mdev = &vi->miscdev;
+ mdev->minor = MISC_DYNAMIC_MINOR;
+ snprintf(vi->name, sizeof(vi->name), "vop_virtio%d", vpdev->index);
+ mdev->name = vi->name;
+ mdev->fops = &vop_fops;
+ mdev->parent = &vpdev->dev;
+
+ rc = misc_register(mdev);
+ if (rc)
+ dev_err(&vpdev->dev, "%s failed rc %d\n", __func__, rc);
+ return rc;
+}
+
+void vop_host_uninit(struct vop_info *vi)
+{
+ struct list_head *pos, *tmp;
+ struct vop_vdev *vdev;
+
+ mutex_lock(&vi->vop_mutex);
+ vop_virtio_reset_devices(vi);
+ list_for_each_safe(pos, tmp, &vi->vdev_list) {
+ vdev = list_entry(pos, struct vop_vdev, list);
+ list_del(pos);
+ reinit_completion(&vdev->destroy);
+ mutex_unlock(&vi->vop_mutex);
+ mutex_lock(&vdev->vdev_mutex);
+ vop_virtio_del_device(vdev);
+ vdev->deleted = true;
+ mutex_unlock(&vdev->vdev_mutex);
+ complete(&vdev->destroy);
+ mutex_lock(&vi->vop_mutex);
+ }
+ mutex_unlock(&vi->vop_mutex);
+ misc_deregister(&vi->miscdev);
+}
diff --git a/drivers/misc/pch_phub.c b/drivers/misc/pch_phub.c
index 9a17a9ba..15bb0c8 100644
--- a/drivers/misc/pch_phub.c
+++ b/drivers/misc/pch_phub.c
@@ -503,8 +503,7 @@
int err;
ssize_t rom_size;
- struct pch_phub_reg *chip =
- dev_get_drvdata(container_of(kobj, struct device, kobj));
+ struct pch_phub_reg *chip = dev_get_drvdata(kobj_to_dev(kobj));
ret = mutex_lock_interruptible(&pch_phub_mutex);
if (ret) {
@@ -567,8 +566,7 @@
unsigned int addr_offset;
int ret;
ssize_t rom_size;
- struct pch_phub_reg *chip =
- dev_get_drvdata(container_of(kobj, struct device, kobj));
+ struct pch_phub_reg *chip = dev_get_drvdata(kobj_to_dev(kobj));
ret = mutex_lock_interruptible(&pch_phub_mutex);
if (ret)
diff --git a/drivers/misc/ti-st/st_core.c b/drivers/misc/ti-st/st_core.c
index 6e3af8b..dcdbd58 100644
--- a/drivers/misc/ti-st/st_core.c
+++ b/drivers/misc/ti-st/st_core.c
@@ -632,7 +632,6 @@
spin_unlock_irqrestore(&st_gdata->lock, flags);
return err;
}
- pr_debug("done %s(%d) ", __func__, new_proto->chnl_id);
}
EXPORT_SYMBOL_GPL(st_register);
diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
index b823f9a..896be15 100644
--- a/drivers/misc/vmw_vmci/vmci_driver.c
+++ b/drivers/misc/vmw_vmci/vmci_driver.c
@@ -113,5 +113,5 @@
MODULE_AUTHOR("VMware, Inc.");
MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
-MODULE_VERSION("1.1.3.0-k");
+MODULE_VERSION("1.1.4.0-k");
MODULE_LICENSE("GPL v2");
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 5914263..fe207e5 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -47,13 +47,10 @@
#include "queue.h"
MODULE_ALIAS("mmc:block");
-
-#ifdef KERNEL
#ifdef MODULE_PARAM_PREFIX
#undef MODULE_PARAM_PREFIX
#endif
#define MODULE_PARAM_PREFIX "mmcblk."
-#endif
#define INAND_CMD38_ARG_EXT_CSD 113
#define INAND_CMD38_ARG_ERASE 0x00
@@ -655,8 +652,10 @@
}
md = mmc_blk_get(bdev->bd_disk);
- if (!md)
+ if (!md) {
+ err = -EINVAL;
goto cmd_err;
+ }
card = md->queue.card;
if (IS_ERR(card)) {
diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
index 1c1b45e..3446097 100644
--- a/drivers/mmc/host/mmc_spi.c
+++ b/drivers/mmc/host/mmc_spi.c
@@ -925,6 +925,10 @@
dma_addr = dma_map_page(dma_dev, sg_page(sg), 0,
PAGE_SIZE, dir);
+ if (dma_mapping_error(dma_dev, dma_addr)) {
+ data->error = -EFAULT;
+ break;
+ }
if (direction == DMA_TO_DEVICE)
t->tx_dma = dma_addr + sg->offset;
else
@@ -1393,10 +1397,12 @@
host->dma_dev = dev;
host->ones_dma = dma_map_single(dev, ones,
MMC_SPI_BLOCKSIZE, DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, host->ones_dma))
+ goto fail_ones_dma;
host->data_dma = dma_map_single(dev, host->data,
sizeof(*host->data), DMA_BIDIRECTIONAL);
-
- /* REVISIT in theory those map operations can fail... */
+ if (dma_mapping_error(dev, host->data_dma))
+ goto fail_data_dma;
dma_sync_single_for_cpu(host->dma_dev,
host->data_dma, sizeof(*host->data),
@@ -1462,6 +1468,11 @@
if (host->dma_dev)
dma_unmap_single(host->dma_dev, host->data_dma,
sizeof(*host->data), DMA_BIDIRECTIONAL);
+fail_data_dma:
+ if (host->dma_dev)
+ dma_unmap_single(host->dma_dev, host->ones_dma,
+ MMC_SPI_BLOCKSIZE, DMA_TO_DEVICE);
+fail_ones_dma:
kfree(host->data);
fail_nobuf1:
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
index b6639ea..f6e4d97 100644
--- a/drivers/mmc/host/omap_hsmmc.c
+++ b/drivers/mmc/host/omap_hsmmc.c
@@ -2232,6 +2232,7 @@
dma_release_channel(host->tx_chan);
if (host->rx_chan)
dma_release_channel(host->rx_chan);
+ pm_runtime_dont_use_autosuspend(host->dev);
pm_runtime_put_sync(host->dev);
pm_runtime_disable(host->dev);
if (host->dbclk)
@@ -2253,6 +2254,7 @@
dma_release_channel(host->tx_chan);
dma_release_channel(host->rx_chan);
+ pm_runtime_dont_use_autosuspend(host->dev);
pm_runtime_put_sync(host->dev);
pm_runtime_disable(host->dev);
device_init_wakeup(&pdev->dev, false);
diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
index ce08896..da82477 100644
--- a/drivers/mmc/host/pxamci.c
+++ b/drivers/mmc/host/pxamci.c
@@ -86,7 +86,7 @@
static inline void pxamci_init_ocr(struct pxamci_host *host)
{
#ifdef CONFIG_REGULATOR
- host->vcc = regulator_get_optional(mmc_dev(host->mmc), "vmmc");
+ host->vcc = devm_regulator_get_optional(mmc_dev(host->mmc), "vmmc");
if (IS_ERR(host->vcc))
host->vcc = NULL;
@@ -654,12 +654,8 @@
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
irq = platform_get_irq(pdev, 0);
- if (!r || irq < 0)
- return -ENXIO;
-
- r = request_mem_region(r->start, SZ_4K, DRIVER_NAME);
- if (!r)
- return -EBUSY;
+ if (irq < 0)
+ return irq;
mmc = mmc_alloc_host(sizeof(struct pxamci_host), &pdev->dev);
if (!mmc) {
@@ -695,7 +691,7 @@
host->pdata = pdev->dev.platform_data;
host->clkrt = CLKRT_OFF;
- host->clk = clk_get(&pdev->dev, NULL);
+ host->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(host->clk)) {
ret = PTR_ERR(host->clk);
host->clk = NULL;
@@ -727,9 +723,9 @@
host->irq = irq;
host->imask = MMC_I_MASK_ALL;
- host->base = ioremap(r->start, SZ_4K);
- if (!host->base) {
- ret = -ENOMEM;
+ host->base = devm_ioremap_resource(&pdev->dev, r);
+ if (IS_ERR(host->base)) {
+ ret = PTR_ERR(host->base);
goto out;
}
@@ -742,7 +738,8 @@
writel(64, host->base + MMC_RESTO);
writel(host->imask, host->base + MMC_I_MASK);
- ret = request_irq(host->irq, pxamci_irq, 0, DRIVER_NAME, host);
+ ret = devm_request_irq(&pdev->dev, host->irq, pxamci_irq, 0,
+ DRIVER_NAME, host);
if (ret)
goto out;
@@ -804,7 +801,7 @@
dev_err(&pdev->dev, "Failed requesting gpio_ro %d\n", gpio_ro);
goto out;
} else {
- mmc->caps |= host->pdata->gpio_card_ro_invert ?
+ mmc->caps2 |= host->pdata->gpio_card_ro_invert ?
0 : MMC_CAP2_RO_ACTIVE_HIGH;
}
@@ -833,14 +830,9 @@
dma_release_channel(host->dma_chan_rx);
if (host->dma_chan_tx)
dma_release_channel(host->dma_chan_tx);
- if (host->base)
- iounmap(host->base);
- if (host->clk)
- clk_put(host->clk);
}
if (mmc)
mmc_free_host(mmc);
- release_resource(r);
return ret;
}
@@ -859,9 +851,6 @@
gpio_ro = host->pdata->gpio_card_ro;
gpio_power = host->pdata->gpio_power;
}
- if (host->vcc)
- regulator_put(host->vcc);
-
if (host->pdata && host->pdata->exit)
host->pdata->exit(&pdev->dev, mmc);
@@ -870,16 +859,10 @@
END_CMD_RES|PRG_DONE|DATA_TRAN_DONE,
host->base + MMC_I_MASK);
- free_irq(host->irq, host);
dmaengine_terminate_all(host->dma_chan_rx);
dmaengine_terminate_all(host->dma_chan_tx);
dma_release_channel(host->dma_chan_rx);
dma_release_channel(host->dma_chan_tx);
- iounmap(host->base);
-
- clk_put(host->clk);
-
- release_resource(host->res);
mmc_free_host(mmc);
}
diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
index f6047fc..a5cda92 100644
--- a/drivers/mmc/host/sdhci-acpi.c
+++ b/drivers/mmc/host/sdhci-acpi.c
@@ -146,6 +146,33 @@
.ops = &sdhci_acpi_ops_int,
};
+static int bxt_get_cd(struct mmc_host *mmc)
+{
+ int gpio_cd = mmc_gpio_get_cd(mmc);
+ struct sdhci_host *host = mmc_priv(mmc);
+ unsigned long flags;
+ int ret = 0;
+
+ if (!gpio_cd)
+ return 0;
+
+ pm_runtime_get_sync(mmc->parent);
+
+ spin_lock_irqsave(&host->lock, flags);
+
+ if (host->flags & SDHCI_DEVICE_DEAD)
+ goto out;
+
+ ret = !!(sdhci_readl(host, SDHCI_PRESENT_STATE) & SDHCI_CARD_PRESENT);
+out:
+ spin_unlock_irqrestore(&host->lock, flags);
+
+ pm_runtime_mark_last_busy(mmc->parent);
+ pm_runtime_put_autosuspend(mmc->parent);
+
+ return ret;
+}
+
static int sdhci_acpi_emmc_probe_slot(struct platform_device *pdev,
const char *hid, const char *uid)
{
@@ -196,6 +223,9 @@
/* Platform specific code during sd probe slot goes here */
+ if (hid && !strcmp(hid, "80865ACA"))
+ host->mmc_host_ops.get_cd = bxt_get_cd;
+
return 0;
}
diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
index 7e7d8f0..9cb86fb 100644
--- a/drivers/mmc/host/sdhci-of-at91.c
+++ b/drivers/mmc/host/sdhci-of-at91.c
@@ -217,6 +217,7 @@
pm_runtime_disable:
pm_runtime_disable(&pdev->dev);
pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
clocks_disable_unprepare:
clk_disable_unprepare(priv->gck);
clk_disable_unprepare(priv->mainck);
diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index cc851b0..df3b8ec 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -330,6 +330,33 @@
sdhci_pci_spt_drive_strength = 0x10 | ((val >> 12) & 0xf);
}
+static int bxt_get_cd(struct mmc_host *mmc)
+{
+ int gpio_cd = mmc_gpio_get_cd(mmc);
+ struct sdhci_host *host = mmc_priv(mmc);
+ unsigned long flags;
+ int ret = 0;
+
+ if (!gpio_cd)
+ return 0;
+
+ pm_runtime_get_sync(mmc->parent);
+
+ spin_lock_irqsave(&host->lock, flags);
+
+ if (host->flags & SDHCI_DEVICE_DEAD)
+ goto out;
+
+ ret = !!(sdhci_readl(host, SDHCI_PRESENT_STATE) & SDHCI_CARD_PRESENT);
+out:
+ spin_unlock_irqrestore(&host->lock, flags);
+
+ pm_runtime_mark_last_busy(mmc->parent);
+ pm_runtime_put_autosuspend(mmc->parent);
+
+ return ret;
+}
+
static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
{
slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
@@ -362,6 +389,10 @@
slot->cd_con_id = NULL;
slot->cd_idx = 0;
slot->cd_override_level = true;
+ if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BXT_SD ||
+ slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_APL_SD)
+ slot->host->mmc_host_ops.get_cd = bxt_get_cd;
+
return 0;
}
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index d622435..add9fdf 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -1360,7 +1360,7 @@
sdhci_runtime_pm_get(host);
/* Firstly check card presence */
- present = sdhci_do_get_cd(host);
+ present = mmc->ops->get_cd(mmc);
spin_lock_irqsave(&host->lock, flags);
@@ -2849,6 +2849,8 @@
host = mmc_priv(mmc);
host->mmc = mmc;
+ host->mmc_host_ops = sdhci_ops;
+ mmc->ops = &host->mmc_host_ops;
return host;
}
@@ -3037,7 +3039,6 @@
/*
* Set host parameters.
*/
- mmc->ops = &sdhci_ops;
max_clk = host->max_clk;
if (host->ops->get_min_clock)
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 7654ae5..0115e99 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -430,6 +430,7 @@
/* Internal data */
struct mmc_host *mmc; /* MMC structure */
+ struct mmc_host_ops mmc_host_ops; /* MMC host ops */
u64 dma_mask; /* custom DMA mask */
#if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE)
diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
index 1ca8a13..6234eab3 100644
--- a/drivers/mmc/host/sh_mmcif.c
+++ b/drivers/mmc/host/sh_mmcif.c
@@ -445,7 +445,7 @@
pdata->slave_id_rx);
} else {
host->chan_tx = dma_request_slave_channel(dev, "tx");
- host->chan_tx = dma_request_slave_channel(dev, "rx");
+ host->chan_rx = dma_request_slave_channel(dev, "rx");
}
dev_dbg(dev, "%s: got channel TX %p RX %p\n", __func__, host->chan_tx,
host->chan_rx);
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 56b5605..b7f1a99 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -214,6 +214,8 @@
static struct rtnl_link_stats64 *bond_get_stats(struct net_device *bond_dev,
struct rtnl_link_stats64 *stats);
static void bond_slave_arr_handler(struct work_struct *work);
+static bool bond_time_in_interval(struct bonding *bond, unsigned long last_act,
+ int mod);
/*---------------------------- General routines -----------------------------*/
@@ -2127,6 +2129,7 @@
continue;
case BOND_LINK_UP:
+ bond_update_speed_duplex(slave);
bond_set_slave_link_state(slave, BOND_LINK_UP,
BOND_SLAVE_NOTIFY_NOW);
slave->last_link_up = jiffies;
@@ -2459,7 +2462,7 @@
struct slave *slave)
{
struct arphdr *arp = (struct arphdr *)skb->data;
- struct slave *curr_active_slave;
+ struct slave *curr_active_slave, *curr_arp_slave;
unsigned char *arp_ptr;
__be32 sip, tip;
int alen, is_arp = skb->protocol == __cpu_to_be16(ETH_P_ARP);
@@ -2506,26 +2509,41 @@
&sip, &tip);
curr_active_slave = rcu_dereference(bond->curr_active_slave);
+ curr_arp_slave = rcu_dereference(bond->current_arp_slave);
- /* Backup slaves won't see the ARP reply, but do come through
- * here for each ARP probe (so we swap the sip/tip to validate
- * the probe). In a "redundant switch, common router" type of
- * configuration, the ARP probe will (hopefully) travel from
- * the active, through one switch, the router, then the other
- * switch before reaching the backup.
+ /* We 'trust' the received ARP enough to validate it if:
*
- * We 'trust' the arp requests if there is an active slave and
- * it received valid arp reply(s) after it became active. This
- * is done to avoid endless looping when we can't reach the
+ * (a) the slave receiving the ARP is active (which includes the
+ * current ARP slave, if any), or
+ *
+ * (b) the receiving slave isn't active, but there is a currently
+ * active slave and it received valid arp reply(s) after it became
+ * the currently active slave, or
+ *
+ * (c) there is an ARP slave that sent an ARP during the prior ARP
+ * interval, and we receive an ARP reply on any slave. We accept
+ * these because switch FDB update delays may deliver the ARP
+ * reply to a slave other than the sender of the ARP request.
+ *
+ * Note: for (b), backup slaves are receiving the broadcast ARP
+ * request, not a reply. This request passes from the sending
+ * slave through the L2 switch(es) to the receiving slave. Since
+ * this is checking the request, sip/tip are swapped for
+ * validation.
+ *
+ * This is done to avoid endless looping when we can't reach the
* arp_ip_target and fool ourselves with our own arp requests.
*/
-
if (bond_is_active_slave(slave))
bond_validate_arp(bond, slave, sip, tip);
else if (curr_active_slave &&
time_after(slave_last_rx(bond, curr_active_slave),
curr_active_slave->last_link_up))
bond_validate_arp(bond, slave, tip, sip);
+ else if (curr_arp_slave && (arp->ar_op == htons(ARPOP_REPLY)) &&
+ bond_time_in_interval(bond,
+ dev_trans_start(curr_arp_slave->dev), 1))
+ bond_validate_arp(bond, slave, sip, tip);
out_unlock:
if (arp != (struct arphdr *)skb->data)
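The rewritten comment enumerates three independent reasons to trust a received ARP, and the else-if chain above implements them in that order. A compact restatement of the decision follows; the predicate names are invented for illustration, not the driver's:

	#include <stdbool.h>

	/* Invented predicates standing in for the bonding state above. */
	bool slave_is_active(void);
	bool active_slave_saw_reply_since_up(void);
	bool cur_arp_slave_probed_this_interval(void);
	bool pkt_is_arp_reply(void);

	/* True when the ARP should be validated; *swap tells the caller
	 * to exchange sip/tip first (the backup-slave case (b), which
	 * inspects the broadcast request rather than a reply). */
	static bool should_validate(bool *swap)
	{
		*swap = false;
		if (slave_is_active())                       /* case (a) */
			return true;
		if (active_slave_saw_reply_since_up()) {     /* case (b) */
			*swap = true;
			return true;
		}
		return pkt_is_arp_reply() &&                 /* case (c) */
		       cur_arp_slave_probed_this_interval();
	}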
diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
index fc5b756..eb7192f 100644
--- a/drivers/net/can/usb/ems_usb.c
+++ b/drivers/net/can/usb/ems_usb.c
@@ -117,6 +117,9 @@
*/
#define EMS_USB_ARM7_CLOCK 8000000
+#define CPC_TX_QUEUE_TRIGGER_LOW 25
+#define CPC_TX_QUEUE_TRIGGER_HIGH 35
+
/*
* CAN-Message representation in a CPC_MSG. Message object type is
* CPC_MSG_TYPE_CAN_FRAME or CPC_MSG_TYPE_RTR_FRAME or
@@ -278,6 +281,11 @@
switch (urb->status) {
case 0:
dev->free_slots = dev->intr_in_buffer[1];
+	if (dev->free_slots > CPC_TX_QUEUE_TRIGGER_HIGH) {
+		if (netif_queue_stopped(netdev))
+			netif_wake_queue(netdev);
+	}
break;
case -ECONNRESET: /* unlink */
@@ -526,8 +534,6 @@
/* Release context */
context->echo_index = MAX_TX_URBS;
- if (netif_queue_stopped(netdev))
- netif_wake_queue(netdev);
}
/*
@@ -587,7 +593,7 @@
int err, i;
dev->intr_in_buffer[0] = 0;
- dev->free_slots = 15; /* initial size */
+ dev->free_slots = 50; /* initial size */
for (i = 0; i < MAX_RX_URBS; i++) {
struct urb *urb = NULL;
@@ -835,7 +841,7 @@
/* Slow down tx path */
if (atomic_read(&dev->active_tx_urbs) >= MAX_TX_URBS ||
- dev->free_slots < 5) {
+ dev->free_slots < CPC_TX_QUEUE_TRIGGER_LOW) {
netif_stop_queue(netdev);
}
}
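Taken together, the ems_usb hunks replace a single hard-coded low-water mark with a stop/wake pair and move the wake decision into the interrupt path, where free_slots is refreshed. Stop below 25, wake only above 35: the gap is hysteresis, so the queue cannot flap while free_slots hovers around a single boundary. A self-contained sketch of the same policy (the plumbing is invented):

	#include <stdbool.h>

	#define TRIGGER_LOW  25
	#define TRIGGER_HIGH 35

	static bool queue_stopped;

	static void on_free_slots_update(int free_slots)
	{
		if (!queue_stopped && free_slots < TRIGGER_LOW)
			queue_stopped = true;   /* controller nearly full */
		else if (queue_stopped && free_slots > TRIGGER_HIGH)
			queue_stopped = false;  /* enough headroom again */
	}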
diff --git a/drivers/net/dsa/mv88e6352.c b/drivers/net/dsa/mv88e6352.c
index cc6c545..a47f52f 100644
--- a/drivers/net/dsa/mv88e6352.c
+++ b/drivers/net/dsa/mv88e6352.c
@@ -25,6 +25,7 @@
static const struct mv88e6xxx_switch_id mv88e6352_table[] = {
{ PORT_SWITCH_ID_6172, "Marvell 88E6172" },
{ PORT_SWITCH_ID_6176, "Marvell 88E6176" },
+ { PORT_SWITCH_ID_6240, "Marvell 88E6240" },
{ PORT_SWITCH_ID_6320, "Marvell 88E6320" },
{ PORT_SWITCH_ID_6320_A1, "Marvell 88E6320 (A1)" },
{ PORT_SWITCH_ID_6320_A2, "Marvell 88e6320 (A2)" },
diff --git a/drivers/net/dsa/mv88e6xxx.c b/drivers/net/dsa/mv88e6xxx.c
index cf34681..512c8c0 100644
--- a/drivers/net/dsa/mv88e6xxx.c
+++ b/drivers/net/dsa/mv88e6xxx.c
@@ -1555,7 +1555,7 @@
if (vlan.vid != vid || !vlan.valid ||
vlan.data[port] == GLOBAL_VTU_DATA_MEMBER_TAG_NON_MEMBER)
- return -ENOENT;
+ return -EOPNOTSUPP;
vlan.data[port] = GLOBAL_VTU_DATA_MEMBER_TAG_NON_MEMBER;
@@ -1582,6 +1582,7 @@
const struct switchdev_obj_port_vlan *vlan)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ const u16 defpvid = 4000 + ds->index * DSA_MAX_PORTS + port;
u16 pvid, vid;
int err = 0;
@@ -1597,7 +1598,8 @@
goto unlock;
if (vid == pvid) {
- err = _mv88e6xxx_port_pvid_set(ds, port, 0);
+ /* restore reserved VLAN ID */
+ err = _mv88e6xxx_port_pvid_set(ds, port, defpvid);
if (err)
goto unlock;
}
@@ -1889,26 +1891,20 @@
int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port, u32 members)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- const u16 pvid = 4000 + ds->index * DSA_MAX_PORTS + port;
- int err;
-
- /* The port joined a bridge, so leave its reserved VLAN */
- mutex_lock(&ps->smi_mutex);
- err = _mv88e6xxx_port_vlan_del(ds, port, pvid);
- if (!err)
- err = _mv88e6xxx_port_pvid_set(ds, port, 0);
- mutex_unlock(&ps->smi_mutex);
- return err;
+ return 0;
}
int mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port, u32 members)
{
+ return 0;
+}
+
+static int mv88e6xxx_setup_port_default_vlan(struct dsa_switch *ds, int port)
+{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
const u16 pvid = 4000 + ds->index * DSA_MAX_PORTS + port;
int err;
- /* The port left the bridge, so join its reserved VLAN */
mutex_lock(&ps->smi_mutex);
err = _mv88e6xxx_port_vlan_add(ds, port, pvid, true);
if (!err)
@@ -2192,8 +2188,7 @@
if (dsa_is_cpu_port(ds, i) || dsa_is_dsa_port(ds, i))
continue;
- /* setup the unbridged state */
- ret = mv88e6xxx_port_bridge_leave(ds, i, 0);
+ ret = mv88e6xxx_setup_port_default_vlan(ds, i);
if (ret < 0)
return ret;
}
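The mv88e6xxx hunks stop tearing the reserved VLAN down on bridge join/leave and instead (re)install it from a common helper. The reserved VID is 4000 + ds->index * DSA_MAX_PORTS + port, which gives every port in a multi-switch tree a unique VLAN. As a worked example, assuming the then-current DSA_MAX_PORTS of 12: switch 0 port 5 gets VID 4005, switch 1 port 3 gets VID 4015, and the scheme stays inside the 12-bit VID space for small trees:

	/* Illustration only; 12 stands in for DSA_MAX_PORTS. */
	static unsigned int default_pvid(int sw_index, int port)
	{
		return 4000 + sw_index * 12 + port;   /* e.g. (1, 3) -> 4015 */
	}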
diff --git a/drivers/net/ethernet/8390/pcnet_cs.c b/drivers/net/ethernet/8390/pcnet_cs.c
index 2777289..2f79d29 100644
--- a/drivers/net/ethernet/8390/pcnet_cs.c
+++ b/drivers/net/ethernet/8390/pcnet_cs.c
@@ -1501,6 +1501,7 @@
PCMCIA_DEVICE_MANF_CARD(0x026f, 0x030a),
PCMCIA_DEVICE_MANF_CARD(0x0274, 0x1103),
PCMCIA_DEVICE_MANF_CARD(0x0274, 0x1121),
+ PCMCIA_DEVICE_MANF_CARD(0xc001, 0x0009),
PCMCIA_DEVICE_PROD_ID12("2408LAN", "Ethernet", 0x352fff7f, 0x00b2e941),
PCMCIA_DEVICE_PROD_ID1234("Socket", "CF 10/100 Ethernet Card", "Revision B", "05/11/06", 0xb38bcc2e, 0x4de88352, 0xeaca6c8d, 0x7e57c22e),
PCMCIA_DEVICE_PROD_ID123("Cardwell", "PCMCIA", "ETHERNET", 0x9533672e, 0x281f1c5d, 0x3ff7175b),
diff --git a/drivers/net/ethernet/agere/et131x.c b/drivers/net/ethernet/agere/et131x.c
index 3f3bcbe..0907ab6 100644
--- a/drivers/net/ethernet/agere/et131x.c
+++ b/drivers/net/ethernet/agere/et131x.c
@@ -2380,7 +2380,7 @@
sizeof(u32),
&tx_ring->tx_status_pa,
GFP_KERNEL);
- if (!tx_ring->tx_status_pa) {
+ if (!tx_ring->tx_status) {
dev_err(&adapter->pdev->dev,
"Cannot alloc memory for Tx status block\n");
return -ENOMEM;
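The et131x fix is a one-liner but worth spelling out: dma_alloc_coherent() reports failure through its return value (the CPU virtual address), while the DMA handle written through the third argument is not guaranteed to be zero on failure, so testing tx_status_pa could miss an allocation failure. The usual pattern, sketched as a fragment rather than the driver's code:

	/* Fragment: test the returned pointer, never the out-parameter. */
	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;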
diff --git a/drivers/net/ethernet/amd/am79c961a.c b/drivers/net/ethernet/amd/am79c961a.c
index 87e727b..fcdf5dd 100644
--- a/drivers/net/ethernet/amd/am79c961a.c
+++ b/drivers/net/ethernet/amd/am79c961a.c
@@ -50,8 +50,8 @@
static void write_rreg(u_long base, u_int reg, u_int val)
{
asm volatile(
- "str%?h %1, [%2] @ NET_RAP\n\t"
- "str%?h %0, [%2, #-4] @ NET_RDP"
+ "strh %1, [%2] @ NET_RAP\n\t"
+ "strh %0, [%2, #-4] @ NET_RDP"
:
: "r" (val), "r" (reg), "r" (ISAIO_BASE + 0x0464));
}
@@ -60,8 +60,8 @@
{
unsigned short v;
asm volatile(
- "str%?h %1, [%2] @ NET_RAP\n\t"
- "ldr%?h %0, [%2, #-4] @ NET_RDP"
+ "strh %1, [%2] @ NET_RAP\n\t"
+ "ldrh %0, [%2, #-4] @ NET_RDP"
: "=r" (v)
: "r" (reg), "r" (ISAIO_BASE + 0x0464));
return v;
@@ -70,8 +70,8 @@
static inline void write_ireg(u_long base, u_int reg, u_int val)
{
asm volatile(
- "str%?h %1, [%2] @ NET_RAP\n\t"
- "str%?h %0, [%2, #8] @ NET_IDP"
+ "strh %1, [%2] @ NET_RAP\n\t"
+ "strh %0, [%2, #8] @ NET_IDP"
:
: "r" (val), "r" (reg), "r" (ISAIO_BASE + 0x0464));
}
@@ -80,8 +80,8 @@
{
u_short v;
asm volatile(
- "str%?h %1, [%2] @ NAT_RAP\n\t"
- "ldr%?h %0, [%2, #8] @ NET_IDP\n\t"
+ "strh %1, [%2] @ NAT_RAP\n\t"
+ "ldrh %0, [%2, #8] @ NET_IDP\n\t"
: "=r" (v)
: "r" (reg), "r" (ISAIO_BASE + 0x0464));
return v;
@@ -96,7 +96,7 @@
offset = ISAMEM_BASE + (offset << 1);
length = (length + 1) & ~1;
if ((int)buf & 2) {
- asm volatile("str%?h %2, [%0], #4"
+ asm volatile("strh %2, [%0], #4"
: "=&r" (offset) : "0" (offset), "r" (buf[0] | (buf[1] << 8)));
buf += 2;
length -= 2;
@@ -104,20 +104,20 @@
while (length > 8) {
register unsigned int tmp asm("r2"), tmp2 asm("r3");
asm volatile(
- "ldm%?ia %0!, {%1, %2}"
+ "ldmia %0!, {%1, %2}"
: "+r" (buf), "=&r" (tmp), "=&r" (tmp2));
length -= 8;
asm volatile(
- "str%?h %1, [%0], #4\n\t"
- "mov%? %1, %1, lsr #16\n\t"
- "str%?h %1, [%0], #4\n\t"
- "str%?h %2, [%0], #4\n\t"
- "mov%? %2, %2, lsr #16\n\t"
- "str%?h %2, [%0], #4"
+ "strh %1, [%0], #4\n\t"
+ "mov %1, %1, lsr #16\n\t"
+ "strh %1, [%0], #4\n\t"
+ "strh %2, [%0], #4\n\t"
+ "mov %2, %2, lsr #16\n\t"
+ "strh %2, [%0], #4"
: "+r" (offset), "=&r" (tmp), "=&r" (tmp2));
}
while (length > 0) {
- asm volatile("str%?h %2, [%0], #4"
+ asm volatile("strh %2, [%0], #4"
: "=&r" (offset) : "0" (offset), "r" (buf[0] | (buf[1] << 8)));
buf += 2;
length -= 2;
@@ -132,23 +132,23 @@
if ((int)buf & 2) {
unsigned int tmp;
asm volatile(
- "ldr%?h %2, [%0], #4\n\t"
- "str%?b %2, [%1], #1\n\t"
- "mov%? %2, %2, lsr #8\n\t"
- "str%?b %2, [%1], #1"
+ "ldrh %2, [%0], #4\n\t"
+ "strb %2, [%1], #1\n\t"
+ "mov %2, %2, lsr #8\n\t"
+ "strb %2, [%1], #1"
: "=&r" (offset), "=&r" (buf), "=r" (tmp): "0" (offset), "1" (buf));
length -= 2;
}
while (length > 8) {
register unsigned int tmp asm("r2"), tmp2 asm("r3"), tmp3;
asm volatile(
- "ldr%?h %2, [%0], #4\n\t"
- "ldr%?h %4, [%0], #4\n\t"
- "ldr%?h %3, [%0], #4\n\t"
- "orr%? %2, %2, %4, lsl #16\n\t"
- "ldr%?h %4, [%0], #4\n\t"
- "orr%? %3, %3, %4, lsl #16\n\t"
- "stm%?ia %1!, {%2, %3}"
+ "ldrh %2, [%0], #4\n\t"
+ "ldrh %4, [%0], #4\n\t"
+ "ldrh %3, [%0], #4\n\t"
+ "orr %2, %2, %4, lsl #16\n\t"
+ "ldrh %4, [%0], #4\n\t"
+ "orr %3, %3, %4, lsl #16\n\t"
+ "stmia %1!, {%2, %3}"
: "=&r" (offset), "=&r" (buf), "=r" (tmp), "=r" (tmp2), "=r" (tmp3)
: "0" (offset), "1" (buf));
length -= 8;
@@ -156,10 +156,10 @@
while (length > 0) {
unsigned int tmp;
asm volatile(
- "ldr%?h %2, [%0], #4\n\t"
- "str%?b %2, [%1], #1\n\t"
- "mov%? %2, %2, lsr #8\n\t"
- "str%?b %2, [%1], #1"
+ "ldrh %2, [%0], #4\n\t"
+ "strb %2, [%1], #1\n\t"
+ "mov %2, %2, lsr #8\n\t"
+ "strb %2, [%1], #1"
: "=&r" (offset), "=&r" (buf), "=r" (tmp) : "0" (offset), "1" (buf));
length -= 2;
}
diff --git a/drivers/net/ethernet/amd/lance.c b/drivers/net/ethernet/amd/lance.c
index 256f590..3a7ebfd 100644
--- a/drivers/net/ethernet/amd/lance.c
+++ b/drivers/net/ethernet/amd/lance.c
@@ -547,8 +547,8 @@
/* Make certain the data structures used by the LANCE are aligned and DMAble. */
lp = kzalloc(sizeof(*lp), GFP_DMA | GFP_KERNEL);
- if(lp==NULL)
- return -ENODEV;
+ if (!lp)
+ return -ENOMEM;
if (lance_debug > 6) printk(" (#0x%05lx)", (unsigned long)lp);
dev->ml_priv = lp;
lp->name = chipname;
diff --git a/drivers/net/ethernet/arc/emac_main.c b/drivers/net/ethernet/arc/emac_main.c
index abe1eab..6446af1 100644
--- a/drivers/net/ethernet/arc/emac_main.c
+++ b/drivers/net/ethernet/arc/emac_main.c
@@ -163,7 +163,7 @@
struct sk_buff *skb = tx_buff->skb;
unsigned int info = le32_to_cpu(txbd->info);
- if ((info & FOR_EMAC) || !txbd->data)
+ if ((info & FOR_EMAC) || !txbd->data || !skb)
break;
if (unlikely(info & (DROP | DEFR | LTCL | UFLO))) {
@@ -191,6 +191,7 @@
txbd->data = 0;
txbd->info = 0;
+ tx_buff->skb = NULL;
*txbd_dirty = (*txbd_dirty + 1) % TX_BD_NUM;
}
@@ -446,6 +447,9 @@
*last_rx_bd = (*last_rx_bd + 1) % RX_BD_NUM;
}
+ priv->txbd_curr = 0;
+ priv->txbd_dirty = 0;
+
/* Clean Tx BD's */
memset(priv->txbd, 0, TX_RING_SZ);
@@ -514,6 +518,64 @@
}
/**
+ * arc_free_tx_queue - free skb from tx queue
+ * @ndev: Pointer to the network device.
+ *
+ * This function must be called while the EMAC is disabled
+ */
+static void arc_free_tx_queue(struct net_device *ndev)
+{
+ struct arc_emac_priv *priv = netdev_priv(ndev);
+ unsigned int i;
+
+ for (i = 0; i < TX_BD_NUM; i++) {
+ struct arc_emac_bd *txbd = &priv->txbd[i];
+ struct buffer_state *tx_buff = &priv->tx_buff[i];
+
+ if (tx_buff->skb) {
+ dma_unmap_single(&ndev->dev, dma_unmap_addr(tx_buff, addr),
+ dma_unmap_len(tx_buff, len), DMA_TO_DEVICE);
+
+ /* return the sk_buff to system */
+ dev_kfree_skb_irq(tx_buff->skb);
+ }
+
+ txbd->info = 0;
+ txbd->data = 0;
+ tx_buff->skb = NULL;
+ }
+}
+
+/**
+ * arc_free_rx_queue - free skb from rx queue
+ * @ndev: Pointer to the network device.
+ *
+ * This function must be called while the EMAC is disabled
+ */
+static void arc_free_rx_queue(struct net_device *ndev)
+{
+ struct arc_emac_priv *priv = netdev_priv(ndev);
+ unsigned int i;
+
+ for (i = 0; i < RX_BD_NUM; i++) {
+ struct arc_emac_bd *rxbd = &priv->rxbd[i];
+ struct buffer_state *rx_buff = &priv->rx_buff[i];
+
+ if (rx_buff->skb) {
+ dma_unmap_single(&ndev->dev, dma_unmap_addr(rx_buff, addr),
+ dma_unmap_len(rx_buff, len), DMA_FROM_DEVICE);
+
+ /* return the sk_buff to system */
+ dev_kfree_skb_irq(rx_buff->skb);
+ }
+
+ rxbd->info = 0;
+ rxbd->data = 0;
+ rx_buff->skb = NULL;
+ }
+}
+
+/**
* arc_emac_stop - Close the network device.
* @ndev: Pointer to the network device.
*
@@ -534,6 +596,10 @@
/* Disable EMAC */
arc_reg_clr(priv, R_CTRL, EN_MASK);
+ /* Return the sk_buff to system */
+ arc_free_tx_queue(ndev);
+ arc_free_rx_queue(ndev);
+
return 0;
}
@@ -610,7 +676,6 @@
dma_unmap_addr_set(&priv->tx_buff[*txbd_curr], addr, addr);
dma_unmap_len_set(&priv->tx_buff[*txbd_curr], len, len);
- priv->tx_buff[*txbd_curr].skb = skb;
priv->txbd[*txbd_curr].data = cpu_to_le32(addr);
/* Make sure pointer to data buffer is set */
@@ -620,6 +685,11 @@
*info = cpu_to_le32(FOR_EMAC | FIRST_OR_LAST_MASK | len);
+ /* Make sure info word is set */
+ wmb();
+
+ priv->tx_buff[*txbd_curr].skb = skb;
+
/* Increment index to point to the next BD */
*txbd_curr = (*txbd_curr + 1) % TX_BD_NUM;
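The arc_emac hunks make tx_buff->skb the publication flag between the xmit and reclaim paths: the descriptor's info word is written and ordered with wmb() before skb is stored, and the reclaim loop now also bails out on a NULL skb, so a descriptor can never be reclaimed half-built. The same release/acquire shape, sketched with C11 atomics standing in for the kernel barriers (illustrative only):

	#include <stdatomic.h>
	#include <stddef.h>

	struct slot {
		unsigned int info;     /* word the "hardware" consumes */
		_Atomic(void *) skb;   /* NULL until fully published   */
	};

	static void publish(struct slot *s, void *skb, unsigned int info)
	{
		s->info = info;
		/* release: info is visible before skb turns non-NULL */
		atomic_store_explicit(&s->skb, skb, memory_order_release);
	}

	static void *try_reclaim(struct slot *s)
	{
		/* acquire pairs with the release in publish();
		 * a NULL result means the slot is not ours to free yet */
		return atomic_load_explicit(&s->skb, memory_order_acquire);
	}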
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
index d946bba..1fb8010 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c
@@ -6185,26 +6185,80 @@
shift -= 4;
digit = ((num & mask) >> shift);
if (digit == 0 && remove_leading_zeros) {
- mask = mask >> 4;
- continue;
- } else if (digit < 0xa)
- *str_ptr = digit + '0';
- else
- *str_ptr = digit - 0xa + 'a';
- remove_leading_zeros = 0;
- str_ptr++;
- (*len)--;
+ *str_ptr = '0';
+ } else {
+ if (digit < 0xa)
+ *str_ptr = digit + '0';
+ else
+ *str_ptr = digit - 0xa + 'a';
+
+ remove_leading_zeros = 0;
+ str_ptr++;
+ (*len)--;
+ }
mask = mask >> 4;
if (shift == 4*4) {
+ if (remove_leading_zeros) {
+ str_ptr++;
+ (*len)--;
+ }
*str_ptr = '.';
str_ptr++;
(*len)--;
remove_leading_zeros = 1;
}
}
+ if (remove_leading_zeros)
+ (*len)--;
return 0;
}
+static int bnx2x_3_seq_format_ver(u32 num, u8 *str, u16 *len)
+{
+ u8 *str_ptr = str;
+ u32 mask = 0x00f00000;
+ u8 shift = 8*3;
+ u8 digit;
+ u8 remove_leading_zeros = 1;
+
+ if (*len < 10) {
+ /* Need more than 10 chars for this format */
+ *str_ptr = '\0';
+ (*len)--;
+ return -EINVAL;
+ }
+
+ while (shift > 0) {
+ shift -= 4;
+ digit = ((num & mask) >> shift);
+ if (digit == 0 && remove_leading_zeros) {
+ *str_ptr = '0';
+ } else {
+ if (digit < 0xa)
+ *str_ptr = digit + '0';
+ else
+ *str_ptr = digit - 0xa + 'a';
+
+ remove_leading_zeros = 0;
+ str_ptr++;
+ (*len)--;
+ }
+ mask = mask >> 4;
+ if ((shift == 4*4) || (shift == 4*2)) {
+ if (remove_leading_zeros) {
+ str_ptr++;
+ (*len)--;
+ }
+ *str_ptr = '.';
+ str_ptr++;
+ (*len)--;
+ remove_leading_zeros = 1;
+ }
+ }
+ if (remove_leading_zeros)
+ (*len)--;
+ return 0;
+}
static int bnx2x_null_format_ver(u32 spirom_ver, u8 *str, u16 *len)
{
@@ -9677,8 +9731,9 @@
if (bnx2x_is_8483x_8485x(phy)) {
bnx2x_cl45_read(bp, phy, MDIO_CTL_DEVAD, 0x400f, &fw_ver1);
- bnx2x_save_spirom_version(bp, port, fw_ver1 & 0xfff,
- phy->ver_addr);
+ if (phy->type != PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858)
+ fw_ver1 &= 0xfff;
+ bnx2x_save_spirom_version(bp, port, fw_ver1, phy->ver_addr);
} else {
/* For 32-bit registers in 848xx, access via MDIO2ARM i/f. */
/* (1) set reg 0xc200_0014(SPI_BRIDGE_CTRL_2) to 0x03000000 */
@@ -9732,16 +9787,32 @@
static void bnx2x_848xx_set_led(struct bnx2x *bp,
struct bnx2x_phy *phy)
{
- u16 val, offset, i;
+ u16 val, led3_blink_rate, offset, i;
static struct bnx2x_reg_set reg_set[] = {
{MDIO_PMA_DEVAD, MDIO_PMA_REG_8481_LED1_MASK, 0x0080},
{MDIO_PMA_DEVAD, MDIO_PMA_REG_8481_LED2_MASK, 0x0018},
{MDIO_PMA_DEVAD, MDIO_PMA_REG_8481_LED3_MASK, 0x0006},
- {MDIO_PMA_DEVAD, MDIO_PMA_REG_8481_LED3_BLINK, 0x0000},
{MDIO_PMA_DEVAD, MDIO_PMA_REG_84823_CTL_SLOW_CLK_CNT_HIGH,
MDIO_PMA_REG_84823_BLINK_RATE_VAL_15P9HZ},
{MDIO_AN_DEVAD, 0xFFFB, 0xFFFD}
};
+
+ if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858) {
+ /* Set LED5 source */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED5_MASK,
+ 0x90);
+ led3_blink_rate = 0x000f;
+ } else {
+ led3_blink_rate = 0x0000;
+ }
+ /* Set LED3 BLINK */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED3_BLINK,
+ led3_blink_rate);
+
/* PHYC_CTL_LED_CTL */
bnx2x_cl45_read(bp, phy,
MDIO_PMA_DEVAD,
@@ -9749,6 +9820,9 @@
val &= 0xFE00;
val |= 0x0092;
+ if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858)
+ val |= 2 << 12; /* LED5 ON based on source */
+
bnx2x_cl45_write(bp, phy,
MDIO_PMA_DEVAD,
MDIO_PMA_REG_8481_LINK_SIGNAL, val);
@@ -9762,10 +9836,17 @@
else
offset = MDIO_PMA_REG_84823_CTL_LED_CTL_1;
- /* stretch_en for LED3*/
+ if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858)
+ val = MDIO_PMA_REG_84858_ALLOW_GPHY_ACT |
+ MDIO_PMA_REG_84823_LED3_STRETCH_EN;
+ else
+ val = MDIO_PMA_REG_84823_LED3_STRETCH_EN;
+
+ /* stretch_en for LEDs */
bnx2x_cl45_read_or_write(bp, phy,
- MDIO_PMA_DEVAD, offset,
- MDIO_PMA_REG_84823_LED3_STRETCH_EN);
+ MDIO_PMA_DEVAD,
+ offset,
+ val);
}
static void bnx2x_848xx_specific_func(struct bnx2x_phy *phy,
@@ -9775,7 +9856,7 @@
struct bnx2x *bp = params->bp;
switch (action) {
case PHY_INIT:
- if (!bnx2x_is_8483x_8485x(phy)) {
+ if (bnx2x_is_8483x_8485x(phy)) {
/* Save spirom version */
bnx2x_save_848xx_spirom_version(phy, bp, params->port);
}
@@ -10036,15 +10117,20 @@
static int bnx2x_84833_cmd_hdlr(struct bnx2x_phy *phy,
struct link_params *params, u16 fw_cmd,
- u16 cmd_args[], int argc)
+ u16 cmd_args[], int argc, int process)
{
int idx;
u16 val;
struct bnx2x *bp = params->bp;
- /* Write CMD_OPEN_OVERRIDE to STATUS reg */
- bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD,
- MDIO_848xx_CMD_HDLR_STATUS,
- PHY84833_STATUS_CMD_OPEN_OVERRIDE);
+ int rc = 0;
+
+ if (process == PHY84833_MB_PROCESS2) {
+ /* Write CMD_OPEN_OVERRIDE to STATUS reg */
+ bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD,
+ MDIO_848xx_CMD_HDLR_STATUS,
+ PHY84833_STATUS_CMD_OPEN_OVERRIDE);
+ }
+
for (idx = 0; idx < PHY848xx_CMDHDLR_WAIT; idx++) {
bnx2x_cl45_read(bp, phy, MDIO_CTL_DEVAD,
MDIO_848xx_CMD_HDLR_STATUS, &val);
@@ -10054,15 +10140,27 @@
}
if (idx >= PHY848xx_CMDHDLR_WAIT) {
DP(NETIF_MSG_LINK, "FW cmd: FW not ready.\n");
+ /* If the status is CMD_COMPLETE_PASS or CMD_COMPLETE_ERROR,
+ * clear the status to CMD_CLEAR_COMPLETE
+ */
+ if (val == PHY84833_STATUS_CMD_COMPLETE_PASS ||
+ val == PHY84833_STATUS_CMD_COMPLETE_ERROR) {
+ bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD,
+ MDIO_848xx_CMD_HDLR_STATUS,
+ PHY84833_STATUS_CMD_CLEAR_COMPLETE);
+ }
return -EINVAL;
}
-
- /* Prepare argument(s) and issue command */
- for (idx = 0; idx < argc; idx++) {
- bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD,
- MDIO_848xx_CMD_HDLR_DATA1 + idx,
- cmd_args[idx]);
+ if (process == PHY84833_MB_PROCESS1 ||
+ process == PHY84833_MB_PROCESS2) {
+ /* Prepare argument(s) */
+ for (idx = 0; idx < argc; idx++) {
+ bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD,
+ MDIO_848xx_CMD_HDLR_DATA1 + idx,
+ cmd_args[idx]);
+ }
}
+
bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD,
MDIO_848xx_CMD_HDLR_COMMAND, fw_cmd);
for (idx = 0; idx < PHY848xx_CMDHDLR_WAIT; idx++) {
@@ -10076,24 +10174,30 @@
if ((idx >= PHY848xx_CMDHDLR_WAIT) ||
(val == PHY84833_STATUS_CMD_COMPLETE_ERROR)) {
DP(NETIF_MSG_LINK, "FW cmd failed.\n");
- return -EINVAL;
+ rc = -EINVAL;
}
- /* Gather returning data */
- for (idx = 0; idx < argc; idx++) {
- bnx2x_cl45_read(bp, phy, MDIO_CTL_DEVAD,
- MDIO_848xx_CMD_HDLR_DATA1 + idx,
- &cmd_args[idx]);
+ if (process == PHY84833_MB_PROCESS3 && rc == 0) {
+ /* Gather returning data */
+ for (idx = 0; idx < argc; idx++) {
+ bnx2x_cl45_read(bp, phy, MDIO_CTL_DEVAD,
+ MDIO_848xx_CMD_HDLR_DATA1 + idx,
+ &cmd_args[idx]);
+ }
}
- bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD,
- MDIO_848xx_CMD_HDLR_STATUS,
- PHY84833_STATUS_CMD_CLEAR_COMPLETE);
- return 0;
+ if (val == PHY84833_STATUS_CMD_COMPLETE_ERROR ||
+ val == PHY84833_STATUS_CMD_COMPLETE_PASS) {
+ bnx2x_cl45_write(bp, phy, MDIO_CTL_DEVAD,
+ MDIO_848xx_CMD_HDLR_STATUS,
+ PHY84833_STATUS_CMD_CLEAR_COMPLETE);
+ }
+ return rc;
}
static int bnx2x_848xx_cmd_hdlr(struct bnx2x_phy *phy,
struct link_params *params,
u16 fw_cmd,
- u16 cmd_args[], int argc)
+ u16 cmd_args[], int argc,
+ int process)
{
struct bnx2x *bp = params->bp;
@@ -10106,7 +10210,7 @@
argc);
} else {
return bnx2x_84833_cmd_hdlr(phy, params, fw_cmd, cmd_args,
- argc);
+ argc, process);
}
}
@@ -10133,7 +10237,7 @@
status = bnx2x_848xx_cmd_hdlr(phy, params,
PHY848xx_CMD_SET_PAIR_SWAP, data,
- PHY848xx_CMDHDLR_MAX_ARGS);
+ 2, PHY84833_MB_PROCESS2);
if (status == 0)
DP(NETIF_MSG_LINK, "Pairswap OK, val=0x%x\n", data[1]);
@@ -10222,8 +10326,8 @@
DP(NETIF_MSG_LINK, "Don't Advertise 10GBase-T EEE\n");
/* Prevent Phy from working in EEE and advertising it */
- rc = bnx2x_848xx_cmd_hdlr(phy, params,
- PHY848xx_CMD_SET_EEE_MODE, &cmd_args, 1);
+ rc = bnx2x_848xx_cmd_hdlr(phy, params, PHY848xx_CMD_SET_EEE_MODE,
+ &cmd_args, 1, PHY84833_MB_PROCESS1);
if (rc) {
DP(NETIF_MSG_LINK, "EEE disable failed.\n");
return rc;
@@ -10240,8 +10344,8 @@
struct bnx2x *bp = params->bp;
u16 cmd_args = 1;
- rc = bnx2x_848xx_cmd_hdlr(phy, params,
- PHY848xx_CMD_SET_EEE_MODE, &cmd_args, 1);
+ rc = bnx2x_848xx_cmd_hdlr(phy, params, PHY848xx_CMD_SET_EEE_MODE,
+ &cmd_args, 1, PHY84833_MB_PROCESS1);
if (rc) {
DP(NETIF_MSG_LINK, "EEE enable failed.\n");
return rc;
@@ -10362,7 +10466,7 @@
cmd_args[3] = PHY84833_CONSTANT_LATENCY;
rc = bnx2x_848xx_cmd_hdlr(phy, params,
PHY848xx_CMD_SET_EEE_MODE, cmd_args,
- PHY848xx_CMDHDLR_MAX_ARGS);
+ 4, PHY84833_MB_PROCESS1);
if (rc)
DP(NETIF_MSG_LINK, "Cfg AutogrEEEn failed.\n");
}
@@ -10416,6 +10520,32 @@
vars->eee_status &= ~SHMEM_EEE_SUPPORTED_MASK;
}
+ if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84833) {
+ /* Additional settings for jumbo packets in 1000BASE-T mode */
+ /* Allow rx extended length */
+ bnx2x_cl45_read(bp, phy, MDIO_AN_DEVAD,
+ MDIO_AN_REG_8481_AUX_CTRL, &val);
+ val |= 0x4000;
+ bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD,
+ MDIO_AN_REG_8481_AUX_CTRL, val);
+ /* TX FIFO Elasticity LSB */
+ bnx2x_cl45_read(bp, phy, MDIO_AN_DEVAD,
+ MDIO_AN_REG_8481_1G_100T_EXT_CTRL, &val);
+ val |= 0x1;
+ bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD,
+ MDIO_AN_REG_8481_1G_100T_EXT_CTRL, val);
+ /* TX FIFO Elasticity MSB */
+ /* Enable expansion register 0x46 (Pattern Generator status) */
+ bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD,
+ MDIO_AN_REG_8481_EXPANSION_REG_ACCESS, 0xf46);
+
+ bnx2x_cl45_read(bp, phy, MDIO_AN_DEVAD,
+ MDIO_AN_REG_8481_EXPANSION_REG_RD_RW, &val);
+ val |= 0x4000;
+ bnx2x_cl45_write(bp, phy, MDIO_AN_DEVAD,
+ MDIO_AN_REG_8481_EXPANSION_REG_RD_RW, val);
+ }
+
if (bnx2x_is_8483x_8485x(phy)) {
/* Bring PHY out of super isolate mode as the final step. */
bnx2x_cl45_read_and_write(bp, phy,
@@ -10555,6 +10685,17 @@
return link_up;
}
+static int bnx2x_8485x_format_ver(u32 raw_ver, u8 *str, u16 *len)
+{
+ int status = 0;
+ u32 num;
+
+ num = ((raw_ver & 0xF80) >> 7) << 16 | ((raw_ver & 0x7F) << 8) |
+ ((raw_ver & 0xF000) >> 12);
+ status = bnx2x_3_seq_format_ver(num, str, len);
+ return status;
+}
+
static int bnx2x_848xx_format_ver(u32 raw_ver, u8 *str, u16 *len)
{
int status = 0;
@@ -10651,10 +10792,25 @@
0x0);
} else {
+ /* LED 1 OFF */
bnx2x_cl45_write(bp, phy,
MDIO_PMA_DEVAD,
MDIO_PMA_REG_8481_LED1_MASK,
0x0);
+
+ if (phy->type ==
+ PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858) {
+ /* LED 2 OFF */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED2_MASK,
+ 0x0);
+ /* LED 3 OFF */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED3_MASK,
+ 0x0);
+ }
}
break;
case LED_MODE_FRONT_PANEL_OFF:
@@ -10713,6 +10869,19 @@
MDIO_PMA_REG_8481_SIGNAL_MASK,
0x0);
}
+ if (phy->type ==
+ PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858) {
+ /* LED 2 OFF */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED2_MASK,
+ 0x0);
+ /* LED 3 OFF */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED3_MASK,
+ 0x0);
+ }
}
break;
case LED_MODE_ON:
@@ -10776,6 +10945,25 @@
params->port*4,
NIG_MASK_MI_INT);
}
+ }
+ if (phy->type ==
+ PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858) {
+ /* Tell LED3 to constant on */
+ bnx2x_cl45_read(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LINK_SIGNAL,
+ &val);
+ val &= ~(7<<6);
+ val |= (2<<6); /* A83B[8:6]= 2 */
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LINK_SIGNAL,
+ val);
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED3_MASK,
+ 0x20);
+ } else {
bnx2x_cl45_write(bp, phy,
MDIO_PMA_DEVAD,
MDIO_PMA_REG_8481_SIGNAL_MASK,
@@ -10854,6 +11042,17 @@
MDIO_PMA_REG_8481_LINK_SIGNAL,
val);
if (phy->type ==
+ PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84858) {
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED2_MASK,
+ 0x18);
+ bnx2x_cl45_write(bp, phy,
+ MDIO_PMA_DEVAD,
+ MDIO_PMA_REG_8481_LED3_MASK,
+ 0x06);
+ }
+ if (phy->type ==
PORT_HW_CFG_XGXS_EXT_PHY_TYPE_BCM84834) {
/* Restore LED4 source to external link,
* and re-enable interrupts.
@@ -11982,7 +12181,7 @@
.read_status = (read_status_t)bnx2x_848xx_read_status,
.link_reset = (link_reset_t)bnx2x_848x3_link_reset,
.config_loopback = (config_loopback_t)NULL,
- .format_fw_ver = (format_fw_ver_t)bnx2x_848xx_format_ver,
+ .format_fw_ver = (format_fw_ver_t)bnx2x_8485x_format_ver,
.hw_reset = (hw_reset_t)bnx2x_84833_hw_reset_phy,
.set_link_led = (set_link_led_t)bnx2x_848xx_set_link_led,
.phy_specific_func = (phy_specific_func_t)bnx2x_848xx_specific_func
@@ -13807,8 +14006,10 @@
if (CHIP_IS_E3(bp)) {
struct bnx2x_phy *phy = &params->phy[INT_PHY];
bnx2x_set_aer_mmd(params, phy);
- if ((phy->supported & SUPPORTED_20000baseKR2_Full) &&
- (phy->speed_cap_mask & PORT_HW_CFG_SPEED_CAPABILITY_D0_20G))
+ if (((phy->req_line_speed == SPEED_AUTO_NEG) &&
+ (phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_20G)) ||
+ (phy->req_line_speed == SPEED_20000))
bnx2x_check_kr2_wa(params, vars, phy);
bnx2x_check_over_curr(params, vars);
if (vars->rx_tx_asic_rst)
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
index 4dead49..a43dea2 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_reg.h
@@ -7296,6 +7296,8 @@
#define MDIO_PMA_REG_84823_CTL_LED_CTL_1 0xa8e3
#define MDIO_PMA_REG_84833_CTL_LED_CTL_1 0xa8ec
#define MDIO_PMA_REG_84823_LED3_STRETCH_EN 0x0080
+/* BCM84858 only */
+#define MDIO_PMA_REG_84858_ALLOW_GPHY_ACT 0x8000
/* BCM84833 only */
#define MDIO_84833_TOP_CFG_FW_REV 0x400f
@@ -7337,6 +7339,10 @@
#define PHY84833_STATUS_CMD_NOT_OPEN_FOR_CMDS 0x0040
#define PHY84833_STATUS_CMD_CLEAR_COMPLETE 0x0080
#define PHY84833_STATUS_CMD_OPEN_OVERRIDE 0xa5a5
+/* Mailbox Process */
+#define PHY84833_MB_PROCESS1 1
+#define PHY84833_MB_PROCESS2 2
+#define PHY84833_MB_PROCESS3 3
/* Mailbox status set used by 84858 only */
#define PHY84858_STATUS_CMD_RECEIVED 0x0001
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 5dc89e5..8ab000d 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -69,7 +69,7 @@
#define BNXT_RX_DMA_OFFSET NET_SKB_PAD
#define BNXT_RX_COPY_THRESH 256
-#define BNXT_TX_PUSH_THRESH 92
+#define BNXT_TX_PUSH_THRESH 164
enum board_idx {
BCM57301,
@@ -223,11 +223,12 @@
}
if (free_size == bp->tx_ring_size && length <= bp->tx_push_thresh) {
- struct tx_push_bd *push = txr->tx_push;
- struct tx_bd *tx_push = &push->txbd1;
- struct tx_bd_ext *tx_push1 = &push->txbd2;
- void *pdata = tx_push1 + 1;
- int j;
+ struct tx_push_buffer *tx_push_buf = txr->tx_push;
+ struct tx_push_bd *tx_push = &tx_push_buf->push_bd;
+ struct tx_bd_ext *tx_push1 = &tx_push->txbd2;
+ void *pdata = tx_push_buf->data;
+ u64 *end;
+ int j, push_len;
/* Set COAL_NOW to be ready quickly for the next push */
tx_push->tx_bd_len_flags_type =
@@ -247,6 +248,9 @@
tx_push1->tx_bd_cfa_meta = cpu_to_le32(vlan_tag_flags);
tx_push1->tx_bd_cfa_action = cpu_to_le32(cfa_action);
+ end = PTR_ALIGN(pdata + length + 1, 8) - 1;
+ *end = 0;
+
skb_copy_from_linear_data(skb, pdata, len);
pdata += len;
for (j = 0; j < last_frag; j++) {
@@ -261,22 +265,29 @@
pdata += skb_frag_size(frag);
}
- memcpy(txbd, tx_push, sizeof(*txbd));
+ txbd->tx_bd_len_flags_type = tx_push->tx_bd_len_flags_type;
+ txbd->tx_bd_haddr = txr->data_mapping;
prod = NEXT_TX(prod);
txbd = &txr->tx_desc_ring[TX_RING(prod)][TX_IDX(prod)];
memcpy(txbd, tx_push1, sizeof(*txbd));
prod = NEXT_TX(prod);
- push->doorbell =
+ tx_push->doorbell =
cpu_to_le32(DB_KEY_TX_PUSH | DB_LONG_TX_PUSH | prod);
txr->tx_prod = prod;
netdev_tx_sent_queue(txq, skb->len);
- __iowrite64_copy(txr->tx_doorbell, push,
- (length + sizeof(*push) + 8) / 8);
+ push_len = (length + sizeof(*tx_push) + 7) / 8;
+ if (push_len > 16) {
+ __iowrite64_copy(txr->tx_doorbell, tx_push_buf, 16);
+ __iowrite64_copy(txr->tx_doorbell + 4, tx_push_buf + 1,
+ push_len - 16);
+ } else {
+ __iowrite64_copy(txr->tx_doorbell, tx_push_buf,
+ push_len);
+ }
tx_buf->is_push = 1;
-
goto tx_done;
}
@@ -1753,7 +1764,7 @@
push_size = L1_CACHE_ALIGN(sizeof(struct tx_push_bd) +
bp->tx_push_thresh);
- if (push_size > 128) {
+ if (push_size > 256) {
push_size = 0;
bp->tx_push_thresh = 0;
}
@@ -1772,7 +1783,6 @@
return rc;
if (bp->tx_push_size) {
- struct tx_bd *txbd;
dma_addr_t mapping;
/* One pre-allocated DMA buffer to backup
@@ -1786,13 +1796,11 @@
if (!txr->tx_push)
return -ENOMEM;
- txbd = &txr->tx_push->txbd1;
-
mapping = txr->tx_push_mapping +
sizeof(struct tx_push_bd);
- txbd->tx_bd_haddr = cpu_to_le64(mapping);
+ txr->data_mapping = cpu_to_le64(mapping);
- memset(txbd + 1, 0, sizeof(struct tx_bd_ext));
+ memset(txr->tx_push, 0, sizeof(struct tx_push_bd));
}
ring->queue_id = bp->q_info[j].queue_id;
if (i % bp->tx_nr_rings_per_tc == (bp->tx_nr_rings_per_tc - 1))
@@ -4546,20 +4554,18 @@
if (!(link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL) &&
link_info->force_pause_setting != link_info->req_flow_ctrl)
update_pause = true;
- if (link_info->req_duplex != link_info->duplex_setting)
- update_link = true;
if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
if (BNXT_AUTO_MODE(link_info->auto_mode))
update_link = true;
if (link_info->req_link_speed != link_info->force_link_speed)
update_link = true;
+ if (link_info->req_duplex != link_info->duplex_setting)
+ update_link = true;
} else {
if (link_info->auto_mode == BNXT_LINK_AUTO_NONE)
update_link = true;
if (link_info->advertising != link_info->auto_link_speeds)
update_link = true;
- if (link_info->req_link_speed != link_info->auto_link_speed)
- update_link = true;
}
if (update_link)
@@ -4636,7 +4642,7 @@
if (link_re_init) {
rc = bnxt_update_phy_setting(bp);
if (rc)
- goto open_err;
+ netdev_warn(bp->dev, "failed to update phy settings\n");
}
if (irq_re_init) {
@@ -4654,6 +4660,7 @@
/* Enable TX queues */
bnxt_tx_enable(bp);
mod_timer(&bp->timer, jiffies + bp->current_interval);
+ bnxt_update_link(bp, true);
return 0;
@@ -5670,22 +5677,16 @@
}
/*initialize the ethool setting copy with NVM settings */
- if (BNXT_AUTO_MODE(link_info->auto_mode))
- link_info->autoneg |= BNXT_AUTONEG_SPEED;
-
- if (link_info->auto_pause_setting & BNXT_LINK_PAUSE_BOTH) {
- if (link_info->auto_pause_setting == BNXT_LINK_PAUSE_BOTH)
- link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
+ if (BNXT_AUTO_MODE(link_info->auto_mode)) {
+ link_info->autoneg = BNXT_AUTONEG_SPEED |
+ BNXT_AUTONEG_FLOW_CTRL;
+ link_info->advertising = link_info->auto_link_speeds;
link_info->req_flow_ctrl = link_info->auto_pause_setting;
- } else if (link_info->force_pause_setting & BNXT_LINK_PAUSE_BOTH) {
+ } else {
+ link_info->req_link_speed = link_info->force_link_speed;
+ link_info->req_duplex = link_info->duplex_setting;
link_info->req_flow_ctrl = link_info->force_pause_setting;
}
- link_info->req_duplex = link_info->duplex_setting;
- if (link_info->autoneg & BNXT_AUTONEG_SPEED)
- link_info->req_link_speed = link_info->auto_link_speed;
- else
- link_info->req_link_speed = link_info->force_link_speed;
- link_info->advertising = link_info->auto_link_speeds;
snprintf(phy_ver, PHY_VER_STR_LEN, " ph %d.%d.%d",
link_info->phy_ver[0],
link_info->phy_ver[1],
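In the bnxt push-mode hunks, push_len = (length + sizeof(*tx_push) + 7) / 8 rounds the header plus frame up to 64-bit words, and anything over 16 words is split across the two doorbell windows, which is also why the threshold and backing buffer were enlarged. A worked example of the rounding (the 164-byte figure mirrors the new BNXT_TX_PUSH_THRESH; the exact header size is left aside):

	/* Round a byte count up to 8-byte doorbell words. */
	static int push_words(int bytes)
	{
		return (bytes + 7) / 8;
	}
	/* push_words(100) == 13: fits the 16-word window in one copy.
	 * push_words(164) == 21: 16 words first, then 5 more at +4. */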
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 8af3ca8..2be51b3 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -411,8 +411,8 @@
#define BNXT_NUM_TESTS(bp) 0
-#define BNXT_DEFAULT_RX_RING_SIZE 1023
-#define BNXT_DEFAULT_TX_RING_SIZE 512
+#define BNXT_DEFAULT_RX_RING_SIZE 511
+#define BNXT_DEFAULT_TX_RING_SIZE 511
#define MAX_TPA 64
@@ -523,10 +523,16 @@
struct tx_push_bd {
__le32 doorbell;
- struct tx_bd txbd1;
+ __le32 tx_bd_len_flags_type;
+ u32 tx_bd_opaque;
struct tx_bd_ext txbd2;
};
+struct tx_push_buffer {
+ struct tx_push_bd push_bd;
+ u32 data[25];
+};
+
struct bnxt_tx_ring_info {
struct bnxt_napi *bnapi;
u16 tx_prod;
@@ -538,8 +544,9 @@
dma_addr_t tx_desc_mapping[MAX_TX_PAGES];
- struct tx_push_bd *tx_push;
+ struct tx_push_buffer *tx_push;
dma_addr_t tx_push_mapping;
+ __le64 data_mapping;
#define BNXT_DEV_STATE_CLOSING 0x1
u32 dev_state;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 922b898..3238817 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -486,15 +486,8 @@
speed_mask |= SUPPORTED_2500baseX_Full;
if (fw_speeds & BNXT_LINK_SPEED_MSK_10GB)
speed_mask |= SUPPORTED_10000baseT_Full;
- /* TODO: support 25GB, 50GB with different cable type */
- if (fw_speeds & BNXT_LINK_SPEED_MSK_20GB)
- speed_mask |= SUPPORTED_20000baseMLD2_Full |
- SUPPORTED_20000baseKR2_Full;
if (fw_speeds & BNXT_LINK_SPEED_MSK_40GB)
- speed_mask |= SUPPORTED_40000baseKR4_Full |
- SUPPORTED_40000baseCR4_Full |
- SUPPORTED_40000baseSR4_Full |
- SUPPORTED_40000baseLR4_Full;
+ speed_mask |= SUPPORTED_40000baseCR4_Full;
return speed_mask;
}
@@ -514,15 +507,8 @@
speed_mask |= ADVERTISED_2500baseX_Full;
if (fw_speeds & BNXT_LINK_SPEED_MSK_10GB)
speed_mask |= ADVERTISED_10000baseT_Full;
- /* TODO: how to advertise 20, 25, 40, 50GB with different cable type ?*/
- if (fw_speeds & BNXT_LINK_SPEED_MSK_20GB)
- speed_mask |= ADVERTISED_20000baseMLD2_Full |
- ADVERTISED_20000baseKR2_Full;
if (fw_speeds & BNXT_LINK_SPEED_MSK_40GB)
- speed_mask |= ADVERTISED_40000baseKR4_Full |
- ADVERTISED_40000baseCR4_Full |
- ADVERTISED_40000baseSR4_Full |
- ADVERTISED_40000baseLR4_Full;
+ speed_mask |= ADVERTISED_40000baseCR4_Full;
return speed_mask;
}
@@ -557,11 +543,12 @@
u16 ethtool_speed;
cmd->supported = bnxt_fw_to_ethtool_support_spds(link_info);
+ cmd->supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
if (link_info->auto_link_speeds)
cmd->supported |= SUPPORTED_Autoneg;
- if (BNXT_AUTO_MODE(link_info->auto_mode)) {
+ if (link_info->autoneg) {
cmd->advertising =
bnxt_fw_to_ethtool_advertised_spds(link_info);
cmd->advertising |= ADVERTISED_Autoneg;
@@ -570,28 +557,16 @@
cmd->autoneg = AUTONEG_DISABLE;
cmd->advertising = 0;
}
- if (link_info->auto_pause_setting & BNXT_LINK_PAUSE_BOTH) {
+ if (link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL) {
if ((link_info->auto_pause_setting & BNXT_LINK_PAUSE_BOTH) ==
BNXT_LINK_PAUSE_BOTH) {
cmd->advertising |= ADVERTISED_Pause;
- cmd->supported |= SUPPORTED_Pause;
} else {
cmd->advertising |= ADVERTISED_Asym_Pause;
- cmd->supported |= SUPPORTED_Asym_Pause;
if (link_info->auto_pause_setting &
BNXT_LINK_PAUSE_RX)
cmd->advertising |= ADVERTISED_Pause;
}
- } else if (link_info->force_pause_setting & BNXT_LINK_PAUSE_BOTH) {
- if ((link_info->force_pause_setting & BNXT_LINK_PAUSE_BOTH) ==
- BNXT_LINK_PAUSE_BOTH) {
- cmd->supported |= SUPPORTED_Pause;
- } else {
- cmd->supported |= SUPPORTED_Asym_Pause;
- if (link_info->force_pause_setting &
- BNXT_LINK_PAUSE_RX)
- cmd->supported |= SUPPORTED_Pause;
- }
}
cmd->port = PORT_NONE;
@@ -670,6 +645,9 @@
if (advertising & ADVERTISED_10000baseT_Full)
fw_speed_mask |= BNXT_LINK_SPEED_MSK_10GB;
+ if (advertising & ADVERTISED_40000baseCR4_Full)
+ fw_speed_mask |= BNXT_LINK_SPEED_MSK_40GB;
+
return fw_speed_mask;
}
@@ -729,7 +707,7 @@
speed = ethtool_cmd_speed(cmd);
link_info->req_link_speed = bnxt_get_fw_speed(dev, speed);
link_info->req_duplex = BNXT_LINK_DUPLEX_FULL;
- link_info->autoneg &= ~BNXT_AUTONEG_SPEED;
+ link_info->autoneg = 0;
link_info->advertising = 0;
}
@@ -748,8 +726,7 @@
if (BNXT_VF(bp))
return;
- epause->autoneg = !!(link_info->auto_pause_setting &
- BNXT_LINK_PAUSE_BOTH);
+ epause->autoneg = !!(link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL);
epause->rx_pause = ((link_info->pause & BNXT_LINK_PAUSE_RX) != 0);
epause->tx_pause = ((link_info->pause & BNXT_LINK_PAUSE_TX) != 0);
}
@@ -765,6 +742,9 @@
return rc;
if (epause->autoneg) {
+ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED))
+ return -EINVAL;
+
link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_BOTH;
} else {
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index b15a60d..d7e01a7 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -2445,8 +2445,7 @@
}
/* Link UP/DOWN event */
- if ((priv->hw_params->flags & GENET_HAS_MDIO_INTR) &&
- (priv->irq0_stat & UMAC_IRQ_LINK_EVENT)) {
+ if (priv->irq0_stat & UMAC_IRQ_LINK_EVENT) {
phy_mac_interrupt(priv->phydev,
!!(priv->irq0_stat & UMAC_IRQ_LINK_UP));
priv->irq0_stat &= ~UMAC_IRQ_LINK_EVENT;
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 49eea89..3010080 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -7831,6 +7831,14 @@
return ret;
}
+static bool tg3_tso_bug_gso_check(struct tg3_napi *tnapi, struct sk_buff *skb)
+{
+ /* Check if we will never have enough descriptors,
+ * as gso_segs can be more than current ring size
+ */
+ return skb_shinfo(skb)->gso_segs < tnapi->tx_pending / 3;
+}
+
static netdev_tx_t tg3_start_xmit(struct sk_buff *, struct net_device *);
/* Use GSO to workaround all TSO packets that meet HW bug conditions
@@ -7934,14 +7942,19 @@
* vlan encapsulated.
*/
if (skb->protocol == htons(ETH_P_8021Q) ||
- skb->protocol == htons(ETH_P_8021AD))
- return tg3_tso_bug(tp, tnapi, txq, skb);
+ skb->protocol == htons(ETH_P_8021AD)) {
+ if (tg3_tso_bug_gso_check(tnapi, skb))
+ return tg3_tso_bug(tp, tnapi, txq, skb);
+ goto drop;
+ }
if (!skb_is_gso_v6(skb)) {
if (unlikely((ETH_HLEN + hdr_len) > 80) &&
- tg3_flag(tp, TSO_BUG))
- return tg3_tso_bug(tp, tnapi, txq, skb);
-
+ tg3_flag(tp, TSO_BUG)) {
+ if (tg3_tso_bug_gso_check(tnapi, skb))
+ return tg3_tso_bug(tp, tnapi, txq, skb);
+ goto drop;
+ }
ip_csum = iph->check;
ip_tot_len = iph->tot_len;
iph->check = 0;
@@ -8073,7 +8086,7 @@
if (would_hit_hwbug) {
tg3_tx_skb_unmap(tnapi, tnapi->tx_prod, i);
- if (mss) {
+ if (mss && tg3_tso_bug_gso_check(tnapi, skb)) {
/* If it's a TSO packet, do GSO instead of
* allocating and copying to a large linear SKB
*/
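tg3_tso_bug_gso_check() turns the previously unconditional GSO fallback into a budgeted one: software segmentation yields gso_segs frames, each consuming descriptors, so the workaround only runs when gso_segs stays under a third of the ring, and the packet is dropped otherwise. A small worked example (the ring size is illustrative):

	#include <stdbool.h>

	static bool gso_fits(unsigned int gso_segs, unsigned int tx_pending)
	{
		return gso_segs < tx_pending / 3;
	}
	/* gso_fits(169, 511) -> true; gso_fits(170, 511) -> false */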
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index 8727655..34d269c 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -1683,7 +1683,7 @@
dev_dbg(&oct->pci_dev->dev, "Creating Droq: %d\n", q_no);
/* droq creation and local register settings. */
ret_val = octeon_create_droq(oct, q_no, num_descs, desc_size, app_ctx);
- if (ret_val == -1)
+ if (ret_val < 0)
return ret_val;
if (ret_val == 1) {
@@ -2524,7 +2524,7 @@
octeon_swap_8B_data(&resp->timestamp, 1);
- if (unlikely((skb_shinfo(skb)->tx_flags | SKBTX_IN_PROGRESS) != 0)) {
+ if (unlikely((skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS) != 0)) {
struct skb_shared_hwtstamps ts;
u64 ns = resp->timestamp;
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_droq.c b/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
index 4dba86e..174072b 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_droq.c
@@ -983,5 +983,5 @@
create_droq_fail:
octeon_delete_droq(oct, q_no);
- return -1;
+ return -ENOMEM;
}
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
index c24cb2a..a009bc3 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -574,8 +574,7 @@
static void nicvf_rcv_pkt_handler(struct net_device *netdev,
struct napi_struct *napi,
- struct cmp_queue *cq,
- struct cqe_rx_t *cqe_rx, int cqe_type)
+ struct cqe_rx_t *cqe_rx)
{
struct sk_buff *skb;
struct nicvf *nic = netdev_priv(netdev);
@@ -591,7 +590,7 @@
}
/* Check for errors */
- err = nicvf_check_cqe_rx_errs(nic, cq, cqe_rx);
+ err = nicvf_check_cqe_rx_errs(nic, cqe_rx);
if (err && !cqe_rx->rb_cnt)
return;
@@ -682,8 +681,7 @@
cq_idx, cq_desc->cqe_type);
switch (cq_desc->cqe_type) {
case CQE_TYPE_RX:
- nicvf_rcv_pkt_handler(netdev, napi, cq,
- cq_desc, CQE_TYPE_RX);
+ nicvf_rcv_pkt_handler(netdev, napi, cq_desc);
work_done++;
break;
case CQE_TYPE_SEND:
@@ -1125,7 +1123,6 @@
/* Clear multiqset info */
nic->pnicvf = nic;
- nic->sqs_count = 0;
return 0;
}
@@ -1354,6 +1351,9 @@
drv_stats->tx_frames_ok = stats->tx_ucast_frames_ok +
stats->tx_bcast_frames_ok +
stats->tx_mcast_frames_ok;
+ drv_stats->rx_frames_ok = stats->rx_ucast_frames +
+ stats->rx_bcast_frames +
+ stats->rx_mcast_frames;
drv_stats->rx_drops = stats->rx_drop_red +
stats->rx_drop_overrun;
drv_stats->tx_drops = stats->tx_drops;
@@ -1538,6 +1538,9 @@
nicvf_send_vf_struct(nic);
+ if (!pass1_silicon(nic->pdev))
+ nic->hw_tso = true;
+
/* Check if this VF is in QS only mode */
if (nic->sqs_mode)
return 0;
@@ -1557,9 +1560,6 @@
netdev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;
- if (!pass1_silicon(nic->pdev))
- nic->hw_tso = true;
-
netdev->netdev_ops = &nicvf_netdev_ops;
netdev->watchdog_timeo = NICVF_TX_TIMEOUT;
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
index d0d1b54..767347b 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
@@ -1329,16 +1329,12 @@
}
/* Check for errors in the receive cmp.queue entry */
-int nicvf_check_cqe_rx_errs(struct nicvf *nic,
- struct cmp_queue *cq, struct cqe_rx_t *cqe_rx)
+int nicvf_check_cqe_rx_errs(struct nicvf *nic, struct cqe_rx_t *cqe_rx)
{
struct nicvf_hw_stats *stats = &nic->hw_stats;
- struct nicvf_drv_stats *drv_stats = &nic->drv_stats;
- if (!cqe_rx->err_level && !cqe_rx->err_opcode) {
- drv_stats->rx_frames_ok++;
+ if (!cqe_rx->err_level && !cqe_rx->err_opcode)
return 0;
- }
if (netif_msg_rx_err(nic))
netdev_err(nic->netdev,
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
index c5030a7..6673e11 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
@@ -338,8 +338,7 @@
/* Stats */
void nicvf_update_rq_stats(struct nicvf *nic, int rq_idx);
void nicvf_update_sq_stats(struct nicvf *nic, int sq_idx);
-int nicvf_check_cqe_rx_errs(struct nicvf *nic,
- struct cmp_queue *cq, struct cqe_rx_t *cqe_rx);
+int nicvf_check_cqe_rx_errs(struct nicvf *nic, struct cqe_rx_t *cqe_rx);
int nicvf_check_cqe_tx_errs(struct nicvf *nic,
struct cmp_queue *cq, struct cqe_send_t *cqe_tx);
#endif /* NICVF_QUEUES_H */
diff --git a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
index ee04caa..a89721f 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/t3_hw.c
@@ -681,6 +681,24 @@
return t3_seeprom_write(adapter, EEPROM_STAT_ADDR, enable ? 0xc : 0);
}
+static int vpdstrtouint(char *s, int len, unsigned int base, unsigned int *val)
+{
+ char tok[len + 1];
+
+ memcpy(tok, s, len);
+ tok[len] = 0;
+ return kstrtouint(strim(tok), base, val);
+}
+
+static int vpdstrtou16(char *s, int len, unsigned int base, u16 *val)
+{
+ char tok[len + 1];
+
+ memcpy(tok, s, len);
+ tok[len] = 0;
+ return kstrtou16(strim(tok), base, val);
+}
+
/**
* get_vpd_params - read VPD parameters from VPD EEPROM
* @adapter: adapter to read
@@ -709,19 +727,19 @@
return ret;
}
- ret = kstrtouint(vpd.cclk_data, 10, &p->cclk);
+ ret = vpdstrtouint(vpd.cclk_data, vpd.cclk_len, 10, &p->cclk);
if (ret)
return ret;
- ret = kstrtouint(vpd.mclk_data, 10, &p->mclk);
+ ret = vpdstrtouint(vpd.mclk_data, vpd.mclk_len, 10, &p->mclk);
if (ret)
return ret;
- ret = kstrtouint(vpd.uclk_data, 10, &p->uclk);
+ ret = vpdstrtouint(vpd.uclk_data, vpd.uclk_len, 10, &p->uclk);
if (ret)
return ret;
- ret = kstrtouint(vpd.mdc_data, 10, &p->mdc);
+ ret = vpdstrtouint(vpd.mdc_data, vpd.mdc_len, 10, &p->mdc);
if (ret)
return ret;
- ret = kstrtouint(vpd.mt_data, 10, &p->mem_timing);
+ ret = vpdstrtouint(vpd.mt_data, vpd.mt_len, 10, &p->mem_timing);
if (ret)
return ret;
memcpy(p->sn, vpd.sn_data, SERNUM_LEN);
@@ -733,10 +751,12 @@
} else {
p->port_type[0] = hex_to_bin(vpd.port0_data[0]);
p->port_type[1] = hex_to_bin(vpd.port1_data[0]);
- ret = kstrtou16(vpd.xaui0cfg_data, 16, &p->xauicfg[0]);
+ ret = vpdstrtou16(vpd.xaui0cfg_data, vpd.xaui0cfg_len, 16,
+ &p->xauicfg[0]);
if (ret)
return ret;
- ret = kstrtou16(vpd.xaui1cfg_data, 16, &p->xauicfg[1]);
+ ret = vpdstrtou16(vpd.xaui1cfg_data, vpd.xaui1cfg_len, 16,
+ &p->xauicfg[1]);
if (ret)
return ret;
}
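The cxgb3 helpers exist because VPD fields are fixed-width and not NUL-terminated, so calling kstrtouint() on them read past the field. Copying into a local buffer, terminating it, and trimming before parsing fixes that. A userspace equivalent, with a crude trim standing in for the kernel's strim() (an assumption, not the driver's code):

	#include <ctype.h>
	#include <stdlib.h>
	#include <string.h>

	/* Parse a fixed-width, unterminated decimal field. */
	static int fieldtouint(const char *s, int len, unsigned int *val)
	{
		char tok[32];
		int i = 0, j;

		if (len < 0 || len >= (int)sizeof(tok))
			return -1;
		memcpy(tok, s, len);
		tok[len] = '\0';
		for (j = len - 1; j >= 0 && isspace((unsigned char)tok[j]); j--)
			tok[j] = '\0';          /* strip trailing blanks */
		while (isspace((unsigned char)tok[i]))
			i++;                    /* skip leading blanks   */
		*val = (unsigned int)strtoul(tok + i, NULL, 10);
		return 0;
	}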
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h b/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
index a8dda63..06bc2d2 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_pci_id_tbl.h
@@ -165,6 +165,7 @@
CH_PCI_ID_TABLE_FENTRY(0x5098), /* Custom 2x40G QSFP */
CH_PCI_ID_TABLE_FENTRY(0x5099), /* Custom 2x40G QSFP */
CH_PCI_ID_TABLE_FENTRY(0x509a), /* Custom T520-CR */
+ CH_PCI_ID_TABLE_FENTRY(0x509b), /* Custom T540-CR LOM */
/* T6 adapters:
*/
diff --git a/drivers/net/ethernet/cisco/enic/enic.h b/drivers/net/ethernet/cisco/enic/enic.h
index 1671fa3..7ba6d53 100644
--- a/drivers/net/ethernet/cisco/enic/enic.h
+++ b/drivers/net/ethernet/cisco/enic/enic.h
@@ -33,7 +33,7 @@
#define DRV_NAME "enic"
#define DRV_DESCRIPTION "Cisco VIC Ethernet NIC Driver"
-#define DRV_VERSION "2.3.0.12"
+#define DRV_VERSION "2.3.0.20"
#define DRV_COPYRIGHT "Copyright 2008-2013 Cisco Systems, Inc"
#define ENIC_BARS_MAX 6
diff --git a/drivers/net/ethernet/cisco/enic/vnic_dev.c b/drivers/net/ethernet/cisco/enic/vnic_dev.c
index 1ffd105..1fdf5fe 100644
--- a/drivers/net/ethernet/cisco/enic/vnic_dev.c
+++ b/drivers/net/ethernet/cisco/enic/vnic_dev.c
@@ -298,7 +298,8 @@
int wait)
{
struct devcmd2_controller *dc2c = vdev->devcmd2;
- struct devcmd2_result *result = dc2c->result + dc2c->next_result;
+ struct devcmd2_result *result;
+ u8 color;
unsigned int i;
int delay, err;
u32 fetch_index, new_posted;
@@ -336,13 +337,17 @@
if (dc2c->cmd_ring[posted].flags & DEVCMD2_FNORESULT)
return 0;
+ result = dc2c->result + dc2c->next_result;
+ color = dc2c->color;
+
+ dc2c->next_result++;
+ if (dc2c->next_result == dc2c->result_size) {
+ dc2c->next_result = 0;
+ dc2c->color = dc2c->color ? 0 : 1;
+ }
+
for (delay = 0; delay < wait; delay++) {
- if (result->color == dc2c->color) {
- dc2c->next_result++;
- if (dc2c->next_result == dc2c->result_size) {
- dc2c->next_result = 0;
- dc2c->color = dc2c->color ? 0 : 1;
- }
+ if (result->color == color) {
if (result->error) {
err = result->error;
if (err != ERR_ECMDUNKNOWN ||
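The vnic_dev fix changes when the completion-ring state advances. The result ring uses a color (phase) bit that the device flips on every wrap, so "entry color == expected color" means "freshly written". Snapshotting the expected color and bumping next_result/color before polling keeps a slow command from consuming ring state that a later command has already advanced past. A minimal sketch of phase-bit completion polling (structure names invented):

	#include <stdbool.h>

	struct result { unsigned char color; };

	struct ring {
		struct result *res;
		unsigned int size, next;
		unsigned char color;   /* phase expected on the next entry */
	};

	/* Claim a slot and advance ring state *before* waiting on it. */
	static struct result *claim(struct ring *r, unsigned char *expect)
	{
		struct result *slot = &r->res[r->next];

		*expect = r->color;
		if (++r->next == r->size) {    /* wrap: phase flips */
			r->next = 0;
			r->color ^= 1;
		}
		return slot;
	}

	static bool ready(const struct result *slot, unsigned char expect)
	{
		return slot->color == expect;  /* newly written by device */
	}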
diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
index cf94b72..48d9194 100644
--- a/drivers/net/ethernet/davicom/dm9000.c
+++ b/drivers/net/ethernet/davicom/dm9000.c
@@ -128,7 +128,6 @@
struct resource *data_res;
struct resource *addr_req; /* resources requested */
struct resource *data_req;
- struct resource *irq_res;
int irq_wake;
@@ -1300,22 +1299,16 @@
dm9000_open(struct net_device *dev)
{
struct board_info *db = netdev_priv(dev);
- unsigned long irqflags = db->irq_res->flags & IRQF_TRIGGER_MASK;
if (netif_msg_ifup(db))
dev_dbg(db->dev, "enabling %s\n", dev->name);
- /* If there is no IRQ type specified, default to something that
- * may work, and tell the user that this is a problem */
-
- if (irqflags == IRQF_TRIGGER_NONE)
- irqflags = irq_get_trigger_type(dev->irq);
-
- if (irqflags == IRQF_TRIGGER_NONE)
+ /* If there is no IRQ type specified, tell the user that this is a
+ * problem
+ */
+ if (irq_get_trigger_type(dev->irq) == IRQF_TRIGGER_NONE)
dev_warn(db->dev, "WARNING: no IRQ resource flags set.\n");
- irqflags |= IRQF_SHARED;
-
/* GPIO0 on pre-activate PHY, Reg 1F is not set by reset */
iow(db, DM9000_GPR, 0); /* REG_1F bit0 activate phyxcer */
mdelay(1); /* delay needs by DM9000B */
@@ -1323,7 +1316,8 @@
/* Initialize DM9000 board */
dm9000_init_dm9000(dev);
- if (request_irq(dev->irq, dm9000_interrupt, irqflags, dev->name, dev))
+ if (request_irq(dev->irq, dm9000_interrupt, IRQF_SHARED,
+ dev->name, dev))
return -EAGAIN;
/* Now that we have an interrupt handler hooked up we can unmask
* our interrupts
@@ -1500,15 +1494,22 @@
db->addr_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
db->data_res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
- db->irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
- if (db->addr_res == NULL || db->data_res == NULL ||
- db->irq_res == NULL) {
- dev_err(db->dev, "insufficient resources\n");
+ if (!db->addr_res || !db->data_res) {
+ dev_err(db->dev, "insufficient resources addr=%p data=%p\n",
+ db->addr_res, db->data_res);
ret = -ENOENT;
goto out;
}
+ ndev->irq = platform_get_irq(pdev, 0);
+ if (ndev->irq < 0) {
+ dev_err(db->dev, "interrupt resource unavailable: %d\n",
+ ndev->irq);
+ ret = ndev->irq;
+ goto out;
+ }
+
db->irq_wake = platform_get_irq(pdev, 1);
if (db->irq_wake >= 0) {
dev_dbg(db->dev, "wakeup irq %d\n", db->irq_wake);
@@ -1570,7 +1571,6 @@
/* fill in parameters for net-dev structure */
ndev->base_addr = (unsigned long)db->io_addr;
- ndev->irq = db->irq_res->start;
/* ensure at least we have a default set of IO routines */
dm9000_set_io(db, iosize);
diff --git a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
index a7139f5..678f501 100644
--- a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
+++ b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
@@ -469,8 +469,8 @@
goto failed;
}
/* Read MACID from CIS */
- for (i = 5; i < 11; i++)
- dev->dev_addr[i] = buf[i];
+ for (i = 0; i < 6; i++)
+ dev->dev_addr[i] = buf[i + 5];
kfree(buf);
} else {
if (pcmcia_get_mac_from_cis(link, dev))
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 662c2ee..b0ae69f 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -370,6 +370,11 @@
struct net_device *dev;
struct notifier_block cpu_notifier;
int rxq_def;
+ /* Protect the access to the percpu interrupt registers,
+ * ensuring that the configuration remains coherent.
+ */
+ spinlock_t lock;
+ bool is_stopped;
/* Core clock */
struct clk *clk;
@@ -1038,6 +1043,43 @@
}
}
+static void mvneta_percpu_unmask_interrupt(void *arg)
+{
+ struct mvneta_port *pp = arg;
+
+ /* All the queues are unmasked, but actually only the ones
+ * mapped to this CPU will be unmasked
+ */
+ mvreg_write(pp, MVNETA_INTR_NEW_MASK,
+ MVNETA_RX_INTR_MASK_ALL |
+ MVNETA_TX_INTR_MASK_ALL |
+ MVNETA_MISCINTR_INTR_MASK);
+}
+
+static void mvneta_percpu_mask_interrupt(void *arg)
+{
+ struct mvneta_port *pp = arg;
+
+ /* All the queues are masked, but actually only the ones
+ * mapped to this CPU will be masked
+ */
+ mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
+ mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
+ mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
+}
+
+static void mvneta_percpu_clear_intr_cause(void *arg)
+{
+ struct mvneta_port *pp = arg;
+
+ /* All the queues are cleared, but actually only the ones
+ * mapped to this CPU will be cleared
+ */
+ mvreg_write(pp, MVNETA_INTR_NEW_CAUSE, 0);
+ mvreg_write(pp, MVNETA_INTR_MISC_CAUSE, 0);
+ mvreg_write(pp, MVNETA_INTR_OLD_CAUSE, 0);
+}
+
/* This method sets defaults to the NETA port:
* Clears interrupt Cause and Mask registers.
* Clears all MAC tables.
@@ -1055,14 +1097,10 @@
int max_cpu = num_present_cpus();
/* Clear all Cause registers */
- mvreg_write(pp, MVNETA_INTR_NEW_CAUSE, 0);
- mvreg_write(pp, MVNETA_INTR_OLD_CAUSE, 0);
- mvreg_write(pp, MVNETA_INTR_MISC_CAUSE, 0);
+ on_each_cpu(mvneta_percpu_clear_intr_cause, pp, true);
/* Mask all interrupts */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
+ on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
mvreg_write(pp, MVNETA_INTR_ENABLE, 0);
/* Enable MBUS Retry bit16 */
@@ -2528,34 +2566,9 @@
return 0;
}
-static void mvneta_percpu_unmask_interrupt(void *arg)
-{
- struct mvneta_port *pp = arg;
-
- /* All the queue are unmasked, but actually only the ones
- * maped to this CPU will be unmasked
- */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK,
- MVNETA_RX_INTR_MASK_ALL |
- MVNETA_TX_INTR_MASK_ALL |
- MVNETA_MISCINTR_INTR_MASK);
-}
-
-static void mvneta_percpu_mask_interrupt(void *arg)
-{
- struct mvneta_port *pp = arg;
-
- /* All the queue are masked, but actually only the ones
- * maped to this CPU will be masked
- */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
-}
-
static void mvneta_start_dev(struct mvneta_port *pp)
{
- unsigned int cpu;
+ int cpu;
mvneta_max_rx_size_set(pp, pp->pkt_size);
mvneta_txq_max_tx_size_set(pp, pp->pkt_size);
@@ -2564,16 +2577,15 @@
mvneta_port_enable(pp);
/* Enable polling on the port */
- for_each_present_cpu(cpu) {
+ for_each_online_cpu(cpu) {
struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
napi_enable(&port->napi);
}
/* Unmask interrupts. It has to be done from each CPU */
- for_each_online_cpu(cpu)
- smp_call_function_single(cpu, mvneta_percpu_unmask_interrupt,
- pp, true);
+ on_each_cpu(mvneta_percpu_unmask_interrupt, pp, true);
+
mvreg_write(pp, MVNETA_INTR_MISC_MASK,
MVNETA_CAUSE_PHY_STATUS_CHANGE |
MVNETA_CAUSE_LINK_CHANGE |
@@ -2589,7 +2601,7 @@
phy_stop(pp->phy_dev);
- for_each_present_cpu(cpu) {
+ for_each_online_cpu(cpu) {
struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
napi_disable(&port->napi);
@@ -2604,13 +2616,10 @@
mvneta_port_disable(pp);
/* Clear all ethernet port interrupts */
- mvreg_write(pp, MVNETA_INTR_MISC_CAUSE, 0);
- mvreg_write(pp, MVNETA_INTR_OLD_CAUSE, 0);
+ on_each_cpu(mvneta_percpu_clear_intr_cause, pp, true);
/* Mask all ethernet port interrupts */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
+ on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
mvneta_tx_reset(pp);
mvneta_rx_reset(pp);
@@ -2847,11 +2856,20 @@
disable_percpu_irq(pp->dev->irq);
}
+/* Electing a CPU must be done in an atomic way: it must not race
+ * with the removal or insertion of a CPU, and this function is
+ * not reentrant.
+ */
static void mvneta_percpu_elect(struct mvneta_port *pp)
{
- int online_cpu_idx, max_cpu, cpu, i = 0;
+ int elected_cpu = 0, max_cpu, cpu, i = 0;
- online_cpu_idx = pp->rxq_def % num_online_cpus();
+ /* Use the CPU associated with the rxq when it is online; in all
+ * other cases, use CPU 0, which can't be offline.
+ */
+ if (cpu_online(pp->rxq_def))
+ elected_cpu = pp->rxq_def;
+
max_cpu = num_present_cpus();
for_each_online_cpu(cpu) {
@@ -2862,7 +2880,7 @@
if ((rxq % max_cpu) == cpu)
rxq_map |= MVNETA_CPU_RXQ_ACCESS(rxq);
- if (i == online_cpu_idx)
+ if (cpu == elected_cpu)
/* Map the default receive queue queue to the
* elected CPU
*/
@@ -2873,7 +2891,7 @@
* the CPU bound to the default RX queue
*/
if (txq_number == 1)
- txq_map = (i == online_cpu_idx) ?
+ txq_map = (cpu == elected_cpu) ?
MVNETA_CPU_TXQ_ACCESS(1) : 0;
else
txq_map = mvreg_read(pp, MVNETA_CPU_MAP(cpu)) &
@@ -2902,6 +2920,14 @@
switch (action) {
case CPU_ONLINE:
case CPU_ONLINE_FROZEN:
+ spin_lock(&pp->lock);
+ /* Configuring the driver for a new CPU while the
+ * driver is stopping is racy, so just avoid it.
+ */
+ if (pp->is_stopped) {
+ spin_unlock(&pp->lock);
+ break;
+ }
netif_tx_stop_all_queues(pp->dev);
/* We have to synchronise on tha napi of each CPU
@@ -2917,9 +2943,7 @@
}
/* Mask all ethernet port interrupts */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
+ on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
napi_enable(&port->napi);
@@ -2934,27 +2958,25 @@
*/
mvneta_percpu_elect(pp);
- /* Unmask all ethernet port interrupts, as this
- * notifier is called for each CPU then the CPU to
- * Queue mapping is applied
- */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK,
- MVNETA_RX_INTR_MASK(rxq_number) |
- MVNETA_TX_INTR_MASK(txq_number) |
- MVNETA_MISCINTR_INTR_MASK);
+ /* Unmask all ethernet port interrupts */
+ on_each_cpu(mvneta_percpu_unmask_interrupt, pp, true);
mvreg_write(pp, MVNETA_INTR_MISC_MASK,
MVNETA_CAUSE_PHY_STATUS_CHANGE |
MVNETA_CAUSE_LINK_CHANGE |
MVNETA_CAUSE_PSC_SYNC_CHANGE);
netif_tx_start_all_queues(pp->dev);
+ spin_unlock(&pp->lock);
break;
case CPU_DOWN_PREPARE:
case CPU_DOWN_PREPARE_FROZEN:
netif_tx_stop_all_queues(pp->dev);
+ /* Thanks to this lock we are sure that any pending
+ * CPU election has completed
+ */
+ spin_lock(&pp->lock);
/* Mask all ethernet port interrupts */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_OLD_MASK, 0);
- mvreg_write(pp, MVNETA_INTR_MISC_MASK, 0);
+ on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
+ spin_unlock(&pp->lock);
napi_synchronize(&port->napi);
napi_disable(&port->napi);
@@ -2968,12 +2990,11 @@
case CPU_DEAD:
case CPU_DEAD_FROZEN:
/* Check if a new CPU must be elected now that this one is down */
+ spin_lock(&pp->lock);
mvneta_percpu_elect(pp);
+ spin_unlock(&pp->lock);
/* Unmask all ethernet port interrupts */
- mvreg_write(pp, MVNETA_INTR_NEW_MASK,
- MVNETA_RX_INTR_MASK(rxq_number) |
- MVNETA_TX_INTR_MASK(txq_number) |
- MVNETA_MISCINTR_INTR_MASK);
+ on_each_cpu(mvneta_percpu_unmask_interrupt, pp, true);
mvreg_write(pp, MVNETA_INTR_MISC_MASK,
MVNETA_CAUSE_PHY_STATUS_CHANGE |
MVNETA_CAUSE_LINK_CHANGE |
@@ -2988,7 +3009,7 @@
static int mvneta_open(struct net_device *dev)
{
struct mvneta_port *pp = netdev_priv(dev);
- int ret, cpu;
+ int ret;
pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
@@ -3010,22 +3031,12 @@
goto err_cleanup_txqs;
}
- /* Even though the documentation says that request_percpu_irq
- * doesn't enable the interrupts automatically, it actually
- * does so on the local CPU.
- *
- * Make sure it's disabled.
- */
- mvneta_percpu_disable(pp);
-
/* Enable per-CPU interrupt on all the CPUs to handle our RX
* queue interrupts
*/
- for_each_online_cpu(cpu)
- smp_call_function_single(cpu, mvneta_percpu_enable,
- pp, true);
+ on_each_cpu(mvneta_percpu_enable, pp, true);
-
+ pp->is_stopped = false;
/* Register a CPU notifier to handle the case where our CPU
* might be taken offline.
*/
@@ -3057,13 +3068,20 @@
static int mvneta_stop(struct net_device *dev)
{
struct mvneta_port *pp = netdev_priv(dev);
- int cpu;
+ /* Signal that we are stopping so that the notifiers don't set up
+ * the driver for new CPUs
+ */
+ spin_lock(&pp->lock);
+ pp->is_stopped = true;
mvneta_stop_dev(pp);
mvneta_mdio_remove(pp);
unregister_cpu_notifier(&pp->cpu_notifier);
- for_each_present_cpu(cpu)
- smp_call_function_single(cpu, mvneta_percpu_disable, pp, true);
+ /* Now that the notifiers are unregistered, we can release the
+ * lock
+ */
+ spin_unlock(&pp->lock);
+ on_each_cpu(mvneta_percpu_disable, pp, true);
free_percpu_irq(dev->irq, pp->ports);
mvneta_cleanup_rxqs(pp);
mvneta_cleanup_txqs(pp);
@@ -3312,9 +3330,7 @@
netif_tx_stop_all_queues(pp->dev);
- for_each_online_cpu(cpu)
- smp_call_function_single(cpu, mvneta_percpu_mask_interrupt,
- pp, true);
+ on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
/* We have to synchronise on the napi of each CPU */
for_each_online_cpu(cpu) {
@@ -3335,7 +3351,9 @@
mvreg_write(pp, MVNETA_PORT_CONFIG, val);
/* Update the elected CPU matching the new rxq_def */
+ spin_lock(&pp->lock);
mvneta_percpu_elect(pp);
+ spin_unlock(&pp->lock);
/* We have to synchronise on the napi of each CPU */
for_each_online_cpu(cpu) {
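The mvneta changes above boil down to two patterns: open-coded smp_call_function_single() loops become on_each_cpu() calls, and the CPU hotplug notifier is serialized against the stop path with a spinlock plus an is_stopped flag. A minimal sketch of that guard, with hypothetical names (my_port, my_cpu_online, my_stop) standing in for the driver specifics:

/* Sketch only: the real driver also stops queues, NAPI, etc. */
struct my_port {
	spinlock_t lock;	/* serializes hotplug callbacks vs. stop */
	bool is_stopped;
};

static void my_cpu_online(struct my_port *pp)
{
	spin_lock(&pp->lock);
	if (pp->is_stopped) {	/* configuring a new CPU now would race */
		spin_unlock(&pp->lock);
		return;
	}
	/* ...per-CPU setup, still under the lock... */
	spin_unlock(&pp->lock);
}

static void my_stop(struct my_port *pp)
{
	spin_lock(&pp->lock);
	pp->is_stopped = true;
	/* mvneta releases the lock only after the CPU notifier is
	 * unregistered, so no further callback can sneak in
	 */
	spin_unlock(&pp->lock);
}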
diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
index a4beccf..c797971a 100644
--- a/drivers/net/ethernet/marvell/mvpp2.c
+++ b/drivers/net/ethernet/marvell/mvpp2.c
@@ -3061,7 +3061,7 @@
pe = kzalloc(sizeof(*pe), GFP_KERNEL);
if (!pe)
- return -1;
+ return -ENOMEM;
mvpp2_prs_tcam_lu_set(pe, MVPP2_PRS_LU_MAC);
pe->index = tid;
@@ -3077,7 +3077,7 @@
if (pmap == 0) {
if (add) {
kfree(pe);
- return -1;
+ return -EINVAL;
}
mvpp2_prs_hw_inv(priv, pe->index);
priv->prs_shadow[pe->index].valid = false;
diff --git a/drivers/net/ethernet/mellanox/mlx4/catas.c b/drivers/net/ethernet/mellanox/mlx4/catas.c
index 715de8a..c7e9399 100644
--- a/drivers/net/ethernet/mellanox/mlx4/catas.c
+++ b/drivers/net/ethernet/mellanox/mlx4/catas.c
@@ -182,10 +182,17 @@
err = mlx4_reset_slave(dev);
else
err = mlx4_reset_master(dev);
- BUG_ON(err != 0);
+ if (!err) {
+ mlx4_err(dev, "device was reset successfully\n");
+ } else {
+ /* EEH could have disabled the PCI channel during reset. That's
+ * recoverable and the PCI error flow will handle it.
+ */
+ if (!pci_channel_offline(dev->persist->pdev))
+ BUG_ON(1);
+ }
dev->persist->state |= MLX4_DEVICE_STATE_INTERNAL_ERROR;
- mlx4_err(dev, "device was reset successfully\n");
mutex_unlock(&persist->device_state_mutex);
/* At that step HW was already reset, now notify clients */
diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c
index d48d579..e94ca1c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c
@@ -2429,7 +2429,7 @@
flush_workqueue(priv->mfunc.master.comm_wq);
destroy_workqueue(priv->mfunc.master.comm_wq);
err_slaves:
- while (--i) {
+ while (i--) {
for (port = 1; port <= MLX4_MAX_PORTS; port++)
kfree(priv->mfunc.master.slave_state[i].vlan_filter[port]);
}
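The one-character mlx4 fix above deserves a gloss: on the error path, entries 0..i-1 of slave_state were initialized, but `while (--i)` never visits entry 0 and misbehaves if i is already 0, whereas `while (i--)` walks exactly i-1 down to 0. A self-contained illustration of the two loops:

#include <assert.h>

int main(void)
{
	int freed[3] = { 0, 0, 0 };
	int i = 3;	/* entries 0..2 were set up before the failure */

	while (i--)	/* visits 2, then 1, then 0 */
		freed[i] = 1;

	assert(freed[0] && freed[1] && freed[2]);
	/* with "while (--i)" entry 0 would stay allocated, and a call
	 * with i == 0 would index freed[-1]
	 */
	return 0;
}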
diff --git a/drivers/net/ethernet/mellanox/mlx4/cq.c b/drivers/net/ethernet/mellanox/mlx4/cq.c
index 3348e64..a849da9 100644
--- a/drivers/net/ethernet/mellanox/mlx4/cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/cq.c
@@ -318,7 +318,9 @@
if (timestamp_en)
cq_context->flags |= cpu_to_be32(1 << 19);
- cq_context->logsize_usrpage = cpu_to_be32((ilog2(nent) << 24) | uar->index);
+ cq_context->logsize_usrpage =
+ cpu_to_be32((ilog2(nent) << 24) |
+ mlx4_to_hw_uar_index(dev, uar->index));
cq_context->comp_eqn = priv->eq_table.eq[MLX4_CQ_TO_EQ_VECTOR(vector)].eqn;
cq_context->log_page_size = mtt->page_shift - MLX4_ICM_PAGE_SHIFT;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_clock.c b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
index 038f9ce..1494997 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_clock.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_clock.c
@@ -236,6 +236,24 @@
.enable = mlx4_en_phc_enable,
};
+#define MLX4_EN_WRAP_AROUND_SEC 10ULL
+
+/* This function calculates the max shift that still accommodates
+ * MLX4_EN_WRAP_AROUND_SEC seconds worth of values in the cycles register.
+ */
+static u32 freq_to_shift(u16 freq)
+{
+ u32 freq_khz = freq * 1000;
+ u64 max_val_cycles = freq_khz * 1000 * MLX4_EN_WRAP_AROUND_SEC;
+ u64 max_val_cycles_rounded = is_power_of_2(max_val_cycles + 1) ?
+ max_val_cycles : roundup_pow_of_two(max_val_cycles) - 1;
+ /* calculate max possible multiplier in order to fit in 64bit */
+ u64 max_mul = div_u64(0xffffffffffffffffULL, max_val_cycles_rounded);
+
+ /* This comes from the reverse of clocksource_khz2mult */
+ return ilog2(div_u64(max_mul * freq_khz, 1000000));
+}
+
void mlx4_en_init_timestamp(struct mlx4_en_dev *mdev)
{
struct mlx4_dev *dev = mdev->dev;
@@ -254,12 +272,7 @@
memset(&mdev->cycles, 0, sizeof(mdev->cycles));
mdev->cycles.read = mlx4_en_read_clock;
mdev->cycles.mask = CLOCKSOURCE_MASK(48);
- /* Using shift to make calculation more accurate. Since current HW
- * clock frequency is 427 MHz, and cycles are given using a 48 bits
- * register, the biggest shift when calculating using u64, is 14
- * (max_cycles * multiplier < 2^64)
- */
- mdev->cycles.shift = 14;
+ mdev->cycles.shift = freq_to_shift(dev->caps.hca_core_clock);
mdev->cycles.mult =
clocksource_khz2mult(1000 * dev->caps.hca_core_clock, mdev->cycles.shift);
mdev->nominal_c_mult = mdev->cycles.mult;
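To see what freq_to_shift() computes, here is a userspace restatement of the same arithmetic (it folds in the is_power_of_2() special case, which the rounding below already handles). It assumes hca_core_clock is reported in MHz; for the 427 MHz clock cited in the removed comment it returns 30 rather than the old hard-coded 14, because precision now only has to survive a 10-second wrap window instead of the full 48-bit register:

#include <stdint.h>
#include <stdio.h>

#define WRAP_AROUND_SEC 10ULL

static unsigned int ilog2_u64(uint64_t v)
{
	unsigned int r = 0;

	while (v >>= 1)
		r++;
	return r;
}

static uint64_t roundup_pow_of_two_u64(uint64_t v)
{
	v--;
	v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
	v |= v >> 8;  v |= v >> 16; v |= v >> 32;
	return v + 1;
}

static uint32_t freq_to_shift(uint16_t freq_mhz)
{
	uint64_t freq_khz = (uint64_t)freq_mhz * 1000;
	uint64_t max_val_cycles = freq_khz * 1000 * WRAP_AROUND_SEC;
	/* round the wrap-window cycle count up to 2^n - 1 */
	uint64_t max_val_cycles_rounded =
		roundup_pow_of_two_u64(max_val_cycles) - 1;
	/* largest mult such that max cycles * mult fits in 64 bits */
	uint64_t max_mul = UINT64_MAX / max_val_cycles_rounded;

	/* reverse of clocksource_khz2mult(): mult = (10^6 << shift) / khz */
	return ilog2_u64(max_mul * freq_khz / 1000000);
}

int main(void)
{
	printf("shift(427 MHz) = %u\n", freq_to_shift(427));	/* 30 */
	return 0;
}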
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 0c7e3f6..f191a16 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -2344,8 +2344,6 @@
/* set offloads */
priv->dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL;
- priv->dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
- priv->dev->features |= NETIF_F_GSO_UDP_TUNNEL;
}
static void mlx4_en_del_vxlan_offloads(struct work_struct *work)
@@ -2356,8 +2354,6 @@
/* unset offloads */
priv->dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL);
- priv->dev->hw_features &= ~NETIF_F_GSO_UDP_TUNNEL;
- priv->dev->features &= ~NETIF_F_GSO_UDP_TUNNEL;
ret = mlx4_SET_PORT_VXLAN(priv->mdev->dev, priv->port,
VXLAN_STEER_BY_OUTER_MAC, 0);
@@ -2980,6 +2976,11 @@
priv->rss_hash_fn = ETH_RSS_HASH_TOP;
}
+ if (mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) {
+ dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
+ dev->features |= NETIF_F_GSO_UDP_TUNNEL;
+ }
+
mdev->pndev[port] = dev;
mdev->upper[port] = NULL;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_port.c b/drivers/net/ethernet/mellanox/mlx4/en_port.c
index ee99e67..3904b5f 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_port.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_port.c
@@ -238,11 +238,11 @@
stats->collisions = 0;
stats->rx_dropped = be32_to_cpu(mlx4_en_stats->RDROP);
stats->rx_length_errors = be32_to_cpu(mlx4_en_stats->RdropLength);
- stats->rx_over_errors = be32_to_cpu(mlx4_en_stats->RdropOvflw);
+ stats->rx_over_errors = 0;
stats->rx_crc_errors = be32_to_cpu(mlx4_en_stats->RCRC);
stats->rx_frame_errors = 0;
stats->rx_fifo_errors = be32_to_cpu(mlx4_en_stats->RdropOvflw);
- stats->rx_missed_errors = be32_to_cpu(mlx4_en_stats->RdropOvflw);
+ stats->rx_missed_errors = 0;
stats->tx_aborted_errors = 0;
stats->tx_carrier_errors = 0;
stats->tx_fifo_errors = 0;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_resources.c b/drivers/net/ethernet/mellanox/mlx4/en_resources.c
index 12aab5a..02e925d 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_resources.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_resources.c
@@ -58,7 +58,8 @@
} else {
context->sq_size_stride = ilog2(TXBB_SIZE) - 4;
}
- context->usr_page = cpu_to_be32(mdev->priv_uar.index);
+ context->usr_page = cpu_to_be32(mlx4_to_hw_uar_index(mdev->dev,
+ mdev->priv_uar.index));
context->local_qpn = cpu_to_be32(qpn);
context->pri_path.ackto = 1 & 0x07;
context->pri_path.sched_queue = 0x83 | (priv->port - 1) << 6;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index 4421bf5..e0946ab 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -213,7 +213,9 @@
mlx4_en_fill_qp_context(priv, ring->size, ring->stride, 1, 0, ring->qpn,
ring->cqn, user_prio, &ring->context);
if (ring->bf_alloced)
- ring->context.usr_page = cpu_to_be32(ring->bf.uar->index);
+ ring->context.usr_page =
+ cpu_to_be32(mlx4_to_hw_uar_index(mdev->dev,
+ ring->bf.uar->index));
err = mlx4_qp_to_ready(mdev->dev, &ring->wqres.mtt, &ring->context,
&ring->qp, &ring->qp_state);
diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c
index 4696053..f613977 100644
--- a/drivers/net/ethernet/mellanox/mlx4/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/eq.c
@@ -940,9 +940,10 @@
if (!priv->eq_table.uar_map[index]) {
priv->eq_table.uar_map[index] =
- ioremap(pci_resource_start(dev->persist->pdev, 2) +
- ((eq->eqn / 4) << PAGE_SHIFT),
- PAGE_SIZE);
+ ioremap(
+ pci_resource_start(dev->persist->pdev, 2) +
+ ((eq->eqn / 4) << (dev->uar_page_shift)),
+ (1 << (dev->uar_page_shift)));
if (!priv->eq_table.uar_map[index]) {
mlx4_err(dev, "Couldn't map EQ doorbell for EQN 0x%06x\n",
eq->eqn);
diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
index f1b6d21..2cc3c62 100644
--- a/drivers/net/ethernet/mellanox/mlx4/main.c
+++ b/drivers/net/ethernet/mellanox/mlx4/main.c
@@ -168,6 +168,20 @@
static atomic_t pf_loading = ATOMIC_INIT(0);
+static inline void mlx4_set_num_reserved_uars(struct mlx4_dev *dev,
+ struct mlx4_dev_cap *dev_cap)
+{
+ /* reserved_uars is calculated in units of the system page size.
+ * Therefore, an adjustment is needed when the UAR page size is
+ * smaller than the system page size.
+ */
+ dev->caps.reserved_uars =
+ max_t(int,
+ mlx4_get_num_reserved_uar(dev),
+ dev_cap->reserved_uars /
+ (1 << (PAGE_SHIFT - dev->uar_page_shift)));
+}
+
int mlx4_check_port_params(struct mlx4_dev *dev,
enum mlx4_port_type *port_type)
{
@@ -386,8 +400,6 @@
dev->caps.reserved_mtts = dev_cap->reserved_mtts;
dev->caps.reserved_mrws = dev_cap->reserved_mrws;
- /* The first 128 UARs are used for EQ doorbells */
- dev->caps.reserved_uars = max_t(int, 128, dev_cap->reserved_uars);
dev->caps.reserved_pds = dev_cap->reserved_pds;
dev->caps.reserved_xrcds = (dev->caps.flags & MLX4_DEV_CAP_FLAG_XRC) ?
dev_cap->reserved_xrcds : 0;
@@ -405,6 +417,15 @@
dev->caps.max_gso_sz = dev_cap->max_gso_sz;
dev->caps.max_rss_tbl_sz = dev_cap->max_rss_tbl_sz;
+ /* Save uar page shift */
+ if (!mlx4_is_slave(dev)) {
+ /* A virtual PCI function needs to determine the UAR page size
+ * from firmware; only the master PCI function can set it.
+ */
+ dev->uar_page_shift = DEFAULT_UAR_PAGE_SHIFT;
+ mlx4_set_num_reserved_uars(dev, dev_cap);
+ }
+
if (dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_PHV_EN) {
struct mlx4_init_hca_param hca_param;
@@ -815,16 +836,25 @@
return -ENODEV;
}
- /* slave gets uar page size from QUERY_HCA fw command */
- dev->caps.uar_page_size = 1 << (hca_param.uar_page_sz + 12);
+ /* Set uar_page_shift for VF */
+ dev->uar_page_shift = hca_param.uar_page_sz + 12;
- /* TODO: relax this assumption */
- if (dev->caps.uar_page_size != PAGE_SIZE) {
- mlx4_err(dev, "UAR size:%d != kernel PAGE_SIZE of %ld\n",
- dev->caps.uar_page_size, PAGE_SIZE);
- return -ENODEV;
+ /* Make sure the master uar page size is valid */
+ if (dev->uar_page_shift > PAGE_SHIFT) {
+ mlx4_err(dev,
+ "Invalid configuration: uar page size is larger than system page size\n");
+ return -ENODEV;
}
+ /* Set reserved_uars based on the uar_page_shift */
+ mlx4_set_num_reserved_uars(dev, &dev_cap);
+
+ /* Although the uar page size in FW differs from the system page size,
+ * upper software layers (mlx4_ib, mlx4_en and part of mlx4_core)
+ * still work with the assumption that uar page size == system page size
+ */
+ dev->caps.uar_page_size = PAGE_SIZE;
+
memset(&func_cap, 0, sizeof(func_cap));
err = mlx4_QUERY_FUNC_CAP(dev, 0, &func_cap);
if (err) {
@@ -2179,8 +2209,12 @@
dev->caps.max_fmr_maps = (1 << (32 - ilog2(dev->caps.num_mpts))) - 1;
- init_hca.log_uar_sz = ilog2(dev->caps.num_uars);
- init_hca.uar_page_sz = PAGE_SHIFT - 12;
+ /* Always set UAR page size 4KB, set log_uar_sz accordingly */
+ init_hca.log_uar_sz = ilog2(dev->caps.num_uars) +
+ PAGE_SHIFT -
+ DEFAULT_UAR_PAGE_SHIFT;
+ init_hca.uar_page_sz = DEFAULT_UAR_PAGE_SHIFT - 12;
+
init_hca.mw_enabled = 0;
if (dev->caps.flags & MLX4_DEV_CAP_FLAG_MEM_WINDOW ||
dev->caps.bmme_flags & MLX4_BMME_FLAG_TYPE_2_WIN)
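The UAR bookkeeping above is easier to follow with numbers. Firmware counts reserved UARs in its own page units (4 KB once DEFAULT_UAR_PAGE_SHIFT is in force), while dev->caps.reserved_uars is consumed in system-page units; on a hypothetical 64 KB-page kernel (PAGE_SHIFT = 16) one system page covers sixteen 4 KB UARs, so a firmware reservation of 32 UARs costs only two system pages:

#include <assert.h>

int main(void)
{
	int page_shift = 16;		/* 64 KB kernel pages (assumption) */
	int uar_page_shift = 12;	/* 4 KB hardware UAR pages */
	int fw_reserved_uars = 32;	/* hypothetical firmware value */

	/* the adjustment from mlx4_set_num_reserved_uars() */
	int sys_pages = fw_reserved_uars /
			(1 << (page_shift - uar_page_shift));

	assert(sys_pages == 2);
	return 0;
}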
diff --git a/drivers/net/ethernet/mellanox/mlx4/pd.c b/drivers/net/ethernet/mellanox/mlx4/pd.c
index 609c59d..b3cc3ab 100644
--- a/drivers/net/ethernet/mellanox/mlx4/pd.c
+++ b/drivers/net/ethernet/mellanox/mlx4/pd.c
@@ -269,9 +269,15 @@
int mlx4_init_uar_table(struct mlx4_dev *dev)
{
- if (dev->caps.num_uars <= 128) {
- mlx4_err(dev, "Only %d UAR pages (need more than 128)\n",
- dev->caps.num_uars);
+ int num_reserved_uar = mlx4_get_num_reserved_uar(dev);
+
+ mlx4_dbg(dev, "uar_page_shift = %d", dev->uar_page_shift);
+ mlx4_dbg(dev, "Effective reserved_uars=%d", dev->caps.reserved_uars);
+
+ if (dev->caps.num_uars <= num_reserved_uar) {
+ mlx4_err(
+ dev, "Only %d UAR pages (need more than %d)\n",
+ dev->caps.num_uars, num_reserved_uar);
mlx4_err(dev, "Increase firmware log2_uar_bar_megabytes?\n");
return -ENODEV;
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
index b46dbe2..25ce1b0 100644
--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
@@ -915,11 +915,13 @@
spin_lock_irq(mlx4_tlock(dev));
r = find_res(dev, counter_index, RES_COUNTER);
- if (!r || r->owner != slave)
+ if (!r || r->owner != slave) {
ret = -EINVAL;
- counter = container_of(r, struct res_counter, com);
- if (!counter->port)
- counter->port = port;
+ } else {
+ counter = container_of(r, struct res_counter, com);
+ if (!counter->port)
+ counter->port = port;
+ }
spin_unlock_irq(mlx4_tlock(dev));
return ret;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index 6a3e430..d4e1c30 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -2024,18 +2024,37 @@
vf_stats);
}
-static struct net_device_ops mlx5e_netdev_ops = {
+static const struct net_device_ops mlx5e_netdev_ops_basic = {
.ndo_open = mlx5e_open,
.ndo_stop = mlx5e_close,
.ndo_start_xmit = mlx5e_xmit,
.ndo_get_stats64 = mlx5e_get_stats,
.ndo_set_rx_mode = mlx5e_set_rx_mode,
.ndo_set_mac_address = mlx5e_set_mac,
- .ndo_vlan_rx_add_vid = mlx5e_vlan_rx_add_vid,
- .ndo_vlan_rx_kill_vid = mlx5e_vlan_rx_kill_vid,
+ .ndo_vlan_rx_add_vid = mlx5e_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = mlx5e_vlan_rx_kill_vid,
.ndo_set_features = mlx5e_set_features,
- .ndo_change_mtu = mlx5e_change_mtu,
- .ndo_do_ioctl = mlx5e_ioctl,
+ .ndo_change_mtu = mlx5e_change_mtu,
+ .ndo_do_ioctl = mlx5e_ioctl,
+};
+
+static const struct net_device_ops mlx5e_netdev_ops_sriov = {
+ .ndo_open = mlx5e_open,
+ .ndo_stop = mlx5e_close,
+ .ndo_start_xmit = mlx5e_xmit,
+ .ndo_get_stats64 = mlx5e_get_stats,
+ .ndo_set_rx_mode = mlx5e_set_rx_mode,
+ .ndo_set_mac_address = mlx5e_set_mac,
+ .ndo_vlan_rx_add_vid = mlx5e_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = mlx5e_vlan_rx_kill_vid,
+ .ndo_set_features = mlx5e_set_features,
+ .ndo_change_mtu = mlx5e_change_mtu,
+ .ndo_do_ioctl = mlx5e_ioctl,
+ .ndo_set_vf_mac = mlx5e_set_vf_mac,
+ .ndo_set_vf_vlan = mlx5e_set_vf_vlan,
+ .ndo_get_vf_config = mlx5e_get_vf_config,
+ .ndo_set_vf_link_state = mlx5e_set_vf_link_state,
+ .ndo_get_vf_stats = mlx5e_get_vf_stats,
};
static int mlx5e_check_required_hca_cap(struct mlx5_core_dev *mdev)
@@ -2137,18 +2156,11 @@
SET_NETDEV_DEV(netdev, &mdev->pdev->dev);
- if (priv->params.num_tc > 1)
- mlx5e_netdev_ops.ndo_select_queue = mlx5e_select_queue;
+ if (MLX5_CAP_GEN(mdev, vport_group_manager))
+ netdev->netdev_ops = &mlx5e_netdev_ops_sriov;
+ else
+ netdev->netdev_ops = &mlx5e_netdev_ops_basic;
- if (MLX5_CAP_GEN(mdev, vport_group_manager)) {
- mlx5e_netdev_ops.ndo_set_vf_mac = mlx5e_set_vf_mac;
- mlx5e_netdev_ops.ndo_set_vf_vlan = mlx5e_set_vf_vlan;
- mlx5e_netdev_ops.ndo_get_vf_config = mlx5e_get_vf_config;
- mlx5e_netdev_ops.ndo_set_vf_link_state = mlx5e_set_vf_link_state;
- mlx5e_netdev_ops.ndo_get_vf_stats = mlx5e_get_vf_stats;
- }
-
- netdev->netdev_ops = &mlx5e_netdev_ops;
netdev->watchdog_timeo = 15 * HZ;
netdev->ethtool_ops = &mlx5e_ethtool_ops;
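The mlx5e hunk replaces a single writable net_device_ops, patched field by field at probe time, with two complete const tables chosen by capability. Writing to one shared static struct is racy when several devices probe in parallel and keeps the ops table out of read-only memory; picking between const tables avoids both. The pattern in miniature, with made-up ops:

static const struct net_device_ops my_ops_basic = {
	.ndo_open	= my_open,
	.ndo_stop	= my_stop,
};

static const struct net_device_ops my_ops_sriov = {
	.ndo_open	= my_open,
	.ndo_stop	= my_stop,
	.ndo_set_vf_mac	= my_set_vf_mac,	/* SR-IOV extras */
};

/* probe time: select, never mutate shared static data */
netdev->netdev_ops = has_sriov ? &my_ops_sriov : &my_ops_basic;

Note one cost visible in the diff itself: the runtime ndo_select_queue assignment is dropped rather than duplicated into both tables.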
diff --git a/drivers/net/ethernet/mellanox/mlxsw/port.h b/drivers/net/ethernet/mellanox/mlxsw/port.h
index 726f543..ae65b99 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/port.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/port.h
@@ -49,7 +49,7 @@
#define MLXSW_PORT_MID 0xd000
#define MLXSW_PORT_MAX_PHY_PORTS 0x40
-#define MLXSW_PORT_MAX_PORTS MLXSW_PORT_MAX_PHY_PORTS
+#define MLXSW_PORT_MAX_PORTS (MLXSW_PORT_MAX_PHY_PORTS + 1)
#define MLXSW_PORT_DEVID_BITS_OFFSET 10
#define MLXSW_PORT_PHY_BITS_OFFSET 4
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index bb77e22..ffe4c03 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -873,6 +873,62 @@
}
}
+/* SPAFT - Switch Port Acceptable Frame Types
+ * ------------------------------------------
+ * The Switch Port Acceptable Frame Types register configures the frame
+ * admittance of the port.
+ */
+#define MLXSW_REG_SPAFT_ID 0x2010
+#define MLXSW_REG_SPAFT_LEN 0x08
+
+static const struct mlxsw_reg_info mlxsw_reg_spaft = {
+ .id = MLXSW_REG_SPAFT_ID,
+ .len = MLXSW_REG_SPAFT_LEN,
+};
+
+/* reg_spaft_local_port
+ * Local port number.
+ * Access: Index
+ *
+ * Note: CPU port is not supported (all tag types are allowed).
+ */
+MLXSW_ITEM32(reg, spaft, local_port, 0x00, 16, 8);
+
+/* reg_spaft_sub_port
+ * Virtual port within the physical port.
+ * Should be set to 0 when virtual ports are not enabled on the port.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, spaft, sub_port, 0x00, 8, 8);
+
+/* reg_spaft_allow_untagged
+ * When set, untagged frames on the ingress are allowed (default).
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, spaft, allow_untagged, 0x04, 31, 1);
+
+/* reg_spaft_allow_prio_tagged
+ * When set, priority tagged frames on the ingress are allowed (default).
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, spaft, allow_prio_tagged, 0x04, 30, 1);
+
+/* reg_spaft_allow_tagged
+ * When set, tagged frames on the ingress are allowed (default).
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, spaft, allow_tagged, 0x04, 29, 1);
+
+static inline void mlxsw_reg_spaft_pack(char *payload, u8 local_port,
+ bool allow_untagged)
+{
+ MLXSW_REG_ZERO(spaft, payload);
+ mlxsw_reg_spaft_local_port_set(payload, local_port);
+ mlxsw_reg_spaft_allow_untagged_set(payload, allow_untagged);
+ mlxsw_reg_spaft_allow_prio_tagged_set(payload, true);
+ mlxsw_reg_spaft_allow_tagged_set(payload, true);
+}
+
/* SFGC - Switch Flooding Group Configuration
* ------------------------------------------
* The following register controls the association of flooding tables and MIDs
@@ -3203,6 +3259,8 @@
return "SPVID";
case MLXSW_REG_SPVM_ID:
return "SPVM";
+ case MLXSW_REG_SPAFT_ID:
+ return "SPAFT";
case MLXSW_REG_SFGC_ID:
return "SFGC";
case MLXSW_REG_SFTR_ID:
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index 217856b..09ce451 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -2123,6 +2123,8 @@
if (flush_fdb && mlxsw_sp_port_fdb_flush(mlxsw_sp_port))
netdev_err(mlxsw_sp_port->dev, "Failed to flush FDB\n");
+ mlxsw_sp_port_pvid_set(mlxsw_sp_port, 1);
+
mlxsw_sp_port->learning = 0;
mlxsw_sp_port->learning_sync = 0;
mlxsw_sp_port->uc_flood = 0;
@@ -2746,6 +2748,13 @@
goto err_vport_flood_set;
}
+ err = mlxsw_sp_port_stp_state_set(mlxsw_sp_vport, vid,
+ MLXSW_REG_SPMS_STATE_FORWARDING);
+ if (err) {
+ netdev_err(dev, "Failed to set STP state\n");
+ goto err_port_stp_state_set;
+ }
+
if (flush_fdb && mlxsw_sp_vport_fdb_flush(mlxsw_sp_vport))
netdev_err(dev, "Failed to flush FDB\n");
@@ -2763,6 +2772,7 @@
return 0;
+err_port_stp_state_set:
err_vport_flood_set:
err_port_vid_learning_set:
err_port_vid_to_fid_validate:
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 7f42eb1..3b89ed2 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -254,5 +254,6 @@
int mlxsw_sp_vport_flood_set(struct mlxsw_sp_port *mlxsw_sp_vport, u16 vfid,
bool set, bool only_uc);
void mlxsw_sp_port_active_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port);
+int mlxsw_sp_port_pvid_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid);
#endif
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
index e492ca2..7b56098 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
@@ -370,7 +370,8 @@
return err;
}
-static int mlxsw_sp_port_pvid_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid)
+static int __mlxsw_sp_port_pvid_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ u16 vid)
{
struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
char spvid_pl[MLXSW_REG_SPVID_LEN];
@@ -379,6 +380,53 @@
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(spvid), spvid_pl);
}
+static int mlxsw_sp_port_allow_untagged_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ bool allow)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ char spaft_pl[MLXSW_REG_SPAFT_LEN];
+
+ mlxsw_reg_spaft_pack(spaft_pl, mlxsw_sp_port->local_port, allow);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(spaft), spaft_pl);
+}
+
+int mlxsw_sp_port_pvid_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid)
+{
+ struct net_device *dev = mlxsw_sp_port->dev;
+ int err;
+
+ if (!vid) {
+ err = mlxsw_sp_port_allow_untagged_set(mlxsw_sp_port, false);
+ if (err) {
+ netdev_err(dev, "Failed to disallow untagged traffic\n");
+ return err;
+ }
+ } else {
+ err = __mlxsw_sp_port_pvid_set(mlxsw_sp_port, vid);
+ if (err) {
+ netdev_err(dev, "Failed to set PVID\n");
+ return err;
+ }
+
+ /* Only allow if not already allowed. */
+ if (!mlxsw_sp_port->pvid) {
+ err = mlxsw_sp_port_allow_untagged_set(mlxsw_sp_port,
+ true);
+ if (err) {
+ netdev_err(dev, "Failed to allow untagged traffic\n");
+ goto err_port_allow_untagged_set;
+ }
+ }
+ }
+
+ mlxsw_sp_port->pvid = vid;
+ return 0;
+
+err_port_allow_untagged_set:
+ __mlxsw_sp_port_pvid_set(mlxsw_sp_port, mlxsw_sp_port->pvid);
+ return err;
+}
+
static int mlxsw_sp_fid_create(struct mlxsw_sp *mlxsw_sp, u16 fid)
{
char sfmr_pl[MLXSW_REG_SFMR_LEN];
@@ -540,7 +588,12 @@
netdev_err(dev, "Unable to add PVID %d\n", vid_begin);
goto err_port_pvid_set;
}
- mlxsw_sp_port->pvid = vid_begin;
+ } else if (!flag_pvid && old_pvid >= vid_begin && old_pvid <= vid_end) {
+ err = mlxsw_sp_port_pvid_set(mlxsw_sp_port, 0);
+ if (err) {
+ netdev_err(dev, "Unable to del PVID\n");
+ goto err_port_pvid_set;
+ }
}
/* Changing activity bits only if HW operation succeeded */
@@ -892,20 +945,18 @@
return err;
}
+ if (init)
+ goto out;
+
pvid = mlxsw_sp_port->pvid;
- if (pvid >= vid_begin && pvid <= vid_end && pvid != 1) {
- /* Default VLAN is always 1 */
- err = mlxsw_sp_port_pvid_set(mlxsw_sp_port, 1);
+ if (pvid >= vid_begin && pvid <= vid_end) {
+ err = mlxsw_sp_port_pvid_set(mlxsw_sp_port, 0);
if (err) {
netdev_err(dev, "Unable to del PVID %d\n", pvid);
return err;
}
- mlxsw_sp_port->pvid = 1;
}
- if (init)
- goto out;
-
err = __mlxsw_sp_port_flood_set(mlxsw_sp_port, vid_begin, vid_end,
false, false);
if (err) {
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index 17d5571..537974c 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -6137,28 +6137,28 @@
sw_cnt_1ms_ini = 16000000/rg_saw_cnt;
sw_cnt_1ms_ini &= 0x0fff;
data = r8168_mac_ocp_read(tp, 0xd412);
- data &= 0x0fff;
+ data &= ~0x0fff;
data |= sw_cnt_1ms_ini;
r8168_mac_ocp_write(tp, 0xd412, data);
}
data = r8168_mac_ocp_read(tp, 0xe056);
- data &= 0xf0;
- data |= 0x07;
+ data &= ~0xf0;
+ data |= 0x70;
r8168_mac_ocp_write(tp, 0xe056, data);
data = r8168_mac_ocp_read(tp, 0xe052);
- data &= 0x8008;
- data |= 0x6000;
+ data &= ~0x6000;
+ data |= 0x8008;
r8168_mac_ocp_write(tp, 0xe052, data);
data = r8168_mac_ocp_read(tp, 0xe0d6);
- data &= 0x01ff;
+ data &= ~0x01ff;
data |= 0x017f;
r8168_mac_ocp_write(tp, 0xe0d6, data);
data = r8168_mac_ocp_read(tp, 0xd420);
- data &= 0x0fff;
+ data &= ~0x0fff;
data |= 0x047f;
r8168_mac_ocp_write(tp, 0xd420, data);
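Most of the r8169 hunks fix the same read-modify-write slip: `data &= 0x0fff` keeps exactly the field that is about to be overwritten and clears every other bit, where the intent was the inverse (two of the hunks also correct the value being written). The repaired idiom as a self-contained helper, with a hypothetical 12-bit field:

#include <assert.h>
#include <stdint.h>

#define FIELD_MASK 0x0fffu	/* hypothetical field at bits 11:0 */

static uint16_t update_field(uint16_t reg, uint16_t val)
{
	reg &= (uint16_t)~FIELD_MASK;	/* clear the field, keep the rest */
	reg |= val & FIELD_MASK;	/* insert the new field value */
	return reg;
}

int main(void)
{
	/* upper nibble 0xa survives; the old "reg &= 0x0fff" lost it */
	assert(update_field(0xa123, 0x047f) == 0xa47f);
	return 0;
}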
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index ac43ed9..744d780 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -1139,7 +1139,8 @@
if (netif_running(ndev)) {
netif_device_detach(ndev);
/* Stop PTP Clock driver */
- ravb_ptp_stop(ndev);
+ if (priv->chip_id == RCAR_GEN2)
+ ravb_ptp_stop(ndev);
/* Wait for DMA stopping */
error = ravb_stop_dma(ndev);
if (error) {
@@ -1170,7 +1171,8 @@
ravb_emac_init(ndev);
/* Initialise PTP Clock driver */
- ravb_ptp_init(ndev, priv->pdev);
+ if (priv->chip_id == RCAR_GEN2)
+ ravb_ptp_init(ndev, priv->pdev);
netif_device_attach(ndev);
}
@@ -1298,7 +1300,8 @@
netif_tx_stop_all_queues(ndev);
/* Stop PTP Clock driver */
- ravb_ptp_stop(ndev);
+ if (priv->chip_id == RCAR_GEN2)
+ ravb_ptp_stop(ndev);
/* Wait for DMA stopping */
ravb_stop_dma(ndev);
@@ -1311,7 +1314,8 @@
ravb_emac_init(ndev);
/* Initialise PTP Clock driver */
- ravb_ptp_init(ndev, priv->pdev);
+ if (priv->chip_id == RCAR_GEN2)
+ ravb_ptp_init(ndev, priv->pdev);
netif_tx_start_all_queues(ndev);
}
@@ -1814,10 +1818,6 @@
CCC_OPC_CONFIG | CCC_GAC | CCC_CSEL_HPB, CCC);
}
- /* Set CSEL value */
- ravb_write(ndev, (ravb_read(ndev, CCC) & ~CCC_CSEL) | CCC_CSEL_HPB,
- CCC);
-
/* Set GTI value */
error = ravb_set_gti(ndev);
if (error)
diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c
index 0e2fc1a..db7db8a 100644
--- a/drivers/net/ethernet/smsc/smc91x.c
+++ b/drivers/net/ethernet/smsc/smc91x.c
@@ -2342,8 +2342,8 @@
}
ndev->irq = platform_get_irq(pdev, 0);
- if (ndev->irq <= 0) {
- ret = -ENODEV;
+ if (ndev->irq < 0) {
+ ret = ndev->irq;
goto out_release_io;
}
/*
diff --git a/drivers/net/ethernet/synopsys/dwc_eth_qos.c b/drivers/net/ethernet/synopsys/dwc_eth_qos.c
index 70814b7..fc8bbff 100644
--- a/drivers/net/ethernet/synopsys/dwc_eth_qos.c
+++ b/drivers/net/ethernet/synopsys/dwc_eth_qos.c
@@ -1880,9 +1880,9 @@
}
netdev_reset_queue(ndev);
+ dwceqos_init_hw(lp);
napi_enable(&lp->napi);
phy_start(lp->phy_dev);
- dwceqos_init_hw(lp);
netif_start_queue(ndev);
tasklet_enable(&lp->tx_bdreclaim_tasklet);
diff --git a/drivers/net/ethernet/ti/cpsw-phy-sel.c b/drivers/net/ethernet/ti/cpsw-phy-sel.c
index e9cc61e..c3e85ac 100644
--- a/drivers/net/ethernet/ti/cpsw-phy-sel.c
+++ b/drivers/net/ethernet/ti/cpsw-phy-sel.c
@@ -63,8 +63,12 @@
mode = AM33XX_GMII_SEL_MODE_RGMII;
break;
- case PHY_INTERFACE_MODE_MII:
default:
+ dev_warn(priv->dev,
+ "Unsupported PHY mode: \"%s\". Defaulting to MII.\n",
+ phy_modes(phy_mode));
+ /* fallthrough */
+ case PHY_INTERFACE_MODE_MII:
mode = AM33XX_GMII_SEL_MODE_MII;
break;
};
@@ -106,8 +110,12 @@
mode = AM33XX_GMII_SEL_MODE_RGMII;
break;
- case PHY_INTERFACE_MODE_MII:
default:
+ dev_warn(priv->dev,
+ "Unsupported PHY mode: \"%s\". Defaulting to MII.\n",
+ phy_modes(phy_mode));
+ /* fallthrough */
+ case PHY_INTERFACE_MODE_MII:
mode = AM33XX_GMII_SEL_MODE_MII;
break;
};
diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
index c61d66d..029841f 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -117,21 +117,17 @@
*ndesc = le32_to_cpu(desc->next_desc);
}
-static void get_pad_info(u32 *pad0, u32 *pad1, u32 *pad2, struct knav_dma_desc *desc)
+static u32 get_sw_data(int index, struct knav_dma_desc *desc)
{
- *pad0 = le32_to_cpu(desc->pad[0]);
- *pad1 = le32_to_cpu(desc->pad[1]);
- *pad2 = le32_to_cpu(desc->pad[2]);
+ /* No endian conversion needed, as this data is untouched by the hw */
+ return desc->sw_data[index];
}
-static void get_pad_ptr(void **padptr, struct knav_dma_desc *desc)
-{
- u64 pad64;
-
- pad64 = le32_to_cpu(desc->pad[0]) +
- ((u64)le32_to_cpu(desc->pad[1]) << 32);
- *padptr = (void *)(uintptr_t)pad64;
-}
+/* use these macros to get sw data */
+#define GET_SW_DATA0(desc) get_sw_data(0, desc)
+#define GET_SW_DATA1(desc) get_sw_data(1, desc)
+#define GET_SW_DATA2(desc) get_sw_data(2, desc)
+#define GET_SW_DATA3(desc) get_sw_data(3, desc)
static void get_org_pkt_info(dma_addr_t *buff, u32 *buff_len,
struct knav_dma_desc *desc)
@@ -163,13 +159,18 @@
desc->packet_info = cpu_to_le32(pkt_info);
}
-static void set_pad_info(u32 pad0, u32 pad1, u32 pad2, struct knav_dma_desc *desc)
+static void set_sw_data(int index, u32 data, struct knav_dma_desc *desc)
{
- desc->pad[0] = cpu_to_le32(pad0);
- desc->pad[1] = cpu_to_le32(pad1);
- desc->pad[2] = cpu_to_le32(pad1);
+ /* No endian conversion needed, as this data is untouched by the hw */
+ desc->sw_data[index] = data;
}
+/* use these macros to set sw data */
+#define SET_SW_DATA0(data, desc) set_sw_data(0, data, desc)
+#define SET_SW_DATA1(data, desc) set_sw_data(1, data, desc)
+#define SET_SW_DATA2(data, desc) set_sw_data(2, data, desc)
+#define SET_SW_DATA3(data, desc) set_sw_data(3, data, desc)
+
static void set_org_pkt_info(dma_addr_t buff, u32 buff_len,
struct knav_dma_desc *desc)
{
@@ -581,7 +582,6 @@
dma_addr_t dma_desc, dma_buf;
unsigned int buf_len, dma_sz = sizeof(*ndesc);
void *buf_ptr;
- u32 pad[2];
u32 tmp;
get_words(&dma_desc, 1, &desc->next_desc);
@@ -593,14 +593,20 @@
break;
}
get_pkt_info(&dma_buf, &tmp, &dma_desc, ndesc);
- get_pad_ptr(&buf_ptr, ndesc);
+ /* warning!!!! We are retrieving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ buf_ptr = (void *)GET_SW_DATA0(ndesc);
+ buf_len = (int)GET_SW_DATA1(desc);
dma_unmap_page(netcp->dev, dma_buf, PAGE_SIZE, DMA_FROM_DEVICE);
__free_page(buf_ptr);
knav_pool_desc_put(netcp->rx_pool, desc);
}
-
- get_pad_info(&pad[0], &pad[1], &buf_len, desc);
- buf_ptr = (void *)(uintptr_t)(pad[0] + ((u64)pad[1] << 32));
+ /* warning!!!! We are retrieving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ buf_ptr = (void *)GET_SW_DATA0(desc);
+ buf_len = (int)GET_SW_DATA1(desc);
if (buf_ptr)
netcp_frag_free(buf_len <= PAGE_SIZE, buf_ptr);
@@ -639,7 +645,6 @@
dma_addr_t dma_desc, dma_buff;
struct netcp_packet p_info;
struct sk_buff *skb;
- u32 pad[2];
void *org_buf_ptr;
dma_desc = knav_queue_pop(netcp->rx_queue, &dma_sz);
@@ -653,8 +658,11 @@
}
get_pkt_info(&dma_buff, &buf_len, &dma_desc, desc);
- get_pad_info(&pad[0], &pad[1], &org_buf_len, desc);
- org_buf_ptr = (void *)(uintptr_t)(pad[0] + ((u64)pad[1] << 32));
+ /* warning!!!! We are retrieving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ org_buf_ptr = (void *)GET_SW_DATA0(desc);
+ org_buf_len = (int)GET_SW_DATA1(desc);
if (unlikely(!org_buf_ptr)) {
dev_err(netcp->ndev_dev, "NULL bufptr in desc\n");
@@ -679,7 +687,6 @@
/* Fill in the page fragment list */
while (dma_desc) {
struct page *page;
- void *ptr;
ndesc = knav_pool_desc_unmap(netcp->rx_pool, dma_desc, dma_sz);
if (unlikely(!ndesc)) {
@@ -688,8 +695,10 @@
}
get_pkt_info(&dma_buff, &buf_len, &dma_desc, ndesc);
- get_pad_ptr(&ptr, ndesc);
- page = ptr;
+ /* warning!!!! We are retrieving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ page = (struct page *)GET_SW_DATA0(desc);
if (likely(dma_buff && buf_len && page)) {
dma_unmap_page(netcp->dev, dma_buff, PAGE_SIZE,
@@ -777,7 +786,10 @@
}
get_org_pkt_info(&dma, &buf_len, desc);
- get_pad_ptr(&buf_ptr, desc);
+ /* warning!!!! We are retrieving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ buf_ptr = (void *)GET_SW_DATA0(desc);
if (unlikely(!dma)) {
dev_err(netcp->ndev_dev, "NULL orig_buff in desc\n");
@@ -829,7 +841,7 @@
struct page *page;
dma_addr_t dma;
void *bufptr;
- u32 pad[3];
+ u32 sw_data[2];
/* Allocate descriptor */
hwdesc = knav_pool_desc_get(netcp->rx_pool);
@@ -846,7 +858,7 @@
SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
bufptr = netdev_alloc_frag(primary_buf_len);
- pad[2] = primary_buf_len;
+ sw_data[1] = primary_buf_len;
if (unlikely(!bufptr)) {
dev_warn_ratelimited(netcp->ndev_dev,
@@ -858,9 +870,10 @@
if (unlikely(dma_mapping_error(netcp->dev, dma)))
goto fail;
- pad[0] = lower_32_bits((uintptr_t)bufptr);
- pad[1] = upper_32_bits((uintptr_t)bufptr);
-
+ /* warning!!!! We are saving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ sw_data[0] = (u32)bufptr;
} else {
/* Allocate a secondary receive queue entry */
page = alloc_page(GFP_ATOMIC | GFP_DMA | __GFP_COLD);
@@ -870,9 +883,11 @@
}
buf_len = PAGE_SIZE;
dma = dma_map_page(netcp->dev, page, 0, buf_len, DMA_TO_DEVICE);
- pad[0] = lower_32_bits(dma);
- pad[1] = upper_32_bits(dma);
- pad[2] = 0;
+ /* warning!!!! We are saving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ sw_data[0] = (u32)page;
+ sw_data[1] = 0;
}
desc_info = KNAV_DMA_DESC_PS_INFO_IN_DESC;
@@ -882,7 +897,8 @@
pkt_info |= (netcp->rx_queue_id & KNAV_DMA_DESC_RETQ_MASK) <<
KNAV_DMA_DESC_RETQ_SHIFT;
set_org_pkt_info(dma, buf_len, hwdesc);
- set_pad_info(pad[0], pad[1], pad[2], hwdesc);
+ SET_SW_DATA0(sw_data[0], hwdesc);
+ SET_SW_DATA1(sw_data[1], hwdesc);
set_desc_info(desc_info, pkt_info, hwdesc);
/* Push to FDQs */
@@ -971,7 +987,6 @@
unsigned int budget)
{
struct knav_dma_desc *desc;
- void *ptr;
struct sk_buff *skb;
unsigned int dma_sz;
dma_addr_t dma;
@@ -988,8 +1003,10 @@
continue;
}
- get_pad_ptr(&ptr, desc);
- skb = ptr;
+ /* warning!!!! We are retrieving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ skb = (struct sk_buff *)GET_SW_DATA0(desc);
netcp_free_tx_desc_chain(netcp, desc, dma_sz);
if (!skb) {
dev_err(netcp->ndev_dev, "No skb in Tx desc\n");
@@ -1194,10 +1211,10 @@
}
set_words(&tmp, 1, &desc->packet_info);
- tmp = lower_32_bits((uintptr_t)&skb);
- set_words(&tmp, 1, &desc->pad[0]);
- tmp = upper_32_bits((uintptr_t)&skb);
- set_words(&tmp, 1, &desc->pad[1]);
+ /* warning!!!! We are saving the virtual ptr in the sw_data
+ * field as a 32bit value. Will not work on 64bit machines
+ */
+ SET_SW_DATA0((u32)skb, desc);
if (tx_pipe->flags & SWITCH_TO_PORT_IN_TAGINFO) {
tmp = tx_pipe->switch_to_port;
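The netcp conversion stores raw virtual pointers in single 32-bit sw_data words with a bare (u32) cast, and the repeated warnings are literal: on an LP64 kernel the cast discards the upper half of the pointer, which is why the old pad[] code split it across two words. A self-contained demonstration (assumes a 64-bit build):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uintptr_t ptr = 0x00007f1234567890UL;	/* typical 64-bit address */

	/* sw_data style: one 32-bit word, truncates on LP64 */
	uint32_t w = (uint32_t)ptr;
	assert((uintptr_t)w != ptr);

	/* old pad[] style: lower/upper halves round-trip losslessly */
	uint32_t lo = (uint32_t)(ptr & 0xffffffffu);
	uint32_t hi = (uint32_t)(ptr >> 32);
	assert((((uintptr_t)hi << 32) | lo) == ptr);
	return 0;
}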
diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
index 0b14ac3..0bf7edd 100644
--- a/drivers/net/geneve.c
+++ b/drivers/net/geneve.c
@@ -1039,6 +1039,34 @@
return geneve_xmit_skb(skb, dev, info);
}
+static int __geneve_change_mtu(struct net_device *dev, int new_mtu, bool strict)
+{
+ /* The max_mtu calculation does not take account of GENEVE
+ * options, to avoid excluding potentially valid
+ * configurations.
+ */
+ int max_mtu = IP_MAX_MTU - GENEVE_BASE_HLEN - sizeof(struct iphdr)
+ - dev->hard_header_len;
+
+ if (new_mtu < 68)
+ return -EINVAL;
+
+ if (new_mtu > max_mtu) {
+ if (strict)
+ return -EINVAL;
+
+ new_mtu = max_mtu;
+ }
+
+ dev->mtu = new_mtu;
+ return 0;
+}
+
+static int geneve_change_mtu(struct net_device *dev, int new_mtu)
+{
+ return __geneve_change_mtu(dev, new_mtu, true);
+}
+
static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
{
struct ip_tunnel_info *info = skb_tunnel_info(skb);
@@ -1083,7 +1111,7 @@
.ndo_stop = geneve_stop,
.ndo_start_xmit = geneve_xmit,
.ndo_get_stats64 = ip_tunnel_get_stats64,
- .ndo_change_mtu = eth_change_mtu,
+ .ndo_change_mtu = geneve_change_mtu,
.ndo_validate_addr = eth_validate_addr,
.ndo_set_mac_address = eth_mac_addr,
.ndo_fill_metadata_dst = geneve_fill_metadata_dst,
@@ -1150,6 +1178,7 @@
dev->hw_features |= NETIF_F_GSO_SOFTWARE;
netif_keep_dst(dev);
+ dev->priv_flags &= ~IFF_TX_SKB_SHARING;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE | IFF_NO_QUEUE;
eth_hw_addr_random(dev);
}
@@ -1441,12 +1470,23 @@
return dev;
err = geneve_configure(net, dev, &geneve_remote_unspec,
- 0, 0, 0, htons(dst_port), true, 0);
- if (err) {
- free_netdev(dev);
- return ERR_PTR(err);
- }
+ 0, 0, 0, htons(dst_port), true,
+ GENEVE_F_UDP_ZERO_CSUM6_RX);
+ if (err)
+ goto err;
+
+ /* openvswitch users expect packet sizes to be unrestricted,
+ * so set the largest MTU we can.
+ */
+ err = __geneve_change_mtu(dev, IP_MAX_MTU, false);
+ if (err)
+ goto err;
+
return dev;
+
+ err:
+ free_netdev(dev);
+ return ERR_PTR(err);
}
EXPORT_SYMBOL_GPL(geneve_dev_create_fb);
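geneve here, and vxlan further down, grow the same two-mode MTU helper: the user-facing ndo_change_mtu path passes strict=true and gets -EINVAL for an oversize request, while internal callers such as the openvswitch fallback device pass strict=false and have the request clamped to the computed maximum. Reduced to its skeleton (max_mtu computation elided, 68 being the IPv4 minimum):

/* Sketch of the shared shape of __geneve_change_mtu()/__vxlan_change_mtu() */
static int change_mtu_common(struct net_device *dev, int new_mtu,
			     int max_mtu, bool strict)
{
	if (new_mtu < 68)
		return -EINVAL;
	if (new_mtu > max_mtu) {
		if (strict)
			return -EINVAL;	/* explicit request: refuse */
		new_mtu = max_mtu;	/* internal caller: clamp */
	}
	dev->mtu = new_mtu;
	return 0;
}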
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index 1d3a665..98e34fe 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -1089,6 +1089,9 @@
net->ethtool_ops = &ethtool_ops;
SET_NETDEV_DEV(net, &dev->device);
+ /* We always need headroom for rndis header */
+ net->needed_headroom = RNDIS_AND_PPI_SIZE;
+
/* Notify the netvsc driver of the new device */
memset(&device_info, 0, sizeof(device_info));
device_info.ring_size = ring_size;
diff --git a/drivers/net/phy/bcm7xxx.c b/drivers/net/phy/bcm7xxx.c
index bf241a3..db507e3 100644
--- a/drivers/net/phy/bcm7xxx.c
+++ b/drivers/net/phy/bcm7xxx.c
@@ -250,10 +250,6 @@
phy_write(phydev, MII_BCM7XXX_AUX_MODE, MII_BCM7XX_64CLK_MDIO);
phy_read(phydev, MII_BCM7XXX_AUX_MODE);
- /* Workaround only required for 100Mbits/sec capable PHYs */
- if (phydev->supported & PHY_GBIT_FEATURES)
- return 0;
-
/* set shadow mode 2 */
ret = phy_set_clr_bits(phydev, MII_BCM7XXX_TEST,
MII_BCM7XXX_SHD_MODE_2, MII_BCM7XXX_SHD_MODE_2);
@@ -270,7 +266,7 @@
phy_write(phydev, MII_BCM7XXX_100TX_FALSE_CAR, 0x7555);
/* reset shadow mode 2 */
- ret = phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, MII_BCM7XXX_SHD_MODE_2, 0);
+ ret = phy_set_clr_bits(phydev, MII_BCM7XXX_TEST, 0, MII_BCM7XXX_SHD_MODE_2);
if (ret < 0)
return ret;
@@ -307,11 +303,6 @@
return 0;
}
-static int bcm7xxx_dummy_config_init(struct phy_device *phydev)
-{
- return 0;
-}
-
#define BCM7XXX_28NM_GPHY(_oui, _name) \
{ \
.phy_id = (_oui), \
@@ -337,7 +328,7 @@
.phy_id = PHY_ID_BCM7425,
.phy_id_mask = 0xfffffff0,
.name = "Broadcom BCM7425",
- .features = PHY_GBIT_FEATURES |
+ .features = PHY_BASIC_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_config_init,
@@ -349,7 +340,7 @@
.phy_id = PHY_ID_BCM7429,
.phy_id_mask = 0xfffffff0,
.name = "Broadcom BCM7429",
- .features = PHY_GBIT_FEATURES |
+ .features = PHY_BASIC_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_config_init,
@@ -361,7 +352,7 @@
.phy_id = PHY_ID_BCM7435,
.phy_id_mask = 0xfffffff0,
.name = "Broadcom BCM7435",
- .features = PHY_GBIT_FEATURES |
+ .features = PHY_BASIC_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
.flags = PHY_IS_INTERNAL,
.config_init = bcm7xxx_config_init,
@@ -369,30 +360,6 @@
.read_status = genphy_read_status,
.suspend = bcm7xxx_suspend,
.resume = bcm7xxx_config_init,
-}, {
- .phy_id = PHY_BCM_OUI_4,
- .phy_id_mask = 0xffff0000,
- .name = "Broadcom BCM7XXX 40nm",
- .features = PHY_GBIT_FEATURES |
- SUPPORTED_Pause | SUPPORTED_Asym_Pause,
- .flags = PHY_IS_INTERNAL,
- .config_init = bcm7xxx_config_init,
- .config_aneg = genphy_config_aneg,
- .read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_config_init,
-}, {
- .phy_id = PHY_BCM_OUI_5,
- .phy_id_mask = 0xffffff00,
- .name = "Broadcom BCM7XXX 65nm",
- .features = PHY_BASIC_FEATURES |
- SUPPORTED_Pause | SUPPORTED_Asym_Pause,
- .flags = PHY_IS_INTERNAL,
- .config_init = bcm7xxx_dummy_config_init,
- .config_aneg = genphy_config_aneg,
- .read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_config_init,
} };
static struct mdio_device_id __maybe_unused bcm7xxx_tbl[] = {
@@ -404,8 +371,6 @@
{ PHY_ID_BCM7439, 0xfffffff0, },
{ PHY_ID_BCM7435, 0xfffffff0, },
{ PHY_ID_BCM7445, 0xfffffff0, },
- { PHY_BCM_OUI_4, 0xffff0000 },
- { PHY_BCM_OUI_5, 0xffffff00 },
{ }
};
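The bcm7xxx resume fix is an argument-order bug: phy_set_clr_bits() takes the bits to set before the bits to clear, so passing MII_BCM7XXX_SHD_MODE_2 in the set slot meant the "reset shadow mode 2" step re-asserted the bit instead of clearing it. A toy model of the helper and the two calls:

#include <assert.h>

static int reg;	/* stands in for the MII_BCM7XXX_TEST register */

static void set_clr_bits(int set, int clr)
{
	reg = (reg & ~clr) | set;
}

int main(void)
{
	reg = 0x0004;			/* shadow mode 2 bit is set */
	set_clr_bits(0x0004, 0);	/* buggy order: bit stays set */
	assert(reg & 0x0004);
	set_clr_bits(0, 0x0004);	/* fixed order: bit is cleared */
	assert(!(reg & 0x0004));
	return 0;
}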
diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
index e3eb964..ab1d0fc 100644
--- a/drivers/net/phy/marvell.c
+++ b/drivers/net/phy/marvell.c
@@ -446,6 +446,12 @@
if (err < 0)
return err;
+ return 0;
+}
+
+static int marvell_config_init(struct phy_device *phydev)
+{
+ /* Set registers from marvell,reg-init DT property */
return marvell_of_reg_init(phydev);
}
@@ -495,7 +501,7 @@
mdelay(500);
- return 0;
+ return marvell_config_init(phydev);
}
static int m88e3016_config_init(struct phy_device *phydev)
@@ -514,7 +520,7 @@
if (reg < 0)
return reg;
- return 0;
+ return marvell_config_init(phydev);
}
static int m88e1111_config_init(struct phy_device *phydev)
@@ -1078,6 +1084,7 @@
.features = PHY_GBIT_FEATURES,
.probe = marvell_probe,
.flags = PHY_HAS_INTERRUPT,
+ .config_init = &marvell_config_init,
.config_aneg = &marvell_config_aneg,
.read_status = &genphy_read_status,
.ack_interrupt = &marvell_ack_interrupt,
@@ -1149,6 +1156,7 @@
.features = PHY_GBIT_FEATURES,
.flags = PHY_HAS_INTERRUPT,
.probe = marvell_probe,
+ .config_init = &marvell_config_init,
.config_aneg = &m88e1121_config_aneg,
.read_status = &marvell_read_status,
.ack_interrupt = &marvell_ack_interrupt,
@@ -1167,6 +1175,7 @@
.features = PHY_GBIT_FEATURES,
.flags = PHY_HAS_INTERRUPT,
.probe = marvell_probe,
+ .config_init = &marvell_config_init,
.config_aneg = &m88e1318_config_aneg,
.read_status = &marvell_read_status,
.ack_interrupt = &marvell_ack_interrupt,
@@ -1259,6 +1268,7 @@
.features = PHY_GBIT_FEATURES,
.flags = PHY_HAS_INTERRUPT,
.probe = marvell_probe,
+ .config_init = &marvell_config_init,
.config_aneg = &m88e1510_config_aneg,
.read_status = &marvell_read_status,
.ack_interrupt = &marvell_ack_interrupt,
@@ -1277,6 +1287,7 @@
.features = PHY_GBIT_FEATURES,
.flags = PHY_HAS_INTERRUPT,
.probe = marvell_probe,
+ .config_init = &marvell_config_init,
.config_aneg = &m88e1510_config_aneg,
.read_status = &marvell_read_status,
.ack_interrupt = &marvell_ack_interrupt,
diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
index bad3f00..e551f3a 100644
--- a/drivers/net/phy/phy_device.c
+++ b/drivers/net/phy/phy_device.c
@@ -1410,7 +1410,7 @@
features = (SUPPORTED_TP | SUPPORTED_MII
| SUPPORTED_AUI | SUPPORTED_FIBRE |
- SUPPORTED_BNC);
+ SUPPORTED_BNC | SUPPORTED_Pause | SUPPORTED_Asym_Pause);
/* Do we support autonegotiation? */
val = phy_read(phydev, MII_BMSR);
diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
index f3c6302..4ddae81 100644
--- a/drivers/net/ppp/pppoe.c
+++ b/drivers/net/ppp/pppoe.c
@@ -395,6 +395,8 @@
if (!__pppoe_xmit(sk_pppox(relay_po), skb))
goto abort_put;
+
+ sock_put(sk_pppox(relay_po));
} else {
if (sock_queue_rcv_skb(sk, skb))
goto abort_kfree;
diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
index 7f83504..cdde590 100644
--- a/drivers/net/usb/Kconfig
+++ b/drivers/net/usb/Kconfig
@@ -395,6 +395,10 @@
The protocol specification is incomplete, and is controlled by
(and for) Microsoft; it isn't an "Open" ecosystem or market.
+config USB_NET_CDC_SUBSET_ENABLE
+ tristate
+ depends on USB_NET_CDC_SUBSET
+
config USB_NET_CDC_SUBSET
tristate "Simple USB Network Links (CDC Ethernet subset)"
depends on USB_USBNET
@@ -413,6 +417,7 @@
config USB_ALI_M5632
bool "ALi M5632 based 'USB 2.0 Data Link' cables"
depends on USB_NET_CDC_SUBSET
+ select USB_NET_CDC_SUBSET_ENABLE
help
Choose this option if you're using a host-to-host cable
based on this design, which supports USB 2.0 high speed.
@@ -420,6 +425,7 @@
config USB_AN2720
bool "AnchorChips 2720 based cables (Xircom PGUNET, ...)"
depends on USB_NET_CDC_SUBSET
+ select USB_NET_CDC_SUBSET_ENABLE
help
Choose this option if you're using a host-to-host cable
based on this design. Note that AnchorChips is now a
@@ -428,6 +434,7 @@
config USB_BELKIN
bool "eTEK based host-to-host cables (Advance, Belkin, ...)"
depends on USB_NET_CDC_SUBSET
+ select USB_NET_CDC_SUBSET_ENABLE
default y
help
Choose this option if you're using a host-to-host cable
@@ -437,6 +444,7 @@
config USB_ARMLINUX
bool "Embedded ARM Linux links (iPaq, ...)"
depends on USB_NET_CDC_SUBSET
+ select USB_NET_CDC_SUBSET_ENABLE
default y
help
Choose this option to support the "usb-eth" networking driver
@@ -454,6 +462,7 @@
config USB_EPSON2888
bool "Epson 2888 based firmware (DEVELOPMENT)"
depends on USB_NET_CDC_SUBSET
+ select USB_NET_CDC_SUBSET_ENABLE
help
Choose this option to support the usb networking links used
by some sample firmware from Epson.
@@ -461,6 +470,7 @@
config USB_KC2190
bool "KT Technology KC2190 based cables (InstaNet)"
depends on USB_NET_CDC_SUBSET
+ select USB_NET_CDC_SUBSET_ENABLE
help
Choose this option if you're using a host-to-host cable
with one of these chips.
diff --git a/drivers/net/usb/Makefile b/drivers/net/usb/Makefile
index b5f0406..37fb46ae 100644
--- a/drivers/net/usb/Makefile
+++ b/drivers/net/usb/Makefile
@@ -23,7 +23,7 @@
obj-$(CONFIG_USB_NET_NET1080) += net1080.o
obj-$(CONFIG_USB_NET_PLUSB) += plusb.o
obj-$(CONFIG_USB_NET_RNDIS_HOST) += rndis_host.o
-obj-$(CONFIG_USB_NET_CDC_SUBSET) += cdc_subset.o
+obj-$(CONFIG_USB_NET_CDC_SUBSET_ENABLE) += cdc_subset.o
obj-$(CONFIG_USB_NET_ZAURUS) += zaurus.o
obj-$(CONFIG_USB_NET_MCS7830) += mcs7830.o
obj-$(CONFIG_USB_USBNET) += usbnet.o
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index 23e9880..570deef 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -637,6 +637,7 @@
/* 3. Combined interface devices matching on interface number */
{QMI_FIXED_INTF(0x0408, 0xea42, 4)}, /* Yota / Megafon M100-1 */
+ {QMI_FIXED_INTF(0x05c6, 0x6001, 3)}, /* 4G LTE usb-modem U901 */
{QMI_FIXED_INTF(0x05c6, 0x7000, 0)},
{QMI_FIXED_INTF(0x05c6, 0x7001, 1)},
{QMI_FIXED_INTF(0x05c6, 0x7002, 1)},
diff --git a/drivers/net/vmxnet3/vmxnet3_defs.h b/drivers/net/vmxnet3/vmxnet3_defs.h
index 221a530..72ba8ae 100644
--- a/drivers/net/vmxnet3/vmxnet3_defs.h
+++ b/drivers/net/vmxnet3/vmxnet3_defs.h
@@ -377,7 +377,7 @@
#define VMXNET3_TX_RING_MAX_SIZE 4096
#define VMXNET3_TC_RING_MAX_SIZE 4096
#define VMXNET3_RX_RING_MAX_SIZE 4096
-#define VMXNET3_RX_RING2_MAX_SIZE 2048
+#define VMXNET3_RX_RING2_MAX_SIZE 4096
#define VMXNET3_RC_RING_MAX_SIZE 8192
/* a list of reasons for queue stop */
diff --git a/drivers/net/vmxnet3/vmxnet3_int.h b/drivers/net/vmxnet3/vmxnet3_int.h
index bdb8a6c..729c344 100644
--- a/drivers/net/vmxnet3/vmxnet3_int.h
+++ b/drivers/net/vmxnet3/vmxnet3_int.h
@@ -69,10 +69,10 @@
/*
* Version numbers
*/
-#define VMXNET3_DRIVER_VERSION_STRING "1.4.5.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING "1.4.6.0-k"
/* a 32-bit int, each byte encodes a version number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM 0x01040500
+#define VMXNET3_DRIVER_VERSION_NUM 0x01040600
#if defined(CONFIG_PCI_MSI)
/* RSS only makes sense if MSI-X is supported. */
diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index 6543918..e6944b2 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -2171,9 +2171,11 @@
#endif
}
- if (vxlan->flags & VXLAN_F_COLLECT_METADATA &&
- info && info->mode & IP_TUNNEL_INFO_TX) {
- vxlan_xmit_one(skb, dev, NULL, false);
+ if (vxlan->flags & VXLAN_F_COLLECT_METADATA) {
+ if (info && info->mode & IP_TUNNEL_INFO_TX)
+ vxlan_xmit_one(skb, dev, NULL, false);
+ else
+ kfree_skb(skb);
return NETDEV_TX_OK;
}
@@ -2367,27 +2369,41 @@
{
}
+static int __vxlan_change_mtu(struct net_device *dev,
+ struct net_device *lowerdev,
+ struct vxlan_rdst *dst, int new_mtu, bool strict)
+{
+ int max_mtu = IP_MAX_MTU;
+
+ if (lowerdev)
+ max_mtu = lowerdev->mtu;
+
+ if (dst->remote_ip.sa.sa_family == AF_INET6)
+ max_mtu -= VXLAN6_HEADROOM;
+ else
+ max_mtu -= VXLAN_HEADROOM;
+
+ if (new_mtu < 68)
+ return -EINVAL;
+
+ if (new_mtu > max_mtu) {
+ if (strict)
+ return -EINVAL;
+
+ new_mtu = max_mtu;
+ }
+
+ dev->mtu = new_mtu;
+ return 0;
+}
+
static int vxlan_change_mtu(struct net_device *dev, int new_mtu)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
struct vxlan_rdst *dst = &vxlan->default_dst;
- struct net_device *lowerdev;
- int max_mtu;
-
- lowerdev = __dev_get_by_index(vxlan->net, dst->remote_ifindex);
- if (lowerdev == NULL)
- return eth_change_mtu(dev, new_mtu);
-
- if (dst->remote_ip.sa.sa_family == AF_INET6)
- max_mtu = lowerdev->mtu - VXLAN6_HEADROOM;
- else
- max_mtu = lowerdev->mtu - VXLAN_HEADROOM;
-
- if (new_mtu < 68 || new_mtu > max_mtu)
- return -EINVAL;
-
- dev->mtu = new_mtu;
- return 0;
+ struct net_device *lowerdev = __dev_get_by_index(vxlan->net,
+ dst->remote_ifindex);
+ return __vxlan_change_mtu(dev, lowerdev, dst, new_mtu, true);
}
static int egress_ipv4_tun_info(struct net_device *dev, struct sk_buff *skb,
@@ -2523,6 +2539,7 @@
dev->hw_features |= NETIF_F_GSO_SOFTWARE;
dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
netif_keep_dst(dev);
+ dev->priv_flags &= ~IFF_TX_SKB_SHARING;
dev->priv_flags |= IFF_LIVE_ADDR_CHANGE | IFF_NO_QUEUE;
INIT_LIST_HEAD(&vxlan->next);
@@ -2765,6 +2782,7 @@
int err;
bool use_ipv6 = false;
__be16 default_port = vxlan->cfg.dst_port;
+ struct net_device *lowerdev = NULL;
vxlan->net = src_net;
@@ -2785,9 +2803,7 @@
}
if (conf->remote_ifindex) {
- struct net_device *lowerdev
- = __dev_get_by_index(src_net, conf->remote_ifindex);
-
+ lowerdev = __dev_get_by_index(src_net, conf->remote_ifindex);
dst->remote_ifindex = conf->remote_ifindex;
if (!lowerdev) {
@@ -2811,6 +2827,12 @@
needed_headroom = lowerdev->hard_header_len;
}
+ if (conf->mtu) {
+ err = __vxlan_change_mtu(dev, lowerdev, dst, conf->mtu, false);
+ if (err)
+ return err;
+ }
+
if (use_ipv6 || conf->flags & VXLAN_F_COLLECT_METADATA)
needed_headroom += VXLAN6_HEADROOM;
else
diff --git a/drivers/net/wan/dscc4.c b/drivers/net/wan/dscc4.c
index 7a72407..6292259 100644
--- a/drivers/net/wan/dscc4.c
+++ b/drivers/net/wan/dscc4.c
@@ -1626,7 +1626,7 @@
if (state & Xpr) {
void __iomem *scc_addr;
unsigned long ring;
- int i;
+ unsigned int i;
/*
* - the busy condition happens (sometimes);
diff --git a/drivers/net/wireless/intel/iwlwifi/Kconfig b/drivers/net/wireless/intel/iwlwifi/Kconfig
index 8660677..7438fbe 100644
--- a/drivers/net/wireless/intel/iwlwifi/Kconfig
+++ b/drivers/net/wireless/intel/iwlwifi/Kconfig
@@ -53,7 +53,6 @@
config IWLDVM
tristate "Intel Wireless WiFi DVM Firmware support"
- depends on m
help
This is the driver that supports the DVM firmware. The list
of the devices that use this firmware is available here:
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-8000.c b/drivers/net/wireless/intel/iwlwifi/iwl-8000.c
index c84a029..bce9b3420 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-8000.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-8000.c
@@ -7,6 +7,7 @@
*
* Copyright(c) 2014 Intel Corporation. All rights reserved.
* Copyright(c) 2014 - 2015 Intel Mobile Communications GmbH
+ * Copyright(c) 2016 Intel Deutschland GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -70,12 +71,15 @@
/* Highest firmware API version supported */
#define IWL8000_UCODE_API_MAX 20
+#define IWL8265_UCODE_API_MAX 20
/* Oldest version we won't warn about */
#define IWL8000_UCODE_API_OK 13
+#define IWL8265_UCODE_API_OK 20
/* Lowest firmware API version supported */
#define IWL8000_UCODE_API_MIN 13
+#define IWL8265_UCODE_API_MIN 20
/* NVM versions */
#define IWL8000_NVM_VERSION 0x0a1d
@@ -93,6 +97,10 @@
#define IWL8000_MODULE_FIRMWARE(api) \
IWL8000_FW_PRE "-" __stringify(api) ".ucode"
+#define IWL8265_FW_PRE "iwlwifi-8265-"
+#define IWL8265_MODULE_FIRMWARE(api) \
+ IWL8265_FW_PRE __stringify(api) ".ucode"
+
#define NVM_HW_SECTION_NUM_FAMILY_8000 10
#define DEFAULT_NVM_FILE_FAMILY_8000B "nvmData-8000B"
#define DEFAULT_NVM_FILE_FAMILY_8000C "nvmData-8000C"
@@ -144,10 +152,7 @@
.support_tx_backoff = true,
};
-#define IWL_DEVICE_8000 \
- .ucode_api_max = IWL8000_UCODE_API_MAX, \
- .ucode_api_ok = IWL8000_UCODE_API_OK, \
- .ucode_api_min = IWL8000_UCODE_API_MIN, \
+#define IWL_DEVICE_8000_COMMON \
.device_family = IWL_DEVICE_FAMILY_8000, \
.max_inst_size = IWL60_RTC_INST_SIZE, \
.max_data_size = IWL60_RTC_DATA_SIZE, \
@@ -167,10 +172,28 @@
.thermal_params = &iwl8000_tt_params, \
.apmg_not_supported = true
+#define IWL_DEVICE_8000 \
+ IWL_DEVICE_8000_COMMON, \
+ .ucode_api_max = IWL8000_UCODE_API_MAX, \
+ .ucode_api_ok = IWL8000_UCODE_API_OK, \
+ .ucode_api_min = IWL8000_UCODE_API_MIN \
+
+#define IWL_DEVICE_8260 \
+ IWL_DEVICE_8000_COMMON, \
+ .ucode_api_max = IWL8000_UCODE_API_MAX, \
+ .ucode_api_ok = IWL8000_UCODE_API_OK, \
+ .ucode_api_min = IWL8000_UCODE_API_MIN \
+
+#define IWL_DEVICE_8265 \
+ IWL_DEVICE_8000_COMMON, \
+ .ucode_api_max = IWL8265_UCODE_API_MAX, \
+ .ucode_api_ok = IWL8265_UCODE_API_OK, \
+ .ucode_api_min = IWL8265_UCODE_API_MIN \
+
const struct iwl_cfg iwl8260_2n_cfg = {
.name = "Intel(R) Dual Band Wireless N 8260",
.fw_name_pre = IWL8000_FW_PRE,
- IWL_DEVICE_8000,
+ IWL_DEVICE_8260,
.ht_params = &iwl8000_ht_params,
.nvm_ver = IWL8000_NVM_VERSION,
.nvm_calib_ver = IWL8000_TX_POWER_VERSION,
@@ -179,7 +202,7 @@
const struct iwl_cfg iwl8260_2ac_cfg = {
.name = "Intel(R) Dual Band Wireless AC 8260",
.fw_name_pre = IWL8000_FW_PRE,
- IWL_DEVICE_8000,
+ IWL_DEVICE_8260,
.ht_params = &iwl8000_ht_params,
.nvm_ver = IWL8000_NVM_VERSION,
.nvm_calib_ver = IWL8000_TX_POWER_VERSION,
@@ -188,8 +211,8 @@
const struct iwl_cfg iwl8265_2ac_cfg = {
.name = "Intel(R) Dual Band Wireless AC 8265",
- .fw_name_pre = IWL8000_FW_PRE,
- IWL_DEVICE_8000,
+ .fw_name_pre = IWL8265_FW_PRE,
+ IWL_DEVICE_8265,
.ht_params = &iwl8000_ht_params,
.nvm_ver = IWL8000_NVM_VERSION,
.nvm_calib_ver = IWL8000_TX_POWER_VERSION,
@@ -209,7 +232,7 @@
const struct iwl_cfg iwl8260_2ac_sdio_cfg = {
.name = "Intel(R) Dual Band Wireless-AC 8260",
.fw_name_pre = IWL8000_FW_PRE,
- IWL_DEVICE_8000,
+ IWL_DEVICE_8260,
.ht_params = &iwl8000_ht_params,
.nvm_ver = IWL8000_NVM_VERSION,
.nvm_calib_ver = IWL8000_TX_POWER_VERSION,
@@ -236,3 +259,4 @@
};
MODULE_FIRMWARE(IWL8000_MODULE_FIRMWARE(IWL8000_UCODE_API_OK));
+MODULE_FIRMWARE(IWL8265_MODULE_FIRMWARE(IWL8265_UCODE_API_OK));
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
index 7acb490..ab4c2a0 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
@@ -243,8 +243,10 @@
if (drv->trans->cfg->device_family == IWL_DEVICE_FAMILY_8000) {
char rev_step = 'A' + CSR_HW_REV_STEP(drv->trans->hw_rev);
- snprintf(drv->firmware_name, sizeof(drv->firmware_name),
- "%s%c-%s.ucode", name_pre, rev_step, tag);
+ if (rev_step != 'A')
+ snprintf(drv->firmware_name,
+ sizeof(drv->firmware_name), "%s%c-%s.ucode",
+ name_pre, rev_step, tag);
}
IWL_DEBUG_INFO(drv, "attempting to load firmware %s'%s'\n",
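A note on the naming change above: the step suffix is now appended only for
non-A steps, so A-step parts keep the plain "<prefix><tag>.ucode" name
composed earlier in the function. A minimal sketch of the resulting policy
(generic C, not the driver's actual helper; the earlier default-name
assignment is assumed):

	#include <stdio.h>

	static void fw_name(char *buf, size_t len, const char *pre,
			    char step, const char *tag)
	{
		if (step == 'A')	/* keep the default name */
			snprintf(buf, len, "%s%s.ucode", pre, tag);
		else			/* e.g. "iwlwifi-8000C-B-13.ucode" */
			snprintf(buf, len, "%s%c-%s.ucode", pre, step, tag);
	}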
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
index 9a15642..ea1e177 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
@@ -1298,6 +1298,10 @@
return -EBUSY;
}
+ /* we don't support "match all" in the firmware */
+ if (!req->n_match_sets)
+ return -EOPNOTSUPP;
+
ret = iwl_mvm_check_running_scans(mvm, type);
if (ret)
return ret;
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
index cc3888e..73c9559 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
@@ -490,6 +490,15 @@
iwl_write32(trans, CSR_INT_MASK, trans_pcie->inta_mask);
}
+static inline void iwl_enable_fw_load_int(struct iwl_trans *trans)
+{
+ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+
+ IWL_DEBUG_ISR(trans, "Enabling FW load interrupt\n");
+ trans_pcie->inta_mask = CSR_INT_BIT_FH_TX;
+ iwl_write32(trans, CSR_INT_MASK, trans_pcie->inta_mask);
+}
+
static inline void iwl_enable_rfkill_int(struct iwl_trans *trans)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
index ccafbd8..152cf9a 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1438,9 +1438,11 @@
inta & ~trans_pcie->inta_mask);
}
- /* Re-enable all interrupts */
- /* only Re-enable if disabled by irq */
- if (test_bit(STATUS_INT_ENABLED, &trans->status))
+ /* we are loading the firmware, enable FH_TX interrupt only */
+ if (handled & CSR_INT_BIT_FH_TX)
+ iwl_enable_fw_load_int(trans);
+ /* only re-enable all interrupts if disabled by irq */
+ else if (test_bit(STATUS_INT_ENABLED, &trans->status))
iwl_enable_interrupts(trans);
/* Re-enable RF_KILL if it occurred */
else if (handled & CSR_INT_BIT_RF_KILL)
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
index d60a467..5a854c6 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
@@ -1021,82 +1021,6 @@
&first_ucode_section);
}
-static int iwl_trans_pcie_start_fw(struct iwl_trans *trans,
- const struct fw_img *fw, bool run_in_rfkill)
-{
- struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
- bool hw_rfkill;
- int ret;
-
- mutex_lock(&trans_pcie->mutex);
-
- /* Someone called stop_device, don't try to start_fw */
- if (trans_pcie->is_down) {
- IWL_WARN(trans,
- "Can't start_fw since the HW hasn't been started\n");
- ret = EIO;
- goto out;
- }
-
- /* This may fail if AMT took ownership of the device */
- if (iwl_pcie_prepare_card_hw(trans)) {
- IWL_WARN(trans, "Exit HW not ready\n");
- ret = -EIO;
- goto out;
- }
-
- iwl_enable_rfkill_int(trans);
-
- /* If platform's RF_KILL switch is NOT set to KILL */
- hw_rfkill = iwl_is_rfkill_set(trans);
- if (hw_rfkill)
- set_bit(STATUS_RFKILL, &trans->status);
- else
- clear_bit(STATUS_RFKILL, &trans->status);
- iwl_trans_pcie_rf_kill(trans, hw_rfkill);
- if (hw_rfkill && !run_in_rfkill) {
- ret = -ERFKILL;
- goto out;
- }
-
- iwl_write32(trans, CSR_INT, 0xFFFFFFFF);
-
- ret = iwl_pcie_nic_init(trans);
- if (ret) {
- IWL_ERR(trans, "Unable to init nic\n");
- goto out;
- }
-
- /* make sure rfkill handshake bits are cleared */
- iwl_write32(trans, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
- iwl_write32(trans, CSR_UCODE_DRV_GP1_CLR,
- CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
-
- /* clear (again), then enable host interrupts */
- iwl_write32(trans, CSR_INT, 0xFFFFFFFF);
- iwl_enable_interrupts(trans);
-
- /* really make sure rfkill handshake bits are cleared */
- iwl_write32(trans, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
- iwl_write32(trans, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
-
- /* Load the given image to the HW */
- if (trans->cfg->device_family == IWL_DEVICE_FAMILY_8000)
- ret = iwl_pcie_load_given_ucode_8000(trans, fw);
- else
- ret = iwl_pcie_load_given_ucode(trans, fw);
-
-out:
- mutex_unlock(&trans_pcie->mutex);
- return ret;
-}
-
-static void iwl_trans_pcie_fw_alive(struct iwl_trans *trans, u32 scd_addr)
-{
- iwl_pcie_reset_ict(trans);
- iwl_pcie_tx_start(trans, scd_addr);
-}
-
static void _iwl_trans_pcie_stop_device(struct iwl_trans *trans, bool low_power)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -1127,7 +1051,8 @@
* already dead.
*/
if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) {
- IWL_DEBUG_INFO(trans, "DEVICE_ENABLED bit was set and is now cleared\n");
+ IWL_DEBUG_INFO(trans,
+ "DEVICE_ENABLED bit was set and is now cleared\n");
iwl_pcie_tx_stop(trans);
iwl_pcie_rx_stop(trans);
@@ -1161,7 +1086,6 @@
iwl_disable_interrupts(trans);
spin_unlock(&trans_pcie->irq_lock);
-
/* clear all status bits */
clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status);
clear_bit(STATUS_INT_ENABLED, &trans->status);
@@ -1194,10 +1118,116 @@
if (hw_rfkill != was_hw_rfkill)
iwl_trans_pcie_rf_kill(trans, hw_rfkill);
- /* re-take ownership to prevent other users from stealing the deivce */
+ /* re-take ownership to prevent other users from stealing the device */
iwl_pcie_prepare_card_hw(trans);
}
+static int iwl_trans_pcie_start_fw(struct iwl_trans *trans,
+ const struct fw_img *fw, bool run_in_rfkill)
+{
+ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ bool hw_rfkill;
+ int ret;
+
+ /* This may fail if AMT took ownership of the device */
+ if (iwl_pcie_prepare_card_hw(trans)) {
+ IWL_WARN(trans, "Exit HW not ready\n");
+ ret = -EIO;
+ goto out;
+ }
+
+ iwl_enable_rfkill_int(trans);
+
+ iwl_write32(trans, CSR_INT, 0xFFFFFFFF);
+
+ /*
+ * We enabled the RF-Kill interrupt and the handler may very
+ * well be running. Disable the interrupts to make sure no other
+ * interrupt can be fired.
+ */
+ iwl_disable_interrupts(trans);
+
+ /* Make sure it finished running */
+ synchronize_irq(trans_pcie->pci_dev->irq);
+
+ mutex_lock(&trans_pcie->mutex);
+
+ /* If platform's RF_KILL switch is NOT set to KILL */
+ hw_rfkill = iwl_is_rfkill_set(trans);
+ if (hw_rfkill)
+ set_bit(STATUS_RFKILL, &trans->status);
+ else
+ clear_bit(STATUS_RFKILL, &trans->status);
+ iwl_trans_pcie_rf_kill(trans, hw_rfkill);
+ if (hw_rfkill && !run_in_rfkill) {
+ ret = -ERFKILL;
+ goto out;
+ }
+
+ /* Someone called stop_device, don't try to start_fw */
+ if (trans_pcie->is_down) {
+ IWL_WARN(trans,
+ "Can't start_fw since the HW hasn't been started\n");
+ ret = -EIO;
+ goto out;
+ }
+
+ /* make sure rfkill handshake bits are cleared */
+ iwl_write32(trans, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+ iwl_write32(trans, CSR_UCODE_DRV_GP1_CLR,
+ CSR_UCODE_DRV_GP1_BIT_CMD_BLOCKED);
+
+ /* clear (again), then enable host interrupts */
+ iwl_write32(trans, CSR_INT, 0xFFFFFFFF);
+
+ ret = iwl_pcie_nic_init(trans);
+ if (ret) {
+ IWL_ERR(trans, "Unable to init nic\n");
+ goto out;
+ }
+
+ /*
+ * Now, we load the firmware and don't want to be interrupted, even
+ * by the RF-Kill interrupt (hence mask all the interrupts besides the
+ * FH_TX interrupt which is needed to load the firmware). If the
+ * RF-Kill switch is toggled, we will find out after having loaded
+ * the firmware and return the proper value to the caller.
+ */
+ iwl_enable_fw_load_int(trans);
+
+ /* really make sure rfkill handshake bits are cleared */
+ iwl_write32(trans, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+ iwl_write32(trans, CSR_UCODE_DRV_GP1_CLR, CSR_UCODE_SW_BIT_RFKILL);
+
+ /* Load the given image to the HW */
+ if (trans->cfg->device_family == IWL_DEVICE_FAMILY_8000)
+ ret = iwl_pcie_load_given_ucode_8000(trans, fw);
+ else
+ ret = iwl_pcie_load_given_ucode(trans, fw);
+ iwl_enable_interrupts(trans);
+
+ /* re-check RF-Kill state since we may have missed the interrupt */
+ hw_rfkill = iwl_is_rfkill_set(trans);
+ if (hw_rfkill)
+ set_bit(STATUS_RFKILL, &trans->status);
+ else
+ clear_bit(STATUS_RFKILL, &trans->status);
+
+ iwl_trans_pcie_rf_kill(trans, hw_rfkill);
+ if (hw_rfkill && !run_in_rfkill)
+ ret = -ERFKILL;
+
+out:
+ mutex_unlock(&trans_pcie->mutex);
+ return ret;
+}
+
+static void iwl_trans_pcie_fw_alive(struct iwl_trans *trans, u32 scd_addr)
+{
+ iwl_pcie_reset_ict(trans);
+ iwl_pcie_tx_start(trans, scd_addr);
+}
+
static void iwl_trans_pcie_stop_device(struct iwl_trans *trans, bool low_power)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
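To summarize the reordering above: only the FH_TX (load-chunk-done)
interrupt stays enabled while the ucode is written, so a toggling RF-kill
switch cannot run its handler mid-load, and rfkill is re-checked once
loading is done. A condensed sketch of the new sequence (load_ucode() and
rfkill_set() below are stand-ins for the driver's calls, not real
functions):

	iwl_disable_interrupts(trans);		/* quiesce the ISR... */
	synchronize_irq(trans_pcie->pci_dev->irq); /* ...and wait it out */
	iwl_enable_fw_load_int(trans);		/* FH_TX only from here */
	ret = load_ucode(trans, fw);		/* no rfkill ISR meanwhile */
	iwl_enable_interrupts(trans);		/* restore the full mask */
	if (rfkill_set(trans) && !run_in_rfkill)
		ret = -ERFKILL;			/* report what we missed */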
diff --git a/drivers/net/wireless/realtek/rtlwifi/rc.c b/drivers/net/wireless/realtek/rtlwifi/rc.c
index 74c14ce..28f7010 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rc.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rc.c
@@ -138,6 +138,11 @@
((wireless_mode == WIRELESS_MODE_N_5G) ||
(wireless_mode == WIRELESS_MODE_N_24G)))
rate->flags |= IEEE80211_TX_RC_MCS;
+ if (sta && sta->vht_cap.vht_supported &&
+ (wireless_mode == WIRELESS_MODE_AC_5G ||
+ wireless_mode == WIRELESS_MODE_AC_24G ||
+ wireless_mode == WIRELESS_MODE_AC_ONLY))
+ rate->flags |= IEEE80211_TX_RC_VHT_MCS;
}
}
diff --git a/drivers/net/wireless/ti/wlcore/io.c b/drivers/net/wireless/ti/wlcore/io.c
index 9ac118e..564ca75 100644
--- a/drivers/net/wireless/ti/wlcore/io.c
+++ b/drivers/net/wireless/ti/wlcore/io.c
@@ -175,14 +175,14 @@
if (ret < 0)
goto out;
+ /* We don't need the size of the last partition, as it is
+ * automatically calculated based on the total memory size and
+ * the sizes of the previous partitions.
+ */
ret = wlcore_raw_write32(wl, HW_PART3_START_ADDR, p->mem3.start);
if (ret < 0)
goto out;
- ret = wlcore_raw_write32(wl, HW_PART3_SIZE_ADDR, p->mem3.size);
- if (ret < 0)
- goto out;
-
out:
return ret;
}
diff --git a/drivers/net/wireless/ti/wlcore/io.h b/drivers/net/wireless/ti/wlcore/io.h
index 6c257b54..10cf374 100644
--- a/drivers/net/wireless/ti/wlcore/io.h
+++ b/drivers/net/wireless/ti/wlcore/io.h
@@ -36,8 +36,8 @@
#define HW_PART1_START_ADDR (HW_PARTITION_REGISTERS_ADDR + 12)
#define HW_PART2_SIZE_ADDR (HW_PARTITION_REGISTERS_ADDR + 16)
#define HW_PART2_START_ADDR (HW_PARTITION_REGISTERS_ADDR + 20)
-#define HW_PART3_SIZE_ADDR (HW_PARTITION_REGISTERS_ADDR + 24)
-#define HW_PART3_START_ADDR (HW_PARTITION_REGISTERS_ADDR + 28)
+#define HW_PART3_START_ADDR (HW_PARTITION_REGISTERS_ADDR + 24)
+
#define HW_ACCESS_REGISTER_SIZE 4
#define HW_ACCESS_PRAM_MAX_RANGE 0x3c000
diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
index 7e2c43f..5d28e94 100644
--- a/drivers/nvdimm/bus.c
+++ b/drivers/nvdimm/bus.c
@@ -382,18 +382,18 @@
[ND_CMD_ARS_CAP] = {
.in_num = 2,
.in_sizes = { 8, 8, },
+ .out_num = 4,
+ .out_sizes = { 4, 4, 4, 4, },
+ },
+ [ND_CMD_ARS_START] = {
+ .in_num = 5,
+ .in_sizes = { 8, 8, 2, 1, 5, },
.out_num = 2,
.out_sizes = { 4, 4, },
},
- [ND_CMD_ARS_START] = {
- .in_num = 4,
- .in_sizes = { 8, 8, 2, 6, },
- .out_num = 1,
- .out_sizes = { 4, },
- },
[ND_CMD_ARS_STATUS] = {
- .out_num = 2,
- .out_sizes = { 4, UINT_MAX, },
+ .out_num = 3,
+ .out_sizes = { 4, 4, UINT_MAX, },
},
};
@@ -442,8 +442,8 @@
return in_field[1];
else if (nvdimm && cmd == ND_CMD_VENDOR && idx == 2)
return out_field[1];
- else if (!nvdimm && cmd == ND_CMD_ARS_STATUS && idx == 1)
- return ND_CMD_ARS_STATUS_MAX;
+ else if (!nvdimm && cmd == ND_CMD_ARS_STATUS && idx == 2)
+ return out_field[1] - 8;
return UINT_MAX;
}
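On the ARS_STATUS sizing above: as the hunk implies, out_field[1] is the
total output length reported by the command, including the two fixed
4-byte fields (status and size), so the variable-length record area is
that total minus 8. A worked example with made-up numbers:

	u32 out_len = 256;		/* out_field[1]: total output size */
	u32 records = out_len - 8;	/* 248 bytes of ARS records follow */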
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 7edf316..8d0b546 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -41,7 +41,7 @@
phys_addr_t phys_addr;
/* when non-zero this device is hosting a 'pfn' instance */
phys_addr_t data_offset;
- unsigned long pfn_flags;
+ u64 pfn_flags;
void __pmem *virt_addr;
size_t size;
struct badblocks bb;
diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
index 5d62373..b586d84 100644
--- a/drivers/nvme/host/Kconfig
+++ b/drivers/nvme/host/Kconfig
@@ -17,5 +17,6 @@
and block device nodes, as well as a translation for a small
number of selected SCSI commands to NVMe commands to the NVMe
driver. If you don't know what this means you probably want
- to say N here, and if you know what it means you probably
- want to say N as well.
+ to say N here, unless you run a distro that abuses the SCSI
+ emulation to provide stable device names for mount by id, like
+ some OpenSuSE and SLES versions.
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c5bf001..3cd921e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1121,7 +1121,6 @@
ns->queue = blk_mq_init_queue(ctrl->tagset);
if (IS_ERR(ns->queue))
goto out_free_ns;
- queue_flag_set_unlocked(QUEUE_FLAG_NOMERGES, ns->queue);
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, ns->queue);
ns->queue->queuedata = ns;
ns->ctrl = ctrl;
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 5cd3725..6bb15e4 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -146,9 +146,10 @@
};
};
+#define NVME_NVM_LP_MLC_PAIRS 886
struct nvme_nvm_lp_mlc {
__u16 num_pairs;
- __u8 pairs[886];
+ __u8 pairs[NVME_NVM_LP_MLC_PAIRS];
};
struct nvme_nvm_lp_tbl {
@@ -282,9 +283,14 @@
memcpy(dst->lptbl.id, src->lptbl.id, 8);
dst->lptbl.mlc.num_pairs =
le16_to_cpu(src->lptbl.mlc.num_pairs);
- /* 4 bits per pair */
+
+ if (dst->lptbl.mlc.num_pairs > NVME_NVM_LP_MLC_PAIRS) {
+ pr_err("nvm: number of MLC pairs not supported\n");
+ return -EINVAL;
+ }
+
memcpy(dst->lptbl.mlc.pairs, src->lptbl.mlc.pairs,
- dst->lptbl.mlc.num_pairs >> 1);
+ dst->lptbl.mlc.num_pairs);
}
}
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 4fb5bb7..9664d07 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -139,9 +139,9 @@
u32 val = 0;
if (ctrl->ops->io_incapable(ctrl))
- return false;
+ return true;
if (ctrl->ops->reg_read32(ctrl, NVME_REG_CSTS, &val))
- return false;
+ return true;
return val & NVME_CSTS_CFS;
}
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 72ef832..a128672 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -678,6 +678,11 @@
blk_mq_start_request(req);
spin_lock_irq(&nvmeq->q_lock);
+ if (unlikely(nvmeq->cq_vector < 0)) {
+ ret = BLK_MQ_RQ_QUEUE_BUSY;
+ spin_unlock_irq(&nvmeq->q_lock);
+ goto out;
+ }
__nvme_submit_cmd(nvmeq, &cmnd);
nvme_process_cq(nvmeq);
spin_unlock_irq(&nvmeq->q_lock);
@@ -999,7 +1004,7 @@
if (!blk_mq_request_started(req))
return;
- dev_warn(nvmeq->q_dmadev,
+ dev_dbg_ratelimited(nvmeq->q_dmadev,
"Cancelling I/O %d QID %d\n", req->tag, nvmeq->qid);
status = NVME_SC_ABORT_REQ;
@@ -2111,16 +2116,12 @@
{
struct nvme_dev *dev = pci_get_drvdata(pdev);
- spin_lock(&dev_list_lock);
- list_del_init(&dev->node);
- spin_unlock(&dev_list_lock);
-
pci_set_drvdata(pdev, NULL);
- flush_work(&dev->reset_work);
flush_work(&dev->scan_work);
nvme_remove_namespaces(&dev->ctrl);
nvme_uninit_ctrl(&dev->ctrl);
nvme_dev_disable(dev, true);
+ flush_work(&dev->reset_work);
nvme_dev_remove_admin(dev);
nvme_free_queues(dev, 0);
nvme_release_cmb(dev);
diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig
index bc4ea58..5bd18cc 100644
--- a/drivers/nvmem/Kconfig
+++ b/drivers/nvmem/Kconfig
@@ -25,6 +25,15 @@
This driver can also be built as a module. If so, the module
will be called nvmem-imx-ocotp.
+config NVMEM_LPC18XX_EEPROM
+ tristate "NXP LPC18XX EEPROM Memory Support"
+ depends on ARCH_LPC18XX || COMPILE_TEST
+ help
+ Say Y here to include support for NXP LPC18xx EEPROM memory found in
+ NXP LPC185x/3x and LPC435x/3x/2x/1x devices.
+ To compile this driver as a module, choose M here: the module
+ will be called nvmem_lpc18xx_eeprom.
+
config NVMEM_MXS_OCOTP
tristate "Freescale MXS On-Chip OTP Memory Support"
depends on ARCH_MXS || COMPILE_TEST
@@ -36,6 +45,17 @@
This driver can also be built as a module. If so, the module
will be called nvmem-mxs-ocotp.
+config MTK_EFUSE
+ tristate "Mediatek SoCs EFUSE support"
+ depends on ARCH_MEDIATEK || COMPILE_TEST
+ select REGMAP_MMIO
+ help
+ This is a driver to access hardware-related data like sensor
+ calibration, HDMI impedance, etc.
+
+ This driver can also be built as a module. If so, the module
+ will be called efuse-mtk.
+
config QCOM_QFPROM
tristate "QCOM QFPROM Support"
depends on ARCH_QCOM || COMPILE_TEST
diff --git a/drivers/nvmem/Makefile b/drivers/nvmem/Makefile
index 95dde3f..45ab1ae 100644
--- a/drivers/nvmem/Makefile
+++ b/drivers/nvmem/Makefile
@@ -8,8 +8,12 @@
# Devices
obj-$(CONFIG_NVMEM_IMX_OCOTP) += nvmem-imx-ocotp.o
nvmem-imx-ocotp-y := imx-ocotp.o
+obj-$(CONFIG_NVMEM_LPC18XX_EEPROM) += nvmem_lpc18xx_eeprom.o
+nvmem_lpc18xx_eeprom-y := lpc18xx_eeprom.o
obj-$(CONFIG_NVMEM_MXS_OCOTP) += nvmem-mxs-ocotp.o
nvmem-mxs-ocotp-y := mxs-ocotp.o
+obj-$(CONFIG_MTK_EFUSE) += nvmem_mtk-efuse.o
+nvmem_mtk-efuse-y := mtk-efuse.o
obj-$(CONFIG_QCOM_QFPROM) += nvmem_qfprom.o
nvmem_qfprom-y := qfprom.o
obj-$(CONFIG_ROCKCHIP_EFUSE) += nvmem_rockchip_efuse.o
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 6fd4e5a..de14fae 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -70,6 +70,9 @@
if (pos >= nvmem->size)
return 0;
+ if (count < nvmem->word_size)
+ return -EINVAL;
+
if (pos + count > nvmem->size)
count = nvmem->size - pos;
@@ -95,6 +98,9 @@
if (pos >= nvmem->size)
return 0;
+ if (count < nvmem->word_size)
+ return -EINVAL;
+
if (pos + count > nvmem->size)
count = nvmem->size - pos;
@@ -288,9 +294,11 @@
return 0;
err:
- while (--i)
+ while (i--)
nvmem_cell_drop(cells[i]);
+ kfree(cells);
+
return rval;
}
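The unwind fix above is the classic "while (--i)" off-by-one: with i cells
registered at indices 0..i-1, "while (i--)" visits i-1 down to 0, whereas
the old form skipped cells[0] (and for i == 1 dropped nothing at all). A
standalone illustration, generic C rather than driver code:

	#include <stdio.h>

	int main(void)
	{
		int i = 3;			/* three cells registered */

		while (i--)			/* visits 2, 1, 0 */
			printf("drop cells[%d]\n", i);
		/* the old "while (--i)" would visit only 2 and 1,
		 * leaking cells[0] */
		return 0;
	}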
diff --git a/drivers/nvmem/lpc18xx_eeprom.c b/drivers/nvmem/lpc18xx_eeprom.c
new file mode 100644
index 0000000..878fce7
--- /dev/null
+++ b/drivers/nvmem/lpc18xx_eeprom.c
@@ -0,0 +1,330 @@
+/*
+ * NXP LPC18xx/LPC43xx EEPROM memory NVMEM driver
+ *
+ * Copyright (c) 2015 Ariel D'Alessandro <ariel@vanguardiasur.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/nvmem-provider.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+
+/* Registers */
+#define LPC18XX_EEPROM_AUTOPROG 0x00c
+#define LPC18XX_EEPROM_AUTOPROG_WORD 0x1
+
+#define LPC18XX_EEPROM_CLKDIV 0x014
+
+#define LPC18XX_EEPROM_PWRDWN 0x018
+#define LPC18XX_EEPROM_PWRDWN_NO 0x0
+#define LPC18XX_EEPROM_PWRDWN_YES 0x1
+
+#define LPC18XX_EEPROM_INTSTAT 0xfe0
+#define LPC18XX_EEPROM_INTSTAT_END_OF_PROG BIT(2)
+
+#define LPC18XX_EEPROM_INTSTATCLR 0xfe8
+#define LPC18XX_EEPROM_INTSTATCLR_PROG_CLR_ST BIT(2)
+
+/* Fixed page size (bytes) */
+#define LPC18XX_EEPROM_PAGE_SIZE 0x80
+
+/* EEPROM device requires a ~1500 kHz clock (min 800 kHz, max 1600 kHz) */
+#define LPC18XX_EEPROM_CLOCK_HZ 1500000
+
+/* EEPROM requires 3 ms of erase/program time between writes */
+#define LPC18XX_EEPROM_PROGRAM_TIME 3
+
+struct lpc18xx_eeprom_dev {
+ struct clk *clk;
+ void __iomem *reg_base;
+ void __iomem *mem_base;
+ struct nvmem_device *nvmem;
+ unsigned reg_bytes;
+ unsigned val_bytes;
+};
+
+static struct regmap_config lpc18xx_regmap_config = {
+ .reg_bits = 32,
+ .reg_stride = 4,
+ .val_bits = 32,
+};
+
+static inline void lpc18xx_eeprom_writel(struct lpc18xx_eeprom_dev *eeprom,
+ u32 reg, u32 val)
+{
+ writel(val, eeprom->reg_base + reg);
+}
+
+static inline u32 lpc18xx_eeprom_readl(struct lpc18xx_eeprom_dev *eeprom,
+ u32 reg)
+{
+ return readl(eeprom->reg_base + reg);
+}
+
+static int lpc18xx_eeprom_busywait_until_prog(struct lpc18xx_eeprom_dev *eeprom)
+{
+ unsigned long end;
+ u32 val;
+
+ /* Wait until EEPROM program operation has finished */
+ end = jiffies + msecs_to_jiffies(LPC18XX_EEPROM_PROGRAM_TIME * 10);
+
+ while (time_is_after_jiffies(end)) {
+ val = lpc18xx_eeprom_readl(eeprom, LPC18XX_EEPROM_INTSTAT);
+
+ if (val & LPC18XX_EEPROM_INTSTAT_END_OF_PROG) {
+ lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_INTSTATCLR,
+ LPC18XX_EEPROM_INTSTATCLR_PROG_CLR_ST);
+ return 0;
+ }
+
+ usleep_range(LPC18XX_EEPROM_PROGRAM_TIME * USEC_PER_MSEC,
+ (LPC18XX_EEPROM_PROGRAM_TIME + 1) * USEC_PER_MSEC);
+ }
+
+ return -ETIMEDOUT;
+}
+
+static int lpc18xx_eeprom_gather_write(void *context, const void *reg,
+ size_t reg_size, const void *val,
+ size_t val_size)
+{
+ struct lpc18xx_eeprom_dev *eeprom = context;
+ unsigned int offset = *(u32 *)reg;
+ int ret;
+
+ if (offset % lpc18xx_regmap_config.reg_stride)
+ return -EINVAL;
+
+ lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN,
+ LPC18XX_EEPROM_PWRDWN_NO);
+
+ /* Wait 100 us while the EEPROM wakes up */
+ usleep_range(100, 200);
+
+ while (val_size) {
+ writel(*(u32 *)val, eeprom->mem_base + offset);
+ ret = lpc18xx_eeprom_busywait_until_prog(eeprom);
+ if (ret < 0)
+ return ret;
+
+ val_size -= eeprom->val_bytes;
+ val += eeprom->val_bytes;
+ offset += eeprom->val_bytes;
+ }
+
+ lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN,
+ LPC18XX_EEPROM_PWRDWN_YES);
+
+ return 0;
+}
+
+static int lpc18xx_eeprom_write(void *context, const void *data, size_t count)
+{
+ struct lpc18xx_eeprom_dev *eeprom = context;
+ unsigned int offset = eeprom->reg_bytes;
+
+ if (count <= offset)
+ return -EINVAL;
+
+ return lpc18xx_eeprom_gather_write(context, data, eeprom->reg_bytes,
+ data + offset, count - offset);
+}
+
+static int lpc18xx_eeprom_read(void *context, const void *reg, size_t reg_size,
+ void *val, size_t val_size)
+{
+ struct lpc18xx_eeprom_dev *eeprom = context;
+ unsigned int offset = *(u32 *)reg;
+
+ lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN,
+ LPC18XX_EEPROM_PWRDWN_NO);
+
+ /* Wait 100 us while the EEPROM wakes up */
+ usleep_range(100, 200);
+
+ while (val_size) {
+ *(u32 *)val = readl(eeprom->mem_base + offset);
+ val_size -= eeprom->val_bytes;
+ val += eeprom->val_bytes;
+ offset += eeprom->val_bytes;
+ }
+
+ lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN,
+ LPC18XX_EEPROM_PWRDWN_YES);
+
+ return 0;
+}
+
+static struct regmap_bus lpc18xx_eeprom_bus = {
+ .write = lpc18xx_eeprom_write,
+ .gather_write = lpc18xx_eeprom_gather_write,
+ .read = lpc18xx_eeprom_read,
+ .reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ .val_format_endian_default = REGMAP_ENDIAN_NATIVE,
+};
+
+static bool lpc18xx_eeprom_writeable_reg(struct device *dev, unsigned int reg)
+{
+ /*
+ * The last page contains the EEPROM initialization data and is not
+ * writable.
+ */
+ return reg <= lpc18xx_regmap_config.max_register -
+ LPC18XX_EEPROM_PAGE_SIZE;
+}
+
+static bool lpc18xx_eeprom_readable_reg(struct device *dev, unsigned int reg)
+{
+ return reg <= lpc18xx_regmap_config.max_register;
+}
+
+static struct nvmem_config lpc18xx_nvmem_config = {
+ .name = "lpc18xx-eeprom",
+ .owner = THIS_MODULE,
+};
+
+static int lpc18xx_eeprom_probe(struct platform_device *pdev)
+{
+ struct lpc18xx_eeprom_dev *eeprom;
+ struct device *dev = &pdev->dev;
+ struct reset_control *rst;
+ unsigned long clk_rate;
+ struct regmap *regmap;
+ struct resource *res;
+ int ret;
+
+ eeprom = devm_kzalloc(dev, sizeof(*eeprom), GFP_KERNEL);
+ if (!eeprom)
+ return -ENOMEM;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "reg");
+ eeprom->reg_base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(eeprom->reg_base))
+ return PTR_ERR(eeprom->reg_base);
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mem");
+ eeprom->mem_base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(eeprom->mem_base))
+ return PTR_ERR(eeprom->mem_base);
+
+ eeprom->clk = devm_clk_get(&pdev->dev, "eeprom");
+ if (IS_ERR(eeprom->clk)) {
+ dev_err(&pdev->dev, "failed to get eeprom clock\n");
+ return PTR_ERR(eeprom->clk);
+ }
+
+ ret = clk_prepare_enable(eeprom->clk);
+ if (ret < 0) {
+ dev_err(dev, "failed to prepare/enable eeprom clk: %d\n", ret);
+ return ret;
+ }
+
+ rst = devm_reset_control_get(dev, NULL);
+ if (IS_ERR(rst)) {
+ dev_err(dev, "failed to get reset: %ld\n", PTR_ERR(rst));
+ ret = PTR_ERR(rst);
+ goto err_clk;
+ }
+
+ ret = reset_control_assert(rst);
+ if (ret < 0) {
+ dev_err(dev, "failed to assert reset: %d\n", ret);
+ goto err_clk;
+ }
+
+ eeprom->val_bytes = lpc18xx_regmap_config.val_bits / BITS_PER_BYTE;
+ eeprom->reg_bytes = lpc18xx_regmap_config.reg_bits / BITS_PER_BYTE;
+
+ /*
+ * The EEPROM clock is the system bus clock divided by the factor in
+ * the divider register, which is encoded as the divisor minus one.
+ */
+ clk_rate = clk_get_rate(eeprom->clk);
+ clk_rate = DIV_ROUND_UP(clk_rate, LPC18XX_EEPROM_CLOCK_HZ) - 1;
+ lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_CLKDIV, clk_rate);
+
+ /*
+ * Writing a single word to the page will start the erase/program cycle
+ * automatically.
+ */
+ lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_AUTOPROG,
+ LPC18XX_EEPROM_AUTOPROG_WORD);
+
+ lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN,
+ LPC18XX_EEPROM_PWRDWN_YES);
+
+ lpc18xx_regmap_config.max_register = resource_size(res) - 1;
+ lpc18xx_regmap_config.writeable_reg = lpc18xx_eeprom_writeable_reg;
+ lpc18xx_regmap_config.readable_reg = lpc18xx_eeprom_readable_reg;
+
+ regmap = devm_regmap_init(dev, &lpc18xx_eeprom_bus, eeprom,
+ &lpc18xx_regmap_config);
+ if (IS_ERR(regmap)) {
+ dev_err(dev, "regmap init failed: %ld\n", PTR_ERR(regmap));
+ ret = PTR_ERR(regmap);
+ goto err_clk;
+ }
+
+ lpc18xx_nvmem_config.dev = dev;
+
+ eeprom->nvmem = nvmem_register(&lpc18xx_nvmem_config);
+ if (IS_ERR(eeprom->nvmem)) {
+ ret = PTR_ERR(eeprom->nvmem);
+ goto err_clk;
+ }
+
+ platform_set_drvdata(pdev, eeprom);
+
+ return 0;
+
+err_clk:
+ clk_disable_unprepare(eeprom->clk);
+
+ return ret;
+}
+
+static int lpc18xx_eeprom_remove(struct platform_device *pdev)
+{
+ struct lpc18xx_eeprom_dev *eeprom = platform_get_drvdata(pdev);
+ int ret;
+
+ ret = nvmem_unregister(eeprom->nvmem);
+ if (ret < 0)
+ return ret;
+
+ clk_disable_unprepare(eeprom->clk);
+
+ return 0;
+}
+
+static const struct of_device_id lpc18xx_eeprom_of_match[] = {
+ { .compatible = "nxp,lpc1857-eeprom" },
+ { },
+};
+MODULE_DEVICE_TABLE(of, lpc18xx_eeprom_of_match);
+
+static struct platform_driver lpc18xx_eeprom_driver = {
+ .probe = lpc18xx_eeprom_probe,
+ .remove = lpc18xx_eeprom_remove,
+ .driver = {
+ .name = "lpc18xx-eeprom",
+ .of_match_table = lpc18xx_eeprom_of_match,
+ },
+};
+
+module_platform_driver(lpc18xx_eeprom_driver);
+
+MODULE_AUTHOR("Ariel D'Alessandro <ariel@vanguardiasur.com.ar>");
+MODULE_DESCRIPTION("NXP LPC18xx EEPROM memory Driver");
+MODULE_LICENSE("GPL v2");
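On the CLKDIV setup in the probe above: the register holds the divisor
minus one. A worked example, assuming a 120 MHz system bus clock (the real
rate is board-specific):

	#include <stdio.h>

	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	int main(void)
	{
		unsigned long bus_hz = 120000000UL;	/* assumed bus clock */
		unsigned long div = DIV_ROUND_UP(bus_hz, 1500000UL) - 1;

		/* prints "CLKDIV = 79 -> 1500000 Hz" */
		printf("CLKDIV = %lu -> %lu Hz\n", div, bus_hz / (div + 1));
		return 0;
	}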
diff --git a/drivers/nvmem/mtk-efuse.c b/drivers/nvmem/mtk-efuse.c
new file mode 100644
index 0000000..7b35f5b
--- /dev/null
+++ b/drivers/nvmem/mtk-efuse.c
@@ -0,0 +1,89 @@
+/*
+ * Copyright (c) 2015 MediaTek Inc.
+ * Author: Andrew-CT Chen <andrew-ct.chen@mediatek.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/nvmem-provider.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+static struct regmap_config mtk_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+};
+
+static int mtk_efuse_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct resource *res;
+ struct nvmem_device *nvmem;
+ struct nvmem_config *econfig;
+ struct regmap *regmap;
+ void __iomem *base;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ econfig = devm_kzalloc(dev, sizeof(*econfig), GFP_KERNEL);
+ if (!econfig)
+ return -ENOMEM;
+
+ mtk_regmap_config.max_register = resource_size(res) - 1;
+
+ regmap = devm_regmap_init_mmio(dev, base, &mtk_regmap_config);
+ if (IS_ERR(regmap)) {
+ dev_err(dev, "regmap init failed\n");
+ return PTR_ERR(regmap);
+ }
+
+ econfig->dev = dev;
+ econfig->owner = THIS_MODULE;
+ nvmem = nvmem_register(econfig);
+ if (IS_ERR(nvmem))
+ return PTR_ERR(nvmem);
+
+ platform_set_drvdata(pdev, nvmem);
+
+ return 0;
+}
+
+static int mtk_efuse_remove(struct platform_device *pdev)
+{
+ struct nvmem_device *nvmem = platform_get_drvdata(pdev);
+
+ return nvmem_unregister(nvmem);
+}
+
+static const struct of_device_id mtk_efuse_of_match[] = {
+ { .compatible = "mediatek,mt8173-efuse",},
+ { .compatible = "mediatek,efuse",},
+ {/* sentinel */},
+};
+MODULE_DEVICE_TABLE(of, mtk_efuse_of_match);
+
+static struct platform_driver mtk_efuse_driver = {
+ .probe = mtk_efuse_probe,
+ .remove = mtk_efuse_remove,
+ .driver = {
+ .name = "mediatek,efuse",
+ .of_match_table = mtk_efuse_of_match,
+ },
+};
+module_platform_driver(mtk_efuse_driver);
+MODULE_AUTHOR("Andrew-CT Chen <andrew-ct.chen@mediatek.com>");
+MODULE_DESCRIPTION("Mediatek EFUSE driver");
+MODULE_LICENSE("GPL v2");
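For context on how the eFuse above gets used: other drivers read it
through the nvmem consumer API. A hedged sketch (the cell name
"calib-data" is made up; real cells would be described in the devicetree):

	struct nvmem_cell *cell;
	size_t len;
	void *buf;

	cell = nvmem_cell_get(dev, "calib-data");	/* hypothetical cell */
	if (IS_ERR(cell))
		return PTR_ERR(cell);

	buf = nvmem_cell_read(cell, &len);	/* kmalloc'd copy of the cell */
	nvmem_cell_put(cell);
	if (IS_ERR(buf))
		return PTR_ERR(buf);
	/* ... use len bytes of calibration data, then kfree(buf) ... */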
diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
index afb67e7..3829e5f 100644
--- a/drivers/nvmem/qfprom.c
+++ b/drivers/nvmem/qfprom.c
@@ -21,6 +21,7 @@
.reg_bits = 32,
.val_bits = 8,
.reg_stride = 1,
+ .val_format_endian = REGMAP_ENDIAN_LITTLE,
};
static struct nvmem_config econfig = {
diff --git a/drivers/nvmem/rockchip-efuse.c b/drivers/nvmem/rockchip-efuse.c
index f552134..a009795 100644
--- a/drivers/nvmem/rockchip-efuse.c
+++ b/drivers/nvmem/rockchip-efuse.c
@@ -14,16 +14,16 @@
* more details.
*/
-#include <linux/platform_device.h>
-#include <linux/nvmem-provider.h>
-#include <linux/slab.h>
-#include <linux/regmap.h>
+#include <linux/clk.h>
+#include <linux/delay.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/module.h>
-#include <linux/delay.h>
+#include <linux/nvmem-provider.h>
+#include <linux/slab.h>
#include <linux/of.h>
-#include <linux/clk.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
#define EFUSE_A_SHIFT 6
#define EFUSE_A_MASK 0x3ff
@@ -35,10 +35,10 @@
#define REG_EFUSE_CTRL 0x0000
#define REG_EFUSE_DOUT 0x0004
-struct rockchip_efuse_context {
+struct rockchip_efuse_chip {
struct device *dev;
void __iomem *base;
- struct clk *efuse_clk;
+ struct clk *clk;
};
static int rockchip_efuse_write(void *context, const void *data, size_t count)
@@ -52,34 +52,32 @@
void *val, size_t val_size)
{
unsigned int offset = *(u32 *)reg;
- struct rockchip_efuse_context *_context = context;
- void __iomem *base = _context->base;
- struct clk *clk = _context->efuse_clk;
+ struct rockchip_efuse_chip *efuse = context;
u8 *buf = val;
int ret;
- ret = clk_prepare_enable(clk);
+ ret = clk_prepare_enable(efuse->clk);
if (ret < 0) {
- dev_err(_context->dev, "failed to prepare/enable efuse clk\n");
+ dev_err(efuse->dev, "failed to prepare/enable efuse clk\n");
return ret;
}
- writel(EFUSE_LOAD | EFUSE_PGENB, base + REG_EFUSE_CTRL);
+ writel(EFUSE_LOAD | EFUSE_PGENB, efuse->base + REG_EFUSE_CTRL);
udelay(1);
while (val_size) {
- writel(readl(base + REG_EFUSE_CTRL) &
+ writel(readl(efuse->base + REG_EFUSE_CTRL) &
(~(EFUSE_A_MASK << EFUSE_A_SHIFT)),
- base + REG_EFUSE_CTRL);
- writel(readl(base + REG_EFUSE_CTRL) |
+ efuse->base + REG_EFUSE_CTRL);
+ writel(readl(efuse->base + REG_EFUSE_CTRL) |
((offset & EFUSE_A_MASK) << EFUSE_A_SHIFT),
- base + REG_EFUSE_CTRL);
+ efuse->base + REG_EFUSE_CTRL);
udelay(1);
- writel(readl(base + REG_EFUSE_CTRL) |
- EFUSE_STROBE, base + REG_EFUSE_CTRL);
+ writel(readl(efuse->base + REG_EFUSE_CTRL) |
+ EFUSE_STROBE, efuse->base + REG_EFUSE_CTRL);
udelay(1);
- *buf++ = readb(base + REG_EFUSE_DOUT);
- writel(readl(base + REG_EFUSE_CTRL) &
- (~EFUSE_STROBE), base + REG_EFUSE_CTRL);
+ *buf++ = readb(efuse->base + REG_EFUSE_DOUT);
+ writel(readl(efuse->base + REG_EFUSE_CTRL) &
+ (~EFUSE_STROBE), efuse->base + REG_EFUSE_CTRL);
udelay(1);
val_size -= 1;
@@ -87,9 +85,9 @@
}
/* Switch to standby mode */
- writel(EFUSE_PGENB | EFUSE_CSB, base + REG_EFUSE_CTRL);
+ writel(EFUSE_PGENB | EFUSE_CSB, efuse->base + REG_EFUSE_CTRL);
- clk_disable_unprepare(clk);
+ clk_disable_unprepare(efuse->clk);
return 0;
}
@@ -114,48 +112,44 @@
};
static const struct of_device_id rockchip_efuse_match[] = {
- { .compatible = "rockchip,rockchip-efuse",},
+ { .compatible = "rockchip,rockchip-efuse", },
{ /* sentinel */},
};
MODULE_DEVICE_TABLE(of, rockchip_efuse_match);
static int rockchip_efuse_probe(struct platform_device *pdev)
{
- struct device *dev = &pdev->dev;
struct resource *res;
struct nvmem_device *nvmem;
struct regmap *regmap;
- void __iomem *base;
- struct clk *clk;
- struct rockchip_efuse_context *context;
+ struct rockchip_efuse_chip *efuse;
+
+ efuse = devm_kzalloc(&pdev->dev, sizeof(struct rockchip_efuse_chip),
+ GFP_KERNEL);
+ if (!efuse)
+ return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- base = devm_ioremap_resource(dev, res);
- if (IS_ERR(base))
- return PTR_ERR(base);
+ efuse->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(efuse->base))
+ return PTR_ERR(efuse->base);
- context = devm_kzalloc(dev, sizeof(struct rockchip_efuse_context),
- GFP_KERNEL);
- if (IS_ERR(context))
- return PTR_ERR(context);
+ efuse->clk = devm_clk_get(&pdev->dev, "pclk_efuse");
+ if (IS_ERR(efuse->clk))
+ return PTR_ERR(efuse->clk);
- clk = devm_clk_get(dev, "pclk_efuse");
- if (IS_ERR(clk))
- return PTR_ERR(clk);
-
- context->dev = dev;
- context->base = base;
- context->efuse_clk = clk;
+ efuse->dev = &pdev->dev;
rockchip_efuse_regmap_config.max_register = resource_size(res) - 1;
- regmap = devm_regmap_init(dev, &rockchip_efuse_bus,
- context, &rockchip_efuse_regmap_config);
+ regmap = devm_regmap_init(efuse->dev, &rockchip_efuse_bus,
+ efuse, &rockchip_efuse_regmap_config);
if (IS_ERR(regmap)) {
- dev_err(dev, "regmap init failed\n");
+ dev_err(efuse->dev, "regmap init failed\n");
return PTR_ERR(regmap);
}
- econfig.dev = dev;
+
+ econfig.dev = efuse->dev;
nvmem = nvmem_register(&econfig);
if (IS_ERR(nvmem))
return PTR_ERR(nvmem);
diff --git a/drivers/nvmem/sunxi_sid.c b/drivers/nvmem/sunxi_sid.c
index cfa3b85..bc88b40 100644
--- a/drivers/nvmem/sunxi_sid.c
+++ b/drivers/nvmem/sunxi_sid.c
@@ -13,10 +13,8 @@
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
*/
-
#include <linux/device.h>
#include <linux/io.h>
#include <linux/module.h>
@@ -27,7 +25,6 @@
#include <linux/slab.h>
#include <linux/random.h>
-
static struct nvmem_config econfig = {
.name = "sunxi-sid",
.read_only = true,
@@ -55,8 +52,8 @@
}
static int sunxi_sid_read(void *context,
- const void *reg, size_t reg_size,
- void *val, size_t val_size)
+ const void *reg, size_t reg_size,
+ void *val, size_t val_size)
{
struct sunxi_sid *sid = context;
unsigned int offset = *(u32 *)reg;
@@ -130,7 +127,7 @@
if (IS_ERR(nvmem))
return PTR_ERR(nvmem);
- randomness = kzalloc(sizeof(u8) * size, GFP_KERNEL);
+ randomness = kzalloc(sizeof(u8) * (size), GFP_KERNEL);
if (!randomness) {
ret = -EINVAL;
goto err_unreg_nvmem;
diff --git a/drivers/of/irq.c b/drivers/of/irq.c
index 7ee21ae..e7bfc17 100644
--- a/drivers/of/irq.c
+++ b/drivers/of/irq.c
@@ -635,6 +635,13 @@
msi_base = be32_to_cpup(msi_map + 2);
rid_len = be32_to_cpup(msi_map + 3);
+ if (rid_base & ~map_mask) {
+ dev_err(parent_dev,
+ "Invalid msi-map translation - msi-map-mask (0x%x) ignores rid-base (0x%x)\n",
+ map_mask, rid_base);
+ return rid_out;
+ }
+
msi_controller_node = of_find_node_by_phandle(phandle);
matched = (masked_rid >= rid_base &&
@@ -654,7 +661,7 @@
if (!matched)
return rid_out;
- rid_out = masked_rid + msi_base;
+ rid_out = masked_rid - rid_base + msi_base;
dev_dbg(dev,
"msi-map at: %s, using mask %08x, rid-base: %08x, msi-base: %08x, length: %08x, rid: %08x -> %08x\n",
dev_name(parent_dev), map_mask, rid_base, msi_base,
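The translation fix above offsets the RID within the window instead of
just adding msi-base to the masked RID. A worked example with made-up
values (rid-base 0x100, msi-base 0x200, length 0x40, mask 0xffff):

	#include <stdio.h>

	int main(void)
	{
		unsigned rid_base = 0x100, msi_base = 0x200, rid = 0x103;

		/* rid 0x103 falls inside [0x100, 0x140) */
		printf("new: 0x%x\n", rid - rid_base + msi_base); /* 0x203 */
		printf("old: 0x%x\n", rid + msi_base);		  /* 0x303 */
		return 0;
	}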
diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c
index 5648317..39c4be4 100644
--- a/drivers/of/of_mdio.c
+++ b/drivers/of/of_mdio.c
@@ -154,6 +154,7 @@
{ .compatible = "marvell,88E1111", },
{ .compatible = "marvell,88e1116", },
{ .compatible = "marvell,88e1118", },
+ { .compatible = "marvell,88e1145", },
{ .compatible = "marvell,88e1149r", },
{ .compatible = "marvell,88e1310", },
{ .compatible = "marvell,88E1510", },
diff --git a/drivers/pci/host/Kconfig b/drivers/pci/host/Kconfig
index 75a6054..d1cdd9c 100644
--- a/drivers/pci/host/Kconfig
+++ b/drivers/pci/host/Kconfig
@@ -14,6 +14,7 @@
config PCI_MVEBU
bool "Marvell EBU PCIe controller"
depends on ARCH_MVEBU || ARCH_DOVE
+ depends on ARM
depends on OF
config PCIE_DW
diff --git a/drivers/pci/host/pcie-iproc.c b/drivers/pci/host/pcie-iproc.c
index 5816bce..a576aee 100644
--- a/drivers/pci/host/pcie-iproc.c
+++ b/drivers/pci/host/pcie-iproc.c
@@ -64,7 +64,6 @@
#define OARR_SIZE_CFG BIT(OARR_SIZE_CFG_SHIFT)
#define MAX_NUM_OB_WINDOWS 2
-#define MAX_NUM_PAXC_PF 4
#define IPROC_PCIE_REG_INVALID 0xffff
@@ -170,20 +169,6 @@
writel(val, pcie->base + offset + (window * 8));
}
-static inline bool iproc_pcie_device_is_valid(struct iproc_pcie *pcie,
- unsigned int slot,
- unsigned int fn)
-{
- if (slot > 0)
- return false;
-
- /* PAXC can only support limited number of functions */
- if (pcie->type == IPROC_PCIE_PAXC && fn >= MAX_NUM_PAXC_PF)
- return false;
-
- return true;
-}
-
/**
* Note access to the configuration registers are protected at the higher layer
* by 'pci_lock' in drivers/pci/access.c
@@ -199,11 +184,11 @@
u32 val;
u16 offset;
- if (!iproc_pcie_device_is_valid(pcie, slot, fn))
- return NULL;
-
/* root complex access */
if (busno == 0) {
+ if (slot > 0 || fn > 0)
+ return NULL;
+
iproc_pcie_write_reg(pcie, IPROC_PCIE_CFG_IND_ADDR,
where & CFG_IND_ADDR_MASK);
offset = iproc_pcie_reg_offset(pcie, IPROC_PCIE_CFG_IND_DATA);
@@ -213,6 +198,14 @@
return (pcie->base + offset);
}
+ /*
+ * PAXC is connected to an internally emulated EP within the SoC. It
+ * allows only one device.
+ */
+ if (pcie->type == IPROC_PCIE_PAXC)
+ if (slot > 0)
+ return NULL;
+
/* EP device access */
val = (busno << CFG_ADDR_BUS_NUM_SHIFT) |
(slot << CFG_ADDR_DEV_NUM_SHIFT) |
diff --git a/drivers/pci/pcie/aer/aerdrv.c b/drivers/pci/pcie/aer/aerdrv.c
index 0bf82a2..48d21e0 100644
--- a/drivers/pci/pcie/aer/aerdrv.c
+++ b/drivers/pci/pcie/aer/aerdrv.c
@@ -262,7 +262,6 @@
rpc->rpd = dev;
INIT_WORK(&rpc->dpc_handler, aer_isr);
mutex_init(&rpc->rpc_mutex);
- init_waitqueue_head(&rpc->wait_release);
/* Use PCIe bus function to store rpc into PCIe device */
set_service_data(dev, rpc);
@@ -285,8 +284,7 @@
if (rpc->isr)
free_irq(dev->irq, dev);
- wait_event(rpc->wait_release, rpc->prod_idx == rpc->cons_idx);
-
+ flush_work(&rpc->dpc_handler);
aer_disable_rootport(rpc);
kfree(rpc);
set_service_data(dev, NULL);
diff --git a/drivers/pci/pcie/aer/aerdrv.h b/drivers/pci/pcie/aer/aerdrv.h
index 84420b7..945c939 100644
--- a/drivers/pci/pcie/aer/aerdrv.h
+++ b/drivers/pci/pcie/aer/aerdrv.h
@@ -72,7 +72,6 @@
* recovery on the same
* root port hierarchy
*/
- wait_queue_head_t wait_release;
};
struct aer_broadcast_data {
diff --git a/drivers/pci/pcie/aer/aerdrv_core.c b/drivers/pci/pcie/aer/aerdrv_core.c
index 7123925..521e39c 100644
--- a/drivers/pci/pcie/aer/aerdrv_core.c
+++ b/drivers/pci/pcie/aer/aerdrv_core.c
@@ -811,8 +811,6 @@
while (get_e_source(rpc, &e_src))
aer_isr_one_error(p_device, &e_src);
mutex_unlock(&rpc->rpc_mutex);
-
- wake_up(&rpc->wait_release);
}
/**
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index c777b97..5f70fee 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -53,7 +53,7 @@
};
struct pcifront_sd {
- int domain;
+ struct pci_sysdata sd;
struct pcifront_device *pdev;
};
@@ -67,7 +67,9 @@
unsigned int domain, unsigned int bus,
struct pcifront_device *pdev)
{
- sd->domain = domain;
+ /* Because we do not expose that information via XenBus. */
+ sd->sd.node = first_online_node;
+ sd->sd.domain = domain;
sd->pdev = pdev;
}
@@ -468,8 +470,8 @@
dev_info(&pdev->xdev->dev, "Creating PCI Frontend Bus %04x:%02x\n",
domain, bus);
- bus_entry = kmalloc(sizeof(*bus_entry), GFP_KERNEL);
- sd = kmalloc(sizeof(*sd), GFP_KERNEL);
+ bus_entry = kzalloc(sizeof(*bus_entry), GFP_KERNEL);
+ sd = kzalloc(sizeof(*sd), GFP_KERNEL);
if (!bus_entry || !sd) {
err = -ENOMEM;
goto err_out;
diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig
index e7e117d..0124d17 100644
--- a/drivers/phy/Kconfig
+++ b/drivers/phy/Kconfig
@@ -224,6 +224,7 @@
config PHY_HI6220_USB
tristate "hi6220 USB PHY support"
+ depends on (ARCH_HISI && ARM64) || COMPILE_TEST
select GENERIC_PHY
select MFD_SYSCON
help
diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c
index 8c7f27d..e7e574d 100644
--- a/drivers/phy/phy-core.c
+++ b/drivers/phy/phy-core.c
@@ -275,20 +275,21 @@
int phy_power_on(struct phy *phy)
{
- int ret;
+ int ret = 0;
if (!phy)
- return 0;
+ goto out;
if (phy->pwr) {
ret = regulator_enable(phy->pwr);
if (ret)
- return ret;
+ goto out;
}
ret = phy_pm_runtime_get_sync(phy);
if (ret < 0 && ret != -ENOTSUPP)
- return ret;
+ goto err_pm_sync;
+
ret = 0; /* Override possible ret == -ENOTSUPP */
mutex_lock(&phy->mutex);
@@ -296,19 +297,20 @@
ret = phy->ops->power_on(phy);
if (ret < 0) {
dev_err(&phy->dev, "phy poweron failed --> %d\n", ret);
- goto out;
+ goto err_pwr_on;
}
}
++phy->power_count;
mutex_unlock(&phy->mutex);
return 0;
-out:
+err_pwr_on:
mutex_unlock(&phy->mutex);
phy_pm_runtime_put_sync(phy);
+err_pm_sync:
if (phy->pwr)
regulator_disable(phy->pwr);
-
+out:
return ret;
}
EXPORT_SYMBOL_GPL(phy_power_on);
diff --git a/drivers/phy/phy-twl4030-usb.c b/drivers/phy/phy-twl4030-usb.c
index 4a3fc6e..840f3ea 100644
--- a/drivers/phy/phy-twl4030-usb.c
+++ b/drivers/phy/phy-twl4030-usb.c
@@ -715,6 +715,7 @@
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
pm_runtime_enable(&pdev->dev);
+ pm_runtime_get_sync(&pdev->dev);
/* Our job is to use irqs and status from the power module
* to keep the transceiver disabled when nothing's connected.
@@ -750,6 +751,7 @@
struct twl4030_usb *twl = platform_get_drvdata(pdev);
int val;
+ usb_remove_phy(&twl->phy);
pm_runtime_get_sync(twl->dev);
cancel_delayed_work(&twl->id_workaround_work);
device_remove_file(twl->dev, &dev_attr_vbus);
@@ -757,6 +759,13 @@
/* set transceiver mode to power on defaults */
twl4030_usb_set_mode(twl, -1);
+ /* idle ulpi before powering off */
+ if (cable_present(twl->linkstat))
+ pm_runtime_put_noidle(twl->dev);
+ pm_runtime_mark_last_busy(twl->dev);
+ pm_runtime_put_sync_suspend(twl->dev);
+ pm_runtime_disable(twl->dev);
+
/* autogate 60MHz ULPI clock,
* clear dpll clock request for i2c access,
* disable 32KHz
@@ -771,11 +780,6 @@
/* disable complete OTG block */
twl4030_usb_clear_bits(twl, POWER_CTRL, POWER_CTRL_OTG_ENAB);
- if (cable_present(twl->linkstat))
- pm_runtime_put_noidle(twl->dev);
- pm_runtime_mark_last_busy(twl->dev);
- pm_runtime_put(twl->dev);
-
return 0;
}
diff --git a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
index 16d48a4..e96e86d 100644
--- a/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
+++ b/drivers/pinctrl/mediatek/pinctrl-mtk-common.c
@@ -347,6 +347,7 @@
ret = mtk_pconf_set_pull_select(pctl, pin, true, false, arg);
break;
case PIN_CONFIG_INPUT_ENABLE:
+ mtk_pmx_gpio_set_direction(pctldev, NULL, pin, true);
ret = mtk_pconf_set_ies_smt(pctl, pin, arg, param);
break;
case PIN_CONFIG_OUTPUT:
@@ -354,6 +355,7 @@
ret = mtk_pmx_gpio_set_direction(pctldev, NULL, pin, false);
break;
case PIN_CONFIG_INPUT_SCHMITT_ENABLE:
+ mtk_pmx_gpio_set_direction(pctldev, NULL, pin, true);
ret = mtk_pconf_set_ies_smt(pctl, pin, arg, param);
break;
case PIN_CONFIG_DRIVE_STRENGTH:
diff --git a/drivers/pinctrl/mvebu/pinctrl-mvebu.c b/drivers/pinctrl/mvebu/pinctrl-mvebu.c
index e4d4738..3ef798f 100644
--- a/drivers/pinctrl/mvebu/pinctrl-mvebu.c
+++ b/drivers/pinctrl/mvebu/pinctrl-mvebu.c
@@ -666,16 +666,19 @@
struct mvebu_mpp_ctrl_setting *set = &mode->settings[0];
struct mvebu_pinctrl_group *grp;
unsigned num_settings;
+ unsigned supp_settings;
- for (num_settings = 0; ; set++) {
+ for (num_settings = 0, supp_settings = 0; ; set++) {
if (!set->name)
break;
+ num_settings++;
+
/* skip unsupported settings for this variant */
if (pctl->variant && !(pctl->variant & set->variant))
continue;
- num_settings++;
+ supp_settings++;
/* find gpio/gpo/gpi settings */
if (strcmp(set->name, "gpio") == 0)
@@ -688,7 +691,7 @@
}
/* skip modes with no settings for this variant */
- if (!num_settings)
+ if (!supp_settings)
continue;
grp = mvebu_pinctrl_find_group_by_pid(pctl, mode->pid);
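On the counter split above: num_settings must record the full table length
(it is what later iteration over the group's settings relies on), while
only the count of variant-supported settings should decide whether the
mode is kept. A generic sketch, with supported() standing in for the
variant check:

	for (num = 0, supp = 0; set->name; set++) {
		num++;				/* always count the entry */
		if (!supported(pctl, set))	/* stand-in variant check */
			continue;
		supp++;
	}
	if (!supp)
		continue;	/* no setting usable on this variant */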
diff --git a/drivers/pinctrl/nomadik/pinctrl-abx500.c b/drivers/pinctrl/nomadik/pinctrl-abx500.c
index 085e601..1f7469c 100644
--- a/drivers/pinctrl/nomadik/pinctrl-abx500.c
+++ b/drivers/pinctrl/nomadik/pinctrl-abx500.c
@@ -191,6 +191,7 @@
dev_err(pct->dev, "%s write failed (%d)\n", __func__, ret);
}
+#ifdef CONFIG_DEBUG_FS
static int abx500_get_pull_updown(struct abx500_pinctrl *pct, int offset,
enum abx500_gpio_pull_updown *pull_updown)
{
@@ -226,6 +227,7 @@
return ret;
}
+#endif
static int abx500_set_pull_updown(struct abx500_pinctrl *pct,
int offset, enum abx500_gpio_pull_updown val)
@@ -468,6 +470,7 @@
return ret;
}
+#ifdef CONFIG_DEBUG_FS
static int abx500_get_mode(struct pinctrl_dev *pctldev, struct gpio_chip *chip,
unsigned gpio)
{
@@ -553,8 +556,6 @@
return ret;
}
-#ifdef CONFIG_DEBUG_FS
-
#include <linux/seq_file.h>
static void abx500_gpio_dbg_show_one(struct seq_file *s,
diff --git a/drivers/pinctrl/pxa/pinctrl-pxa2xx.c b/drivers/pinctrl/pxa/pinctrl-pxa2xx.c
index d90e205..216f227 100644
--- a/drivers/pinctrl/pxa/pinctrl-pxa2xx.c
+++ b/drivers/pinctrl/pxa/pinctrl-pxa2xx.c
@@ -426,6 +426,7 @@
return 0;
}
+EXPORT_SYMBOL(pxa2xx_pinctrl_init);
int pxa2xx_pinctrl_exit(struct platform_device *pdev)
{
diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
index f67b1e9..5cc97f8 100644
--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
+++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
@@ -514,25 +514,35 @@
.pin_config_group_set = samsung_pinconf_group_set,
};
-/* gpiolib gpio_set callback function */
-static void samsung_gpio_set(struct gpio_chip *gc, unsigned offset, int value)
+/*
+ * The samsung_gpio_set_value() should be called with "bank->slock" held
+ * to avoid a race condition.
+ */
+static void samsung_gpio_set_value(struct gpio_chip *gc,
+ unsigned offset, int value)
{
struct samsung_pin_bank *bank = gpiochip_get_data(gc);
const struct samsung_pin_bank_type *type = bank->type;
- unsigned long flags;
void __iomem *reg;
u32 data;
reg = bank->drvdata->virt_base + bank->pctl_offset;
- spin_lock_irqsave(&bank->slock, flags);
-
data = readl(reg + type->reg_offset[PINCFG_TYPE_DAT]);
data &= ~(1 << offset);
if (value)
data |= 1 << offset;
writel(data, reg + type->reg_offset[PINCFG_TYPE_DAT]);
+}
+/* gpiolib gpio_set callback function */
+static void samsung_gpio_set(struct gpio_chip *gc, unsigned offset, int value)
+{
+ struct samsung_pin_bank *bank = gpiochip_get_data(gc);
+ unsigned long flags;
+
+ spin_lock_irqsave(&bank->slock, flags);
+ samsung_gpio_set_value(gc, offset, value);
spin_unlock_irqrestore(&bank->slock, flags);
}
@@ -553,6 +563,8 @@
}
/*
+ * The samsung_gpio_set_direction() should be called with "bank->slock" held
+ * to avoid a race condition.
* The calls to gpio_direction_output() and gpio_direction_input()
* leads to this function call.
*/
@@ -564,7 +576,6 @@
struct samsung_pinctrl_drv_data *drvdata;
void __iomem *reg;
u32 data, mask, shift;
- unsigned long flags;
bank = gpiochip_get_data(gc);
type = bank->type;
@@ -581,31 +592,42 @@
reg += 4;
}
- spin_lock_irqsave(&bank->slock, flags);
-
data = readl(reg);
data &= ~(mask << shift);
if (!input)
data |= FUNC_OUTPUT << shift;
writel(data, reg);
- spin_unlock_irqrestore(&bank->slock, flags);
-
return 0;
}
/* gpiolib gpio_direction_input callback function. */
static int samsung_gpio_direction_input(struct gpio_chip *gc, unsigned offset)
{
- return samsung_gpio_set_direction(gc, offset, true);
+ struct samsung_pin_bank *bank = gpiochip_get_data(gc);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&bank->slock, flags);
+ ret = samsung_gpio_set_direction(gc, offset, true);
+ spin_unlock_irqrestore(&bank->slock, flags);
+ return ret;
}
/* gpiolib gpio_direction_output callback function. */
static int samsung_gpio_direction_output(struct gpio_chip *gc, unsigned offset,
int value)
{
- samsung_gpio_set(gc, offset, value);
- return samsung_gpio_set_direction(gc, offset, false);
+ struct samsung_pin_bank *bank = gpiochip_get_data(gc);
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&bank->slock, flags);
+ samsung_gpio_set_value(gc, offset, value);
+ ret = samsung_gpio_set_direction(gc, offset, false);
+ spin_unlock_irqrestore(&bank->slock, flags);
+
+ return ret;
}
/*
diff --git a/drivers/pinctrl/sunxi/pinctrl-sun8i-h3.c b/drivers/pinctrl/sunxi/pinctrl-sun8i-h3.c
index 77d4cf0..11760bb 100644
--- a/drivers/pinctrl/sunxi/pinctrl-sun8i-h3.c
+++ b/drivers/pinctrl/sunxi/pinctrl-sun8i-h3.c
@@ -492,6 +492,7 @@
.pins = sun8i_h3_pins,
.npins = ARRAY_SIZE(sun8i_h3_pins),
.irq_banks = 2,
+ .irq_read_needs_mux = true
};
static int sun8i_h3_pinctrl_probe(struct platform_device *pdev)
diff --git a/drivers/platform/Kconfig b/drivers/platform/Kconfig
index 0adccbf..c11db8b 100644
--- a/drivers/platform/Kconfig
+++ b/drivers/platform/Kconfig
@@ -4,8 +4,7 @@
if MIPS
source "drivers/platform/mips/Kconfig"
endif
-if GOLDFISH
+
source "drivers/platform/goldfish/Kconfig"
-endif
source "drivers/platform/chrome/Kconfig"
diff --git a/drivers/platform/goldfish/Kconfig b/drivers/platform/goldfish/Kconfig
index 635ef25..50331e3 100644
--- a/drivers/platform/goldfish/Kconfig
+++ b/drivers/platform/goldfish/Kconfig
@@ -1,5 +1,23 @@
+menuconfig GOLDFISH
+ bool "Platform support for Goldfish virtual devices"
+ depends on X86_32 || X86_64 || ARM || ARM64 || MIPS
+ ---help---
+ Say Y here to get to see options for the Goldfish virtual platform.
+ This option alone does not add any kernel code.
+
+ Unless you are building for the Android Goldfish emulator, say N here.
+
+if GOLDFISH
+
+config GOLDFISH_BUS
+ bool "Goldfish platform bus"
+ ---help---
+ This is a virtual bus to host Goldfish Android Virtual Devices.
+
config GOLDFISH_PIPE
tristate "Goldfish virtual device for QEMU pipes"
---help---
This is a virtual device to drive the QEMU pipe interface used by
the Goldfish Android Virtual Device.
+
+endif # GOLDFISH
diff --git a/drivers/platform/goldfish/Makefile b/drivers/platform/goldfish/Makefile
index a002239..d348712 100644
--- a/drivers/platform/goldfish/Makefile
+++ b/drivers/platform/goldfish/Makefile
@@ -1,5 +1,5 @@
#
# Makefile for Goldfish platform specific drivers
#
-obj-$(CONFIG_GOLDFISH) += pdev_bus.o
+obj-$(CONFIG_GOLDFISH_BUS) += pdev_bus.o
obj-$(CONFIG_GOLDFISH_PIPE) += goldfish_pipe.o
diff --git a/drivers/platform/goldfish/goldfish_pipe.c b/drivers/platform/goldfish/goldfish_pipe.c
index 9f6734ce..9973ceb 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -2,6 +2,7 @@
* Copyright (C) 2011 Google, Inc.
* Copyright (C) 2012 Intel, Inc.
* Copyright (C) 2013 Intel, Inc.
+ * Copyright (C) 2014 Linaro Limited
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@@ -58,6 +59,8 @@
#include <linux/io.h>
#include <linux/goldfish.h>
#include <linux/dma-mapping.h>
+#include <linux/mm.h>
+#include <linux/acpi.h>
/*
* IMPORTANT: The following constants must match the ones used and defined
@@ -76,6 +79,7 @@
#define PIPE_REG_PARAMS_ADDR_LOW 0x18 /* read/write: batch data address */
#define PIPE_REG_PARAMS_ADDR_HIGH 0x1c /* read/write: batch data address */
#define PIPE_REG_ACCESS_PARAMS 0x20 /* write: batch access */
+#define PIPE_REG_VERSION 0x24 /* read: device version */
/* list of commands for PIPE_REG_COMMAND */
#define CMD_OPEN 1 /* open new channel */
@@ -91,12 +95,6 @@
#define CMD_WRITE_BUFFER 4 /* send a user buffer to the emulator */
#define CMD_WAKE_ON_WRITE 5 /* tell the emulator to wake us when writing
is possible */
-
-/* The following commands are related to read operations, they must be
- * listed in the same order than the corresponding write ones, since we
- * will use (CMD_READ_BUFFER - CMD_WRITE_BUFFER) as a special offset
- * in goldfish_pipe_read_write() below.
- */
#define CMD_READ_BUFFER 6 /* receive a user buffer from the emulator */
#define CMD_WAKE_ON_READ 7 /* tell the emulator to wake us when reading
* is possible */
@@ -131,6 +129,7 @@
unsigned char __iomem *base;
struct access_params *aps;
int irq;
+ u32 version;
};
static struct goldfish_pipe_dev pipe_dev[1];
@@ -263,19 +262,14 @@
return 0;
}
-/* This function is used for both reading from and writing to a given
- * pipe.
- */
static ssize_t goldfish_pipe_read_write(struct file *filp, char __user *buffer,
- size_t bufflen, int is_write)
+ size_t bufflen, int is_write)
{
unsigned long irq_flags;
struct goldfish_pipe *pipe = filp->private_data;
struct goldfish_pipe_dev *dev = pipe->dev;
- const int cmd_offset = is_write ? 0
- : (CMD_READ_BUFFER - CMD_WRITE_BUFFER);
unsigned long address, address_end;
- int ret = 0;
+ int count = 0, ret = -EINVAL;
/* If the emulator already closed the pipe, no need to go further */
if (test_bit(BIT_CLOSED_ON_HOST, &pipe->flags))
@@ -298,79 +292,107 @@
address_end = address + bufflen;
while (address < address_end) {
- unsigned long page_end = (address & PAGE_MASK) + PAGE_SIZE;
- unsigned long next = page_end < address_end ? page_end
- : address_end;
- unsigned long avail = next - address;
+ unsigned long page_end = (address & PAGE_MASK) + PAGE_SIZE;
+ unsigned long next = page_end < address_end ? page_end
+ : address_end;
+ unsigned long avail = next - address;
int status, wakeBit;
+ struct page *page;
- /* Ensure that the corresponding page is properly mapped */
- /* FIXME: this isn't safe or sufficient - use get_user_pages */
- if (is_write) {
- char c;
- /* Ensure that the page is mapped and readable */
- if (__get_user(c, (char __user *)address)) {
- if (!ret)
- ret = -EFAULT;
- break;
- }
+ /* Either vaddr or paddr depending on the device version */
+ unsigned long xaddr;
+
+ /*
+ * We grab the pages on a page-by-page basis in case user
+ * space gives us a potentially huge buffer but the read only
+ * returns a small amount; in that case there's no need to pin
+ * that much memory to the process.
+ */
+ down_read(¤t->mm->mmap_sem);
+ ret = get_user_pages(current, current->mm, address, 1,
+ !is_write, 0, &page, NULL);
+ up_read(¤t->mm->mmap_sem);
+ if (ret < 0)
+ break;
+
+ if (dev->version) {
+ /* Device version 1 or newer (qemu-android) expects the
+ * physical address.
+ */
+ xaddr = page_to_phys(page) | (address & ~PAGE_MASK);
} else {
- /* Ensure that the page is mapped and writable */
- if (__put_user(0, (char __user *)address)) {
- if (!ret)
- ret = -EFAULT;
- break;
- }
+ /* Device version 0 (classic emulator) expects the
+ * virtual address.
+ */
+ xaddr = address;
}
/* Now, try to transfer the bytes in the current page */
spin_lock_irqsave(&dev->lock, irq_flags);
- if (access_with_param(dev, CMD_WRITE_BUFFER + cmd_offset,
- address, avail, pipe, &status)) {
+ if (access_with_param(dev,
+ is_write ? CMD_WRITE_BUFFER : CMD_READ_BUFFER,
+ xaddr, avail, pipe, &status)) {
gf_write_ptr(pipe, dev->base + PIPE_REG_CHANNEL,
dev->base + PIPE_REG_CHANNEL_HIGH);
writel(avail, dev->base + PIPE_REG_SIZE);
- gf_write_ptr((void *)address,
+ gf_write_ptr((void *)xaddr,
dev->base + PIPE_REG_ADDRESS,
dev->base + PIPE_REG_ADDRESS_HIGH);
- writel(CMD_WRITE_BUFFER + cmd_offset,
+ writel(is_write ? CMD_WRITE_BUFFER : CMD_READ_BUFFER,
dev->base + PIPE_REG_COMMAND);
status = readl(dev->base + PIPE_REG_STATUS);
}
spin_unlock_irqrestore(&dev->lock, irq_flags);
+ if (status > 0 && !is_write)
+ set_page_dirty(page);
+ put_page(page);
+
if (status > 0) { /* Correct transfer */
- ret += status;
+ count += status;
address += status;
continue;
+ } else if (status == 0) { /* EOF */
+ ret = 0;
+ break;
+ } else if (status < 0 && count > 0) {
+ /*
+ * An error occurred and we already transferred
+ * something on one of the previous pages.
+ * Just return what we already copied and log this
+ * error.
+ *
+ * Note: This seems like an incorrect approach but
+ * cannot change it until we check if any user space
+ * ABI relies on this behavior.
+ */
+ if (status != PIPE_ERROR_AGAIN)
+ pr_info_ratelimited("goldfish_pipe: backend returned error %d on %s\n",
+ status, is_write ? "write" : "read");
+ ret = 0;
+ break;
}
- if (status == 0) /* EOF */
- break;
-
- /* An error occured. If we already transfered stuff, just
- * return with its count. We expect the next call to return
- * an error code */
- if (ret > 0)
- break;
-
- /* If the error is not PIPE_ERROR_AGAIN, or if we are not in
- * non-blocking mode, just return the error code.
- */
+ /*
+ * If the error is not PIPE_ERROR_AGAIN, or if we are not in
+ * non-blocking mode, just return the error code.
+ */
if (status != PIPE_ERROR_AGAIN ||
(filp->f_flags & O_NONBLOCK) != 0) {
ret = goldfish_pipe_error_convert(status);
break;
}
- /* We will have to wait until more data/space is available.
- * First, mark the pipe as waiting for a specific wake signal.
- */
+ /*
+ * The backend blocked the read/write, wait until the backend
+ * tells us it's ready to process more data.
+ */
wakeBit = is_write ? BIT_WAKE_ON_WRITE : BIT_WAKE_ON_READ;
set_bit(wakeBit, &pipe->flags);
/* Tell the emulator we're going to wait for a wake event */
- goldfish_cmd(pipe, CMD_WAKE_ON_WRITE + cmd_offset);
+ goldfish_cmd(pipe,
+ is_write ? CMD_WAKE_ON_WRITE : CMD_WAKE_ON_READ);
/* Unlock the pipe, then wait for the wake signal */
mutex_unlock(&pipe->lock);
@@ -388,12 +410,13 @@
/* Try to re-acquire the lock */
if (mutex_lock_interruptible(&pipe->lock))
return -ERESTARTSYS;
-
- /* Try the transfer again */
- continue;
}
mutex_unlock(&pipe->lock);
- return ret;
+
+ if (ret < 0)
+ return ret;
+ else
+ return count;
}
static ssize_t goldfish_pipe_read(struct file *filp, char __user *buffer,
@@ -446,10 +469,11 @@
unsigned long irq_flags;
int count = 0;
- /* We're going to read from the emulator a list of (channel,flags)
- * pairs corresponding to the wake events that occured on each
- * blocked pipe (i.e. channel).
- */
+ /*
+ * We're going to read from the emulator a list of (channel,flags)
+ * pairs corresponding to the wake events that occurred on each
+ * blocked pipe (i.e. channel).
+ */
spin_lock_irqsave(&dev->lock, irq_flags);
for (;;) {
/* First read the channel, 0 means the end of the list */
@@ -600,6 +624,12 @@
goto error;
}
setup_access_params_addr(pdev, dev);
+
+ /* Although the pipe device in the classic Android emulator does not
+ * recognize the 'version' register, it won't treat this as an error
+ * either and will simply return 0, which is fine.
+ */
+ dev->version = readl(dev->base + PIPE_REG_VERSION);
return 0;
error:
@@ -615,11 +645,26 @@
return 0;
}
+static const struct acpi_device_id goldfish_pipe_acpi_match[] = {
+ { "GFSH0003", 0 },
+ { },
+};
+MODULE_DEVICE_TABLE(acpi, goldfish_pipe_acpi_match);
+
+static const struct of_device_id goldfish_pipe_of_match[] = {
+ { .compatible = "google,android-pipe", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, goldfish_pipe_of_match);
+
static struct platform_driver goldfish_pipe = {
.probe = goldfish_pipe_probe,
.remove = goldfish_pipe_remove,
.driver = {
- .name = "goldfish_pipe"
+ .name = "goldfish_pipe",
+ .owner = THIS_MODULE,
+ .of_match_table = goldfish_pipe_of_match,
+ .acpi_match_table = ACPI_PTR(goldfish_pipe_acpi_match),
}
};
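
The rewritten transfer loop never pins more than one user page at a time: each pass is clipped to the containing page before get_user_pages() runs, so a huge user buffer costs at most one pinned page per iteration. A standalone sketch of that chunking arithmetic, with PAGE_SIZE hardcoded to 4096 purely for illustration:

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Walk [address, address + len) one page-bounded chunk at a time,
 * mirroring the page_end/next/avail computation in
 * goldfish_pipe_read_write() above.
 */
static void walk_chunks(unsigned long address, unsigned long len)
{
	unsigned long address_end = address + len;

	while (address < address_end) {
		unsigned long page_end = (address & PAGE_MASK) + PAGE_SIZE;
		unsigned long next = page_end < address_end ? page_end
							    : address_end;
		unsigned long avail = next - address;

		printf("chunk at %#lx, %lu bytes\n", address, avail);
		address += avail;	/* a real transfer may advance less */
	}
}

int main(void)
{
	walk_chunks(0x1ffe, 8200);	/* unaligned start, spans four pages */
	return 0;
}
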
diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
index 20f0ad9..e20f23e 100644
--- a/drivers/platform/x86/intel-hid.c
+++ b/drivers/platform/x86/intel-hid.c
@@ -41,8 +41,7 @@
{ KE_KEY, 4, { KEY_HOME } },
{ KE_KEY, 5, { KEY_END } },
{ KE_KEY, 6, { KEY_PAGEUP } },
- { KE_KEY, 4, { KEY_PAGEDOWN } },
- { KE_KEY, 4, { KEY_HOME } },
+ { KE_KEY, 7, { KEY_PAGEDOWN } },
{ KE_KEY, 8, { KEY_RFKILL } },
{ KE_KEY, 9, { KEY_POWER } },
{ KE_KEY, 11, { KEY_SLEEP } },
diff --git a/drivers/platform/x86/intel_scu_ipcutil.c b/drivers/platform/x86/intel_scu_ipcutil.c
index 02bc5a63..aa45424 100644
--- a/drivers/platform/x86/intel_scu_ipcutil.c
+++ b/drivers/platform/x86/intel_scu_ipcutil.c
@@ -49,7 +49,7 @@
static int scu_reg_access(u32 cmd, struct scu_ipc_data *data)
{
- int count = data->count;
+ unsigned int count = data->count;
if (count == 0 || count == 3 || count > 4)
return -EINVAL;
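
The int-to-unsigned change closes a validation hole rather than being cosmetic: a negative count copied in from userspace compares as less than 4 when signed and sails through, while as an unsigned value it wraps to a huge number and trips count > 4. A self-contained demonstration (the ioctl plumbing and struct scu_ipc_data omitted):

#include <stdio.h>

/* Same validation as scu_reg_access(): reject 0, 3, and anything > 4. */
static int check_signed(int count)
{
	return (count == 0 || count == 3 || count > 4) ? -1 : 0;
}

static int check_unsigned(unsigned int count)
{
	return (count == 0 || count == 3 || count > 4) ? -1 : 0;
}

int main(void)
{
	/* -1 passes the signed check but wraps to UINT_MAX in the
	 * unsigned one, where "count > 4" finally rejects it.
	 */
	printf("signed(-1):   %d\n", check_signed(-1));		/* 0, accepted */
	printf("unsigned(-1): %d\n", check_unsigned(-1));	/* -1, rejected */
	return 0;
}
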
diff --git a/drivers/power/bq27xxx_battery_i2c.c b/drivers/power/bq27xxx_battery_i2c.c
index 9429e66..8eafc6f 100644
--- a/drivers/power/bq27xxx_battery_i2c.c
+++ b/drivers/power/bq27xxx_battery_i2c.c
@@ -21,6 +21,9 @@
#include <linux/power/bq27xxx_battery.h>
+static DEFINE_IDR(battery_id);
+static DEFINE_MUTEX(battery_mutex);
+
static irqreturn_t bq27xxx_battery_irq_handler_thread(int irq, void *data)
{
struct bq27xxx_device_info *di = data;
@@ -70,19 +73,33 @@
{
struct bq27xxx_device_info *di;
int ret;
+ char *name;
+ int num;
+
+ /* Get new ID for the new battery device */
+ mutex_lock(&battery_mutex);
+ num = idr_alloc(&battery_id, client, 0, 0, GFP_KERNEL);
+ mutex_unlock(&battery_mutex);
+ if (num < 0)
+ return num;
+
+ name = devm_kasprintf(&client->dev, GFP_KERNEL, "%s-%d", id->name, num);
+ if (!name)
+ goto err_mem;
di = devm_kzalloc(&client->dev, sizeof(*di), GFP_KERNEL);
if (!di)
- return -ENOMEM;
+ goto err_mem;
+ di->id = num;
di->dev = &client->dev;
di->chip = id->driver_data;
- di->name = id->name;
+ di->name = name;
di->bus.read = bq27xxx_battery_i2c_read;
ret = bq27xxx_battery_setup(di);
if (ret)
- return ret;
+ goto err_failed;
/* Schedule a polling after about 1 min */
schedule_delayed_work(&di->work, 60 * HZ);
@@ -103,6 +120,16 @@
}
return 0;
+
+err_mem:
+ ret = -ENOMEM;
+
+err_failed:
+ mutex_lock(&battery_mutex);
+ idr_remove(&battery_id, num);
+ mutex_unlock(&battery_mutex);
+
+ return ret;
}
static int bq27xxx_battery_i2c_remove(struct i2c_client *client)
@@ -111,6 +138,10 @@
bq27xxx_battery_teardown(di);
+ mutex_lock(&battery_mutex);
+ idr_remove(&battery_id, di->id);
+ mutex_unlock(&battery_mutex);
+
return 0;
}
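
The probe now follows the usual idr recipe for unique instance names: take a number under a mutex, build the name from it, and hand the number back on every failure path (hence the err_mem/err_failed labels). A minimal sketch of just that pattern — the example_* names are hypothetical, not the driver's:

#include <linux/idr.h>
#include <linux/mutex.h>
#include <linux/gfp.h>

static DEFINE_IDR(example_id);
static DEFINE_MUTEX(example_mutex);

/* Returns an instance number >= 0, or a negative errno. The caller
 * must hand it back with example_put_id() on every failure path.
 */
static int example_get_id(void *ptr)
{
	int num;

	mutex_lock(&example_mutex);
	num = idr_alloc(&example_id, ptr, 0, 0, GFP_KERNEL);
	mutex_unlock(&example_mutex);

	return num;
}

static void example_put_id(int num)
{
	mutex_lock(&example_mutex);
	idr_remove(&example_id, num);
	mutex_unlock(&example_mutex);
}
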
diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index 41605da..c78db05 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -3035,6 +3035,7 @@
max = block->base->discipline->max_blocks << block->s2b_shift;
}
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, block->request_queue);
+ block->request_queue->limits.max_dev_sectors = max;
blk_queue_logical_block_size(block->request_queue,
block->bp_block);
blk_queue_max_hw_sectors(block->request_queue, max);
diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c
index 184b1db..286782c 100644
--- a/drivers/s390/block/dasd_alias.c
+++ b/drivers/s390/block/dasd_alias.c
@@ -264,8 +264,10 @@
spin_unlock_irqrestore(&lcu->lock, flags);
cancel_work_sync(&lcu->suc_data.worker);
spin_lock_irqsave(&lcu->lock, flags);
- if (device == lcu->suc_data.device)
+ if (device == lcu->suc_data.device) {
+ dasd_put_device(device);
lcu->suc_data.device = NULL;
+ }
}
was_pending = 0;
if (device == lcu->ruac_data.device) {
@@ -273,8 +275,10 @@
was_pending = 1;
cancel_delayed_work_sync(&lcu->ruac_data.dwork);
spin_lock_irqsave(&lcu->lock, flags);
- if (device == lcu->ruac_data.device)
+ if (device == lcu->ruac_data.device) {
+ dasd_put_device(device);
lcu->ruac_data.device = NULL;
+ }
}
private->lcu = NULL;
spin_unlock_irqrestore(&lcu->lock, flags);
@@ -549,8 +553,10 @@
if ((rc && (rc != -EOPNOTSUPP)) || (lcu->flags & NEED_UAC_UPDATE)) {
DBF_DEV_EVENT(DBF_WARNING, device, "could not update"
" alias data in lcu (rc = %d), retry later", rc);
- schedule_delayed_work(&lcu->ruac_data.dwork, 30*HZ);
+ if (!schedule_delayed_work(&lcu->ruac_data.dwork, 30*HZ))
+ dasd_put_device(device);
} else {
+ dasd_put_device(device);
lcu->ruac_data.device = NULL;
lcu->flags &= ~UPDATE_PENDING;
}
@@ -593,8 +599,10 @@
*/
if (!usedev)
return -EINVAL;
+ dasd_get_device(usedev);
lcu->ruac_data.device = usedev;
- schedule_delayed_work(&lcu->ruac_data.dwork, 0);
+ if (!schedule_delayed_work(&lcu->ruac_data.dwork, 0))
+ dasd_put_device(usedev);
return 0;
}
@@ -723,7 +731,7 @@
ASCEBC((char *) &cqr->magic, 4);
ccw = cqr->cpaddr;
ccw->cmd_code = DASD_ECKD_CCW_RSCK;
- ccw->flags = 0 ;
+ ccw->flags = CCW_FLAG_SLI;
ccw->count = 16;
ccw->cda = (__u32)(addr_t) cqr->data;
((char *)cqr->data)[0] = reason;
@@ -930,6 +938,7 @@
/* 3. read new alias configuration */
_schedule_lcu_update(lcu, device);
lcu->suc_data.device = NULL;
+ dasd_put_device(device);
spin_unlock_irqrestore(&lcu->lock, flags);
}
@@ -989,6 +998,8 @@
}
lcu->suc_data.reason = reason;
lcu->suc_data.device = device;
+ dasd_get_device(device);
spin_unlock(&lcu->lock);
- schedule_work(&lcu->suc_data.worker);
+ if (!schedule_work(&lcu->suc_data.worker))
+ dasd_put_device(device);
};
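
Every hunk in this file applies the same discipline: take a device reference before publishing the pointer to a worker, and drop it again whenever schedule_work()/schedule_delayed_work() returns false (work already queued) or when the worker is done with it, so the count balances on every path. The skeleton, with my_obj standing in for struct dasd_device and get_ref()/put_ref() for dasd_get_device()/dasd_put_device() (names hypothetical):

#include <linux/workqueue.h>

struct my_obj;
void get_ref(struct my_obj *obj);
void put_ref(struct my_obj *obj);
void do_update(struct my_obj *obj);

struct my_ctx {
	struct work_struct worker;
	struct my_obj *worker_obj;
};

static void arm_worker(struct my_ctx *ctx, struct my_obj *obj)
{
	get_ref(obj);			/* reference rides along with the work */
	ctx->worker_obj = obj;
	if (!schedule_work(&ctx->worker))
		put_ref(obj);		/* already queued: drop the extra ref */
}

static void worker_fn(struct work_struct *work)
{
	struct my_ctx *ctx = container_of(work, struct my_ctx, worker);
	struct my_obj *obj = ctx->worker_obj;

	do_update(obj);
	ctx->worker_obj = NULL;
	put_ref(obj);			/* balances the get in arm_worker() */
}
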
diff --git a/drivers/scsi/device_handler/scsi_dh_rdac.c b/drivers/scsi/device_handler/scsi_dh_rdac.c
index 3613581..93880ed 100644
--- a/drivers/scsi/device_handler/scsi_dh_rdac.c
+++ b/drivers/scsi/device_handler/scsi_dh_rdac.c
@@ -562,7 +562,7 @@
/*
* Command Lock contention
*/
- err = SCSI_DH_RETRY;
+ err = SCSI_DH_IMM_RETRY;
break;
default:
break;
@@ -612,6 +612,8 @@
err = mode_select_handle_sense(sdev, h->sense);
if (err == SCSI_DH_RETRY && retry_cnt--)
goto retry;
+ if (err == SCSI_DH_IMM_RETRY)
+ goto retry;
}
if (err == SCSI_DH_OK) {
h->state = RDAC_STATE_ACTIVE;
diff --git a/drivers/scsi/hisi_sas/Kconfig b/drivers/scsi/hisi_sas/Kconfig
index b676618..d1dd161 100644
--- a/drivers/scsi/hisi_sas/Kconfig
+++ b/drivers/scsi/hisi_sas/Kconfig
@@ -1,6 +1,6 @@
config SCSI_HISI_SAS
tristate "HiSilicon SAS"
- depends on HAS_DMA
+ depends on HAS_DMA && HAS_IOMEM
depends on ARM64 || COMPILE_TEST
select SCSI_SAS_LIBSAS
select BLK_DEV_INTEGRITY
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
index 057fdeb..eea24d7 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
@@ -1289,13 +1289,10 @@
goto out;
}
- if (cmplt_hdr_data & CMPLT_HDR_ERR_RCRD_XFRD_MSK) {
- if (!(cmplt_hdr_data & CMPLT_HDR_CMD_CMPLT_MSK) ||
- !(cmplt_hdr_data & CMPLT_HDR_RSPNS_XFRD_MSK))
- ts->stat = SAS_DATA_OVERRUN;
- else
- slot_err_v1_hw(hisi_hba, task, slot);
+ if (cmplt_hdr_data & CMPLT_HDR_ERR_RCRD_XFRD_MSK &&
+ !(cmplt_hdr_data & CMPLT_HDR_RSPNS_XFRD_MSK)) {
+ slot_err_v1_hw(hisi_hba, task, slot);
goto out;
}
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 52a8765..692a757 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -2204,7 +2204,7 @@
/* Clear outstanding commands array. */
for (que = 0; que < ha->max_req_queues; que++) {
req = ha->req_q_map[que];
- if (!req)
+ if (!req || !test_bit(que, ha->req_qid_map))
continue;
req->out_ptr = (void *)(req->ring + req->length);
*req->out_ptr = 0;
@@ -2221,7 +2221,7 @@
for (que = 0; que < ha->max_rsp_queues; que++) {
rsp = ha->rsp_q_map[que];
- if (!rsp)
+ if (!rsp || !test_bit(que, ha->rsp_qid_map))
continue;
rsp->in_ptr = (void *)(rsp->ring + rsp->length);
*rsp->in_ptr = 0;
@@ -4981,7 +4981,7 @@
for (i = 1; i < ha->max_rsp_queues; i++) {
rsp = ha->rsp_q_map[i];
- if (rsp) {
+ if (rsp && test_bit(i, ha->rsp_qid_map)) {
rsp->options &= ~BIT_0;
ret = qla25xx_init_rsp_que(base_vha, rsp);
if (ret != QLA_SUCCESS)
@@ -4996,8 +4996,8 @@
}
for (i = 1; i < ha->max_req_queues; i++) {
req = ha->req_q_map[i];
- if (req) {
- /* Clear outstanding commands array. */
+ if (req && test_bit(i, ha->req_qid_map)) {
+ /* Clear outstanding commands array. */
req->options &= ~BIT_0;
ret = qla25xx_init_req_que(base_vha, req);
if (ret != QLA_SUCCESS)
diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index d4d65eb..4af9547 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -3063,9 +3063,9 @@
"MSI-X: Failed to enable support "
"-- %d/%d\n Retry with %d vectors.\n",
ha->msix_count, ret, ret);
+ ha->msix_count = ret;
+ ha->max_rsp_queues = ha->msix_count - 1;
}
- ha->msix_count = ret;
- ha->max_rsp_queues = ha->msix_count - 1;
ha->msix_entries = kzalloc(sizeof(struct qla_msix_entry) *
ha->msix_count, GFP_KERNEL);
if (!ha->msix_entries) {
diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
index c5dd594..cf7ba52 100644
--- a/drivers/scsi/qla2xxx/qla_mid.c
+++ b/drivers/scsi/qla2xxx/qla_mid.c
@@ -600,7 +600,7 @@
/* Delete request queues */
for (cnt = 1; cnt < ha->max_req_queues; cnt++) {
req = ha->req_q_map[cnt];
- if (req) {
+ if (req && test_bit(cnt, ha->req_qid_map)) {
ret = qla25xx_delete_req_que(vha, req);
if (ret != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x00ea,
@@ -614,7 +614,7 @@
/* Delete response queues */
for (cnt = 1; cnt < ha->max_rsp_queues; cnt++) {
rsp = ha->rsp_q_map[cnt];
- if (rsp) {
+ if (rsp && test_bit(cnt, ha->rsp_qid_map)) {
ret = qla25xx_delete_rsp_que(vha, rsp);
if (ret != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x00eb,
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index f1788db..f6c7ce3 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -409,6 +409,9 @@
int cnt;
for (cnt = 0; cnt < ha->max_req_queues; cnt++) {
+ if (!test_bit(cnt, ha->req_qid_map))
+ continue;
+
req = ha->req_q_map[cnt];
qla2x00_free_req_que(ha, req);
}
@@ -416,6 +419,9 @@
ha->req_q_map = NULL;
for (cnt = 0; cnt < ha->max_rsp_queues; cnt++) {
+ if (!test_bit(cnt, ha->rsp_qid_map))
+ continue;
+
rsp = ha->rsp_q_map[cnt];
qla2x00_free_rsp_que(ha, rsp);
}
diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
index 8075a4c..ee967be 100644
--- a/drivers/scsi/qla2xxx/qla_target.c
+++ b/drivers/scsi/qla2xxx/qla_target.c
@@ -105,7 +105,7 @@
static int qlt_issue_task_mgmt(struct qla_tgt_sess *sess, uint32_t lun,
int fn, void *iocb, int flags);
static void qlt_send_term_exchange(struct scsi_qla_host *ha, struct qla_tgt_cmd
- *cmd, struct atio_from_isp *atio, int ha_locked);
+ *cmd, struct atio_from_isp *atio, int ha_locked, int ul_abort);
static void qlt_reject_free_srr_imm(struct scsi_qla_host *ha,
struct qla_tgt_srr_imm *imm, int ha_lock);
static void qlt_abort_cmd_on_host_reset(struct scsi_qla_host *vha,
@@ -1756,7 +1756,7 @@
qlt_send_notify_ack(vha, &mcmd->orig_iocb.imm_ntfy,
0, 0, 0, 0, 0, 0);
else {
- if (mcmd->se_cmd.se_tmr_req->function == TMR_ABORT_TASK)
+ if (mcmd->orig_iocb.atio.u.raw.entry_type == ABTS_RECV_24XX)
qlt_24xx_send_abts_resp(vha, &mcmd->orig_iocb.abts,
mcmd->fc_tm_rsp, false);
else
@@ -2665,7 +2665,7 @@
/* no need to terminate. FW already freed exchange. */
qlt_abort_cmd_on_host_reset(cmd->vha, cmd);
else
- qlt_send_term_exchange(vha, cmd, &cmd->atio, 1);
+ qlt_send_term_exchange(vha, cmd, &cmd->atio, 1, 0);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
return 0;
}
@@ -3173,7 +3173,8 @@
}
static void qlt_send_term_exchange(struct scsi_qla_host *vha,
- struct qla_tgt_cmd *cmd, struct atio_from_isp *atio, int ha_locked)
+ struct qla_tgt_cmd *cmd, struct atio_from_isp *atio, int ha_locked,
+ int ul_abort)
{
unsigned long flags = 0;
int rc;
@@ -3193,8 +3194,7 @@
qlt_alloc_qfull_cmd(vha, atio, 0, 0);
done:
- if (cmd && (!cmd->aborted ||
- !cmd->cmd_sent_to_fw)) {
+ if (cmd && !ul_abort && !cmd->aborted) {
if (cmd->sg_mapped)
qlt_unmap_sg(vha, cmd);
vha->hw->tgt.tgt_ops->free_cmd(cmd);
@@ -3253,21 +3253,38 @@
}
-void qlt_abort_cmd(struct qla_tgt_cmd *cmd)
+int qlt_abort_cmd(struct qla_tgt_cmd *cmd)
{
struct qla_tgt *tgt = cmd->tgt;
struct scsi_qla_host *vha = tgt->vha;
struct se_cmd *se_cmd = &cmd->se_cmd;
+ unsigned long flags;
ql_dbg(ql_dbg_tgt_mgt, vha, 0xf014,
"qla_target(%d): terminating exchange for aborted cmd=%p "
"(se_cmd=%p, tag=%llu)", vha->vp_idx, cmd, &cmd->se_cmd,
se_cmd->tag);
+ spin_lock_irqsave(&cmd->cmd_lock, flags);
+ if (cmd->aborted) {
+ spin_unlock_irqrestore(&cmd->cmd_lock, flags);
+ /*
+ * It's normal to see 2 calls in this path:
+ * 1) XFER Rdy completion + CMD_T_ABORT
+ * 2) TCM TMR - drain_state_list
+ */
+ ql_dbg(ql_dbg_tgt_mgt, vha, 0xffff,
+ "multiple abort. %p transport_state %x, t_state %x,"
+ " se_cmd_flags %x \n", cmd, cmd->se_cmd.transport_state,
+ cmd->se_cmd.t_state,cmd->se_cmd.se_cmd_flags);
+ return EIO;
+ }
cmd->aborted = 1;
cmd->cmd_flags |= BIT_6;
+ spin_unlock_irqrestore(&cmd->cmd_lock, flags);
- qlt_send_term_exchange(vha, cmd, &cmd->atio, 0);
+ qlt_send_term_exchange(vha, cmd, &cmd->atio, 0, 1);
+ return 0;
}
EXPORT_SYMBOL(qlt_abort_cmd);
@@ -3282,6 +3299,9 @@
BUG_ON(cmd->cmd_in_wq);
+ if (cmd->sg_mapped)
+ qlt_unmap_sg(cmd->vha, cmd);
+
if (!cmd->q_full)
qlt_decr_num_pend_cmds(cmd->vha);
@@ -3399,7 +3419,7 @@
term = 1;
if (term)
- qlt_send_term_exchange(vha, cmd, &cmd->atio, 1);
+ qlt_send_term_exchange(vha, cmd, &cmd->atio, 1, 0);
return term;
}
@@ -3580,12 +3600,13 @@
case CTIO_PORT_LOGGED_OUT:
case CTIO_PORT_UNAVAILABLE:
{
- int logged_out = (status & 0xFFFF);
+ int logged_out =
+ (status & 0xFFFF) == CTIO_PORT_LOGGED_OUT;
+
ql_dbg(ql_dbg_tgt_mgt, vha, 0xf059,
"qla_target(%d): CTIO with %s status %x "
"received (state %x, se_cmd %p)\n", vha->vp_idx,
- (logged_out == CTIO_PORT_LOGGED_OUT) ?
- "PORT LOGGED OUT" : "PORT UNAVAILABLE",
+ logged_out ? "PORT LOGGED OUT" : "PORT UNAVAILABLE",
status, cmd->state, se_cmd);
if (logged_out && cmd->sess) {
@@ -3754,6 +3775,7 @@
goto out_term;
}
+ spin_lock_init(&cmd->cmd_lock);
cdb = &atio->u.isp24.fcp_cmnd.cdb[0];
cmd->se_cmd.tag = atio->u.isp24.exchange_addr;
cmd->unpacked_lun = scsilun_to_int(
@@ -3796,7 +3818,7 @@
*/
cmd->cmd_flags |= BIT_2;
spin_lock_irqsave(&ha->hardware_lock, flags);
- qlt_send_term_exchange(vha, NULL, &cmd->atio, 1);
+ qlt_send_term_exchange(vha, NULL, &cmd->atio, 1, 0);
qlt_decr_num_pend_cmds(vha);
percpu_ida_free(&sess->se_sess->sess_tag_pool, cmd->se_cmd.map_tag);
@@ -3918,7 +3940,7 @@
out_term:
spin_lock_irqsave(&ha->hardware_lock, flags);
- qlt_send_term_exchange(vha, NULL, &op->atio, 1);
+ qlt_send_term_exchange(vha, NULL, &op->atio, 1, 0);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
kfree(op);
@@ -3982,7 +4004,8 @@
cmd->cmd_in_wq = 1;
cmd->cmd_flags |= BIT_0;
- cmd->se_cmd.cpuid = -1;
+ cmd->se_cmd.cpuid = ha->msix_count ?
+ ha->tgt.rspq_vector_cpuid : WORK_CPU_UNBOUND;
spin_lock(&vha->cmd_list_lock);
list_add_tail(&cmd->cmd_list, &vha->qla_cmd_list);
@@ -3990,7 +4013,6 @@
INIT_WORK(&cmd->work, qlt_do_work);
if (ha->msix_count) {
- cmd->se_cmd.cpuid = ha->tgt.rspq_vector_cpuid;
if (cmd->atio.u.isp24.fcp_cmnd.rddata)
queue_work_on(smp_processor_id(), qla_tgt_wq,
&cmd->work);
@@ -4771,7 +4793,7 @@
dump_stack();
} else {
cmd->cmd_flags |= BIT_9;
- qlt_send_term_exchange(vha, cmd, &cmd->atio, 1);
+ qlt_send_term_exchange(vha, cmd, &cmd->atio, 1, 0);
}
spin_unlock_irqrestore(&ha->hardware_lock, flags);
}
@@ -4950,7 +4972,7 @@
sctio, sctio->srr_id);
list_del(&sctio->srr_list_entry);
qlt_send_term_exchange(vha, sctio->cmd,
- &sctio->cmd->atio, 1);
+ &sctio->cmd->atio, 1, 0);
kfree(sctio);
}
}
@@ -5123,7 +5145,7 @@
atio->u.isp24.fcp_hdr.s_id);
spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
if (!sess) {
- qlt_send_term_exchange(vha, NULL, atio, 1);
+ qlt_send_term_exchange(vha, NULL, atio, 1, 0);
return 0;
}
/* Sending marker isn't necessary, since we called from ISR */
@@ -5406,7 +5428,7 @@
#if 1 /* With TERM EXCHANGE some FC cards refuse to boot */
qlt_send_busy(vha, atio, SAM_STAT_BUSY);
#else
- qlt_send_term_exchange(vha, NULL, atio, 1);
+ qlt_send_term_exchange(vha, NULL, atio, 1, 0);
#endif
if (!ha_locked)
@@ -5523,7 +5545,7 @@
#if 1 /* With TERM EXCHANGE some FC cards refuse to boot */
qlt_send_busy(vha, atio, 0);
#else
- qlt_send_term_exchange(vha, NULL, atio, 1);
+ qlt_send_term_exchange(vha, NULL, atio, 1, 0);
#endif
} else {
if (tgt->tgt_stop) {
@@ -5532,7 +5554,7 @@
"command to target, sending TERM "
"EXCHANGE for rsp\n");
qlt_send_term_exchange(vha, NULL,
- atio, 1);
+ atio, 1, 0);
} else {
ql_dbg(ql_dbg_tgt, vha, 0xe060,
"qla_target(%d): Unable to send "
@@ -5960,7 +5982,7 @@
return;
out_term:
- qlt_send_term_exchange(vha, NULL, &prm->tm_iocb2, 0);
+ qlt_send_term_exchange(vha, NULL, &prm->tm_iocb2, 1, 0);
if (sess)
ha->tgt.tgt_ops->put_sess(sess);
spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h
index 71b2865..22a6a76 100644
--- a/drivers/scsi/qla2xxx/qla_target.h
+++ b/drivers/scsi/qla2xxx/qla_target.h
@@ -943,6 +943,36 @@
qlt_plogi_ack_t *plogi_link[QLT_PLOGI_LINK_MAX];
};
+typedef enum {
+ /*
+ * BIT_0 - Atio Arrival / schedule to work
+ * BIT_1 - qlt_do_work
+ * BIT_2 - qlt_do work failed
+ * BIT_3 - xfer rdy/tcm_qla2xxx_write_pending
+ * BIT_4 - read respond/tcm_qla2xxx_queue_data_in
+ * BIT_5 - status respond / tcm_qla2xxx_queue_status
+ * BIT_6 - tcm request to abort/Term exchange.
+ * pre_xmit_response->qlt_send_term_exchange
+ * BIT_7 - SRR received (qlt_handle_srr->qlt_xmit_response)
+ * BIT_8 - SRR received (qlt_handle_srr->qlt_rdy_to_xfer)
+ * BIT_9 - SRR received (qlt_handle_srr->qlt_send_term_exchange)
+ * BIT_10 - Data in - handle_data->tcm_qla2xxx_handle_data
+
+ * BIT_12 - good completion - qlt_ctio_do_completion -->free_cmd
+ * BIT_13 - Bad completion -
+ * qlt_ctio_do_completion --> qlt_term_ctio_exchange
+ * BIT_14 - Back end data received/sent.
+ * BIT_15 - SRR prepare ctio
+ * BIT_16 - complete free
+ * BIT_17 - flush - qlt_abort_cmd_on_host_reset
+ * BIT_18 - completion w/abort status
+ * BIT_19 - completion w/unknown status
+ * BIT_20 - tcm_qla2xxx_free_cmd
+ */
+ CMD_FLAG_DATA_WORK = BIT_11,
+ CMD_FLAG_DATA_WORK_FREE = BIT_21,
+} cmd_flags_t;
+
struct qla_tgt_cmd {
struct se_cmd se_cmd;
struct qla_tgt_sess *sess;
@@ -952,6 +982,7 @@
/* Sense buffer that will be mapped into outgoing status */
unsigned char sense_buffer[TRANSPORT_SENSE_BUFFER];
+ spinlock_t cmd_lock;
/* to save extra sess dereferences */
unsigned int conf_compl_supported:1;
unsigned int sg_mapped:1;
@@ -986,30 +1017,8 @@
uint64_t jiffies_at_alloc;
uint64_t jiffies_at_free;
- /* BIT_0 - Atio Arrival / schedule to work
- * BIT_1 - qlt_do_work
- * BIT_2 - qlt_do work failed
- * BIT_3 - xfer rdy/tcm_qla2xxx_write_pending
- * BIT_4 - read respond/tcm_qla2xx_queue_data_in
- * BIT_5 - status respond / tcm_qla2xx_queue_status
- * BIT_6 - tcm request to abort/Term exchange.
- * pre_xmit_response->qlt_send_term_exchange
- * BIT_7 - SRR received (qlt_handle_srr->qlt_xmit_response)
- * BIT_8 - SRR received (qlt_handle_srr->qlt_rdy_to_xfer)
- * BIT_9 - SRR received (qla_handle_srr->qlt_send_term_exchange)
- * BIT_10 - Data in - hanlde_data->tcm_qla2xxx_handle_data
- * BIT_11 - Data actually going to TCM : tcm_qla2xx_handle_data_work
- * BIT_12 - good completion - qlt_ctio_do_completion -->free_cmd
- * BIT_13 - Bad completion -
- * qlt_ctio_do_completion --> qlt_term_ctio_exchange
- * BIT_14 - Back end data received/sent.
- * BIT_15 - SRR prepare ctio
- * BIT_16 - complete free
- * BIT_17 - flush - qlt_abort_cmd_on_host_reset
- * BIT_18 - completion w/abort status
- * BIT_19 - completion w/unknown status
- */
- uint32_t cmd_flags;
+
+ cmd_flags_t cmd_flags;
};
struct qla_tgt_sess_work_param {
@@ -1148,7 +1157,7 @@
extern void qlt_response_pkt_all_vps(struct scsi_qla_host *, response_t *);
extern int qlt_rdy_to_xfer(struct qla_tgt_cmd *);
extern int qlt_xmit_response(struct qla_tgt_cmd *, int, uint8_t);
-extern void qlt_abort_cmd(struct qla_tgt_cmd *);
+extern int qlt_abort_cmd(struct qla_tgt_cmd *);
extern void qlt_xmit_tm_rsp(struct qla_tgt_mgmt_cmd *);
extern void qlt_free_mcmd(struct qla_tgt_mgmt_cmd *);
extern void qlt_free_cmd(struct qla_tgt_cmd *cmd);
diff --git a/drivers/scsi/qla2xxx/qla_tmpl.c b/drivers/scsi/qla2xxx/qla_tmpl.c
index ddbe2e7..c3e6225 100644
--- a/drivers/scsi/qla2xxx/qla_tmpl.c
+++ b/drivers/scsi/qla2xxx/qla_tmpl.c
@@ -395,6 +395,10 @@
if (ent->t263.queue_type == T263_QUEUE_TYPE_REQ) {
for (i = 0; i < vha->hw->max_req_queues; i++) {
struct req_que *req = vha->hw->req_q_map[i];
+
+ if (!test_bit(i, vha->hw->req_qid_map))
+ continue;
+
if (req || !buf) {
length = req ?
req->length : REQUEST_ENTRY_CNT_24XX;
@@ -408,6 +412,10 @@
} else if (ent->t263.queue_type == T263_QUEUE_TYPE_RSP) {
for (i = 0; i < vha->hw->max_rsp_queues; i++) {
struct rsp_que *rsp = vha->hw->rsp_q_map[i];
+
+ if (!test_bit(i, vha->hw->rsp_qid_map))
+ continue;
+
if (rsp || !buf) {
length = rsp ?
rsp->length : RESPONSE_ENTRY_CNT_MQ;
@@ -634,6 +642,10 @@
if (ent->t274.queue_type == T274_QUEUE_TYPE_REQ_SHAD) {
for (i = 0; i < vha->hw->max_req_queues; i++) {
struct req_que *req = vha->hw->req_q_map[i];
+
+ if (!test_bit(i, vha->hw->req_qid_map))
+ continue;
+
if (req || !buf) {
qla27xx_insert16(i, buf, len);
qla27xx_insert16(1, buf, len);
@@ -645,6 +657,10 @@
} else if (ent->t274.queue_type == T274_QUEUE_TYPE_RSP_SHAD) {
for (i = 0; i < vha->hw->max_rsp_queues; i++) {
struct rsp_que *rsp = vha->hw->rsp_q_map[i];
+
+ if (!test_bit(i, vha->hw->rsp_qid_map))
+ continue;
+
if (rsp || !buf) {
qla27xx_insert16(i, buf, len);
qla27xx_insert16(1, buf, len);
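
All of the qla2xxx hunks in this series share one guard: a slot in req_q_map/rsp_q_map is only trustworthy if its bit is set in the corresponding qid_map, since deleted queues can leave stale pointers behind. Reduced to a standalone loop, with a plain array and bitmask in place of the driver's structures:

#include <stdio.h>

#define MAX_QUEUES 8

struct queue { int id; };

static struct queue *q_map[MAX_QUEUES];
static unsigned long qid_map;	/* bit n set => q_map[n] is live */

int main(void)
{
	static struct queue q1 = { 1 }, q2 = { 2 };
	int i;

	q_map[1] = &q1;
	qid_map |= 1UL << 1;
	q_map[2] = &q2;		/* stale: bit never set, must be skipped */

	for (i = 0; i < MAX_QUEUES; i++) {
		if (!q_map[i] || !((qid_map >> i) & 1))
			continue;	/* the guard added throughout */
		printf("processing queue %d\n", q_map[i]->id);
	}
	return 0;
}
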
diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
index faf0a12..1808a01 100644
--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
@@ -298,6 +298,10 @@
{
cmd->vha->tgt_counters.core_qla_free_cmd++;
cmd->cmd_in_wq = 1;
+
+ BUG_ON(cmd->cmd_flags & BIT_20);
+ cmd->cmd_flags |= BIT_20;
+
INIT_WORK(&cmd->work, tcm_qla2xxx_complete_free);
queue_work_on(smp_processor_id(), tcm_qla2xxx_free_wq, &cmd->work);
}
@@ -374,6 +378,20 @@
{
struct qla_tgt_cmd *cmd = container_of(se_cmd,
struct qla_tgt_cmd, se_cmd);
+
+ if (cmd->aborted) {
+ /* Cmd can loop during Q-full. tcm_qla2xxx_aborted_task
+ * can get ahead of this cmd. tcm_qla2xxx_aborted_task
+ * has already kick-started the free.
+ */
+ pr_debug("write_pending aborted cmd[%p] refcount %d "
+ "transport_state %x, t_state %x, se_cmd_flags %x\n",
+ cmd, cmd->se_cmd.cmd_kref.refcount.counter,
+ cmd->se_cmd.transport_state,
+ cmd->se_cmd.t_state,
+ cmd->se_cmd.se_cmd_flags);
+ return 0;
+ }
cmd->cmd_flags |= BIT_3;
cmd->bufflen = se_cmd->data_length;
cmd->dma_data_direction = target_reverse_dma_direction(se_cmd);
@@ -405,7 +423,7 @@
se_cmd->t_state == TRANSPORT_COMPLETE_QF_WP) {
spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
wait_for_completion_timeout(&se_cmd->t_transport_stop_comp,
- 3 * HZ);
+ 50);
return 0;
}
spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
@@ -444,6 +462,9 @@
if (bidi)
flags |= TARGET_SCF_BIDI_OP;
+ if (se_cmd->cpuid != WORK_CPU_UNBOUND)
+ flags |= TARGET_SCF_USE_CPUID;
+
sess = cmd->sess;
if (!sess) {
pr_err("Unable to locate struct qla_tgt_sess from qla_tgt_cmd\n");
@@ -465,13 +486,25 @@
static void tcm_qla2xxx_handle_data_work(struct work_struct *work)
{
struct qla_tgt_cmd *cmd = container_of(work, struct qla_tgt_cmd, work);
+ unsigned long flags;
/*
* Ensure that the complete FCP WRITE payload has been received.
* Otherwise return an exception via CHECK_CONDITION status.
*/
cmd->cmd_in_wq = 0;
- cmd->cmd_flags |= BIT_11;
+
+ spin_lock_irqsave(&cmd->cmd_lock, flags);
+ cmd->cmd_flags |= CMD_FLAG_DATA_WORK;
+ if (cmd->aborted) {
+ cmd->cmd_flags |= CMD_FLAG_DATA_WORK_FREE;
+ spin_unlock_irqrestore(&cmd->cmd_lock, flags);
+
+ tcm_qla2xxx_free_cmd(cmd);
+ return;
+ }
+ spin_unlock_irqrestore(&cmd->cmd_lock, flags);
+
cmd->vha->tgt_counters.qla_core_ret_ctio++;
if (!cmd->write_data_transferred) {
/*
@@ -546,6 +579,20 @@
struct qla_tgt_cmd *cmd = container_of(se_cmd,
struct qla_tgt_cmd, se_cmd);
+ if (cmd->aborted) {
+ /* Cmd can loop during Q-full. tcm_qla2xxx_aborted_task
+ * can get ahead of this cmd. tcm_qla2xxx_aborted_task
+ * has already kick-started the free.
+ */
+ pr_debug("queue_data_in aborted cmd[%p] refcount %d "
+ "transport_state %x, t_state %x, se_cmd_flags %x\n",
+ cmd, cmd->se_cmd.cmd_kref.refcount.counter,
+ cmd->se_cmd.transport_state,
+ cmd->se_cmd.t_state,
+ cmd->se_cmd.se_cmd_flags);
+ return 0;
+ }
+
cmd->cmd_flags |= BIT_4;
cmd->bufflen = se_cmd->data_length;
cmd->dma_data_direction = target_reverse_dma_direction(se_cmd);
@@ -637,11 +684,34 @@
qlt_xmit_tm_rsp(mcmd);
}
+
+#define DATA_WORK_NOT_FREE(_flags) \
+ (((_flags) & (CMD_FLAG_DATA_WORK | CMD_FLAG_DATA_WORK_FREE)) == \
+ CMD_FLAG_DATA_WORK)
static void tcm_qla2xxx_aborted_task(struct se_cmd *se_cmd)
{
struct qla_tgt_cmd *cmd = container_of(se_cmd,
struct qla_tgt_cmd, se_cmd);
- qlt_abort_cmd(cmd);
+ unsigned long flags;
+
+ if (qlt_abort_cmd(cmd))
+ return;
+
+ spin_lock_irqsave(&cmd->cmd_lock, flags);
+ if ((cmd->state == QLA_TGT_STATE_NEW) ||
+ ((cmd->state == QLA_TGT_STATE_DATA_IN) &&
+ DATA_WORK_NOT_FREE(cmd->cmd_flags))) {
+
+ cmd->cmd_flags |= CMD_FLAG_DATA_WORK_FREE;
+ spin_unlock_irqrestore(&cmd->cmd_lock, flags);
+ /* Cmd has not reached firmware.
+ * Use this trigger to free it. */
+ tcm_qla2xxx_free_cmd(cmd);
+ return;
+ }
+ spin_unlock_irqrestore(&cmd->cmd_lock, flags);
+ return;
+
}
static void tcm_qla2xxx_clear_sess_lookup(struct tcm_qla2xxx_lport *,
diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
index 47b9d13..bbfbfd9 100644
--- a/drivers/scsi/scsi_devinfo.c
+++ b/drivers/scsi/scsi_devinfo.c
@@ -205,6 +205,8 @@
{"Intel", "Multi-Flex", NULL, BLIST_NO_RSOC},
{"iRiver", "iFP Mass Driver", NULL, BLIST_NOT_LOCKABLE | BLIST_INQUIRY_36},
{"LASOUND", "CDX7405", "3.10", BLIST_MAX5LUN | BLIST_SINGLELUN},
+ {"Marvell", "Console", NULL, BLIST_SKIP_VPD_PAGES},
+ {"Marvell", "91xx Config", "1.01", BLIST_SKIP_VPD_PAGES},
{"MATSHITA", "PD-1", NULL, BLIST_FORCELUN | BLIST_SINGLELUN},
{"MATSHITA", "DMC-LC5", NULL, BLIST_NOT_LOCKABLE | BLIST_INQUIRY_36},
{"MATSHITA", "DMC-LC40", NULL, BLIST_NOT_LOCKABLE | BLIST_INQUIRY_36},
diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index 4f18a85..00bc7218 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -1272,16 +1272,18 @@
void scsi_remove_target(struct device *dev)
{
struct Scsi_Host *shost = dev_to_shost(dev->parent);
- struct scsi_target *starget;
+ struct scsi_target *starget, *last_target = NULL;
unsigned long flags;
restart:
spin_lock_irqsave(shost->host_lock, flags);
list_for_each_entry(starget, &shost->__targets, siblings) {
- if (starget->state == STARGET_DEL)
+ if (starget->state == STARGET_DEL ||
+ starget == last_target)
continue;
if (starget->dev.parent == dev || &starget->dev == dev) {
kref_get(&starget->reap_ref);
+ last_target = starget;
spin_unlock_irqrestore(shost->host_lock, flags);
__scsi_remove_target(starget);
scsi_target_reap(starget);
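
The last_target pointer is what lets scsi_remove_target() terminate when a racing reap keeps one target from ever reaching STARGET_DEL: the scan restarts from the head after each removal (the host lock is dropped in between), but now skips the entry it just handled. A toy model of the termination argument:

#include <stdio.h>

struct tgt { const char *name; int del; };

static struct tgt targets[] = { { "t0", 0 }, { "t1", 0 } };

/* t1 models a target held by a concurrent reaper, so its state never
 * reaches DEL from this loop's point of view.
 */
static void remove_target(struct tgt *t)
{
	if (t == &targets[0])
		t->del = 1;
}

int main(void)
{
	struct tgt *last = NULL;
	unsigned int i;

restart:
	for (i = 0; i < 2; i++) {
		struct tgt *t = &targets[i];

		if (t->del || t == last)	/* the added guard */
			continue;
		last = t;
		printf("removing %s\n", t->name);
		remove_target(t);
		goto restart;	/* as after reacquiring the host lock */
	}
	return 0;	/* without "t == last" this would spin on t1 forever */
}
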
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index bb669d3..d749da7 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -761,7 +761,7 @@
break;
default:
- ret = BLKPREP_KILL;
+ ret = BLKPREP_INVALID;
goto out;
}
@@ -839,7 +839,7 @@
int ret;
if (sdkp->device->no_write_same)
- return BLKPREP_KILL;
+ return BLKPREP_INVALID;
BUG_ON(bio_offset(bio) || bio_iovec(bio).bv_len != sdp->sector_size);
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 55627d0..292c04e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -42,6 +42,7 @@
#include <scsi/scsi_devinfo.h>
#include <scsi/scsi_dbg.h>
#include <scsi/scsi_transport_fc.h>
+#include <scsi/scsi_transport.h>
/*
* All wire protocol details (storage protocol between the guest and the host)
@@ -477,19 +478,18 @@
struct storvsc_scan_work {
struct work_struct work;
struct Scsi_Host *host;
- uint lun;
+ u8 lun;
+ u8 tgt_id;
};
static void storvsc_device_scan(struct work_struct *work)
{
struct storvsc_scan_work *wrk;
- uint lun;
struct scsi_device *sdev;
wrk = container_of(work, struct storvsc_scan_work, work);
- lun = wrk->lun;
- sdev = scsi_device_lookup(wrk->host, 0, 0, lun);
+ sdev = scsi_device_lookup(wrk->host, 0, wrk->tgt_id, wrk->lun);
if (!sdev)
goto done;
scsi_rescan_device(&sdev->sdev_gendev);
@@ -540,7 +540,7 @@
if (!scsi_host_get(wrk->host))
goto done;
- sdev = scsi_device_lookup(wrk->host, 0, 0, wrk->lun);
+ sdev = scsi_device_lookup(wrk->host, 0, wrk->tgt_id, wrk->lun);
if (sdev) {
scsi_remove_device(sdev);
@@ -940,6 +940,7 @@
wrk->host = host;
wrk->lun = vm_srb->lun;
+ wrk->tgt_id = vm_srb->target_id;
INIT_WORK(&wrk->work, process_err_fn);
schedule_work(&wrk->work);
}
@@ -1770,6 +1771,11 @@
fc_transport_template = fc_attach_transport(&fc_transport_functions);
if (!fc_transport_template)
return -ENODEV;
+
+ /*
+ * Install Hyper-V specific timeout handler.
+ */
+ fc_transport_template->eh_timed_out = storvsc_eh_timed_out;
#endif
ret = vmbus_driver_register(&storvsc_drv);
diff --git a/drivers/sh/pm_runtime.c b/drivers/sh/pm_runtime.c
index 91a00301..a9bac3b 100644
--- a/drivers/sh/pm_runtime.c
+++ b/drivers/sh/pm_runtime.c
@@ -34,7 +34,7 @@
static int __init sh_pm_runtime_init(void)
{
- if (IS_ENABLED(CONFIG_ARCH_SHMOBILE)) {
+ if (IS_ENABLED(CONFIG_OF) && IS_ENABLED(CONFIG_ARCH_SHMOBILE)) {
if (!of_find_compatible_node(NULL, NULL,
"renesas,cpg-mstp-clocks"))
return 0;
diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c
index aebad36..8feac59 100644
--- a/drivers/spi/spi-atmel.c
+++ b/drivers/spi/spi-atmel.c
@@ -1571,6 +1571,7 @@
as->use_cs_gpios = true;
if (atmel_spi_is_v2(as) &&
+ pdev->dev.of_node &&
!of_get_property(pdev->dev.of_node, "cs-gpios", NULL)) {
as->use_cs_gpios = false;
master->num_chipselect = 4;
diff --git a/drivers/spi/spi-bcm2835aux.c b/drivers/spi/spi-bcm2835aux.c
index 7de6f84..ecc73c0 100644
--- a/drivers/spi/spi-bcm2835aux.c
+++ b/drivers/spi/spi-bcm2835aux.c
@@ -73,8 +73,8 @@
/* Bitfields in CNTL1 */
#define BCM2835_AUX_SPI_CNTL1_CSHIGH 0x00000700
-#define BCM2835_AUX_SPI_CNTL1_IDLE 0x00000080
-#define BCM2835_AUX_SPI_CNTL1_TXEMPTY 0x00000040
+#define BCM2835_AUX_SPI_CNTL1_TXEMPTY 0x00000080
+#define BCM2835_AUX_SPI_CNTL1_IDLE 0x00000040
#define BCM2835_AUX_SPI_CNTL1_MSBF_IN 0x00000002
#define BCM2835_AUX_SPI_CNTL1_KEEP_IN 0x00000001
diff --git a/drivers/spi/spi-fsl-espi.c b/drivers/spi/spi-fsl-espi.c
index 7fd6a4c..7cb0c19 100644
--- a/drivers/spi/spi-fsl-espi.c
+++ b/drivers/spi/spi-fsl-espi.c
@@ -84,7 +84,7 @@
/* SPCOM register values */
#define SPCOM_CS(x) ((x) << 30)
#define SPCOM_TRANLEN(x) ((x) << 0)
-#define SPCOM_TRANLEN_MAX 0xFFFF /* Max transaction length */
+#define SPCOM_TRANLEN_MAX 0x10000 /* Max transaction length */
#define AUTOSUSPEND_TIMEOUT 2000
@@ -233,7 +233,7 @@
reinit_completion(&mpc8xxx_spi->done);
/* Set SPCOM[CS] and SPCOM[TRANLEN] field */
- if ((t->len - 1) > SPCOM_TRANLEN_MAX) {
+ if (t->len > SPCOM_TRANLEN_MAX) {
dev_err(mpc8xxx_spi->dev, "Transaction length (%d)"
" beyond the SPCOM[TRANLEN] field\n", t->len);
return -EINVAL;
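
The constant change is the heart of the fix: SPCOM[TRANLEN] is written as len - 1, so a 16-bit field encodes lengths 1 through 0x10000, and it is the length itself, not len - 1, that must be compared against the limit. A toy check of that encoding, with the field width assumed from the macro above:

#include <stdio.h>

#define SPCOM_TRANLEN_MAX 0x10000	/* longest length a 16-bit len-1 field encodes */

int main(void)
{
	unsigned int len;

	for (len = 0xFFFF; len <= 0x10001; len++)
		printf("len %#x -> field %#x, %s\n", len, len - 1,
		       len > SPCOM_TRANLEN_MAX ? "rejected" : "ok");
	return 0;
}
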
diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
index d98c33c..6a4ff27 100644
--- a/drivers/spi/spi-imx.c
+++ b/drivers/spi/spi-imx.c
@@ -929,7 +929,7 @@
tx->sgl, tx->nents, DMA_MEM_TO_DEV,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc_tx)
- goto no_dma;
+ goto tx_nodma;
desc_tx->callback = spi_imx_dma_tx_callback;
desc_tx->callback_param = (void *)spi_imx;
@@ -941,7 +941,7 @@
rx->sgl, rx->nents, DMA_DEV_TO_MEM,
DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
if (!desc_rx)
- goto no_dma;
+ goto rx_nodma;
desc_rx->callback = spi_imx_dma_rx_callback;
desc_rx->callback_param = (void *)spi_imx;
@@ -1008,7 +1008,9 @@
return ret;
-no_dma:
+rx_nodma:
+ dmaengine_terminate_all(master->dma_tx);
+tx_nodma:
pr_warn_once("%s %s: DMA not available, falling back to PIO\n",
dev_driver_string(&master->dev),
dev_name(&master->dev));
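
The relabelled error path is the standard staged unwind: a failure at stage N releases what stages 1..N-1 acquired, which is why rx_nodma must terminate the already-submitted tx descriptor before falling through to tx_nodma. The same shape with the DMA calls swapped for plain allocations:

#include <stdio.h>
#include <stdlib.h>

static int setup(void)
{
	char *tx, *rx;

	tx = malloc(16);	/* stage 1, like preparing the tx descriptor */
	if (!tx)
		goto tx_fail;

	rx = malloc(16);	/* stage 2, like preparing the rx descriptor */
	if (!rx)
		goto rx_fail;

	/* success path would submit both and return */
	free(rx);
	free(tx);
	return 0;

rx_fail:
	free(tx);		/* undo what stage 1 acquired */
tx_fail:
	return -1;
}

int main(void)
{
	return setup() ? 1 : 0;
}
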
diff --git a/drivers/spi/spi-loopback-test.c b/drivers/spi/spi-loopback-test.c
index 894616f6..cf4bb36 100644
--- a/drivers/spi/spi-loopback-test.c
+++ b/drivers/spi/spi-loopback-test.c
@@ -761,6 +761,7 @@
test.iterate_transfer_mask = 1;
/* count number of transfers with tx/rx_buf != NULL */
+ rx_count = tx_count = 0;
for (i = 0; i < test.transfer_count; i++) {
if (test.transfers[i].tx_buf)
tx_count++;
diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
index 7273820..0caa3c8 100644
--- a/drivers/spi/spi-omap2-mcspi.c
+++ b/drivers/spi/spi-omap2-mcspi.c
@@ -1490,6 +1490,8 @@
return status;
disable_pm:
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
free_master:
spi_master_put(master);
@@ -1501,6 +1503,7 @@
struct spi_master *master = platform_get_drvdata(pdev);
struct omap2_mcspi *mcspi = spi_master_get_devdata(master);
+ pm_runtime_dont_use_autosuspend(mcspi->dev);
pm_runtime_put_sync(mcspi->dev);
pm_runtime_disable(&pdev->dev);
diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
index be822f7..aca282d 100644
--- a/drivers/spmi/spmi-pmic-arb.c
+++ b/drivers/spmi/spmi-pmic-arb.c
@@ -10,6 +10,7 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
+#include <linux/bitmap.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/interrupt.h>
@@ -47,9 +48,9 @@
#define SPMI_MAPPING_BIT_IS_1_FLAG(X) (((X) >> 8) & 0x1)
#define SPMI_MAPPING_BIT_IS_1_RESULT(X) (((X) >> 0) & 0xFF)
-#define SPMI_MAPPING_TABLE_LEN 255
#define SPMI_MAPPING_TABLE_TREE_DEPTH 16 /* Maximum of 16-bits */
-#define PPID_TO_CHAN_TABLE_SZ BIT(12) /* PPID is 12bit chan is 1byte*/
+#define PMIC_ARB_MAX_PPID BIT(12) /* PPID is 12bit */
+#define PMIC_ARB_CHAN_VALID BIT(15)
/* Ownership Table */
#define SPMI_OWNERSHIP_TABLE_REG(N) (0x0700 + (4 * (N)))
@@ -85,9 +86,7 @@
};
/* Maximum number of support PMIC peripherals */
-#define PMIC_ARB_MAX_PERIPHS 256
-#define PMIC_ARB_MAX_CHNL 128
-#define PMIC_ARB_PERIPH_ID_VALID (1 << 15)
+#define PMIC_ARB_MAX_PERIPHS 512
#define PMIC_ARB_TIMEOUT_US 100
#define PMIC_ARB_MAX_TRANS_BYTES (8)
@@ -125,18 +124,22 @@
void __iomem *wr_base;
void __iomem *intr;
void __iomem *cnfg;
+ void __iomem *core;
+ resource_size_t core_size;
raw_spinlock_t lock;
u8 channel;
int irq;
u8 ee;
- u8 min_apid;
- u8 max_apid;
- u32 mapping_table[SPMI_MAPPING_TABLE_LEN];
+ u16 min_apid;
+ u16 max_apid;
+ u32 *mapping_table;
+ DECLARE_BITMAP(mapping_table_valid, PMIC_ARB_MAX_PERIPHS);
struct irq_domain *domain;
struct spmi_controller *spmic;
- u16 apid_to_ppid[256];
+ u16 *apid_to_ppid;
const struct pmic_arb_ver_ops *ver_ops;
- u8 *ppid_to_chan;
+ u16 *ppid_to_chan;
+ u16 last_channel;
};
/**
@@ -158,7 +161,8 @@
*/
struct pmic_arb_ver_ops {
/* spmi commands (read_cmd, write_cmd, cmd) functionality */
- u32 (*offset)(struct spmi_pmic_arb_dev *dev, u8 sid, u16 addr);
+ int (*offset)(struct spmi_pmic_arb_dev *dev, u8 sid, u16 addr,
+ u32 *offset);
u32 (*fmt_cmd)(u8 opc, u8 sid, u16 addr, u8 bc);
int (*non_data_cmd)(struct spmi_controller *ctrl, u8 opc, u8 sid);
/* Interrupts controller functionality (offset of PIC registers) */
@@ -212,7 +216,14 @@
struct spmi_pmic_arb_dev *dev = spmi_controller_get_drvdata(ctrl);
u32 status = 0;
u32 timeout = PMIC_ARB_TIMEOUT_US;
- u32 offset = dev->ver_ops->offset(dev, sid, addr) + PMIC_ARB_STATUS;
+ u32 offset;
+ int rc;
+
+ rc = dev->ver_ops->offset(dev, sid, addr, &offset);
+ if (rc)
+ return rc;
+
+ offset += PMIC_ARB_STATUS;
while (timeout--) {
status = readl_relaxed(base + offset);
@@ -257,7 +268,11 @@
unsigned long flags;
u32 cmd;
int rc;
- u32 offset = pmic_arb->ver_ops->offset(pmic_arb, sid, 0);
+ u32 offset;
+
+ rc = pmic_arb->ver_ops->offset(pmic_arb, sid, 0, &offset);
+ if (rc)
+ return rc;
cmd = ((opc | 0x40) << 27) | ((sid & 0xf) << 20);
@@ -297,7 +312,11 @@
u8 bc = len - 1;
u32 cmd;
int rc;
- u32 offset = pmic_arb->ver_ops->offset(pmic_arb, sid, addr);
+ u32 offset;
+
+ rc = pmic_arb->ver_ops->offset(pmic_arb, sid, addr, &offset);
+ if (rc)
+ return rc;
if (bc >= PMIC_ARB_MAX_TRANS_BYTES) {
dev_err(&ctrl->dev,
@@ -344,7 +363,11 @@
u8 bc = len - 1;
u32 cmd;
int rc;
- u32 offset = pmic_arb->ver_ops->offset(pmic_arb, sid, addr);
+ u32 offset;
+
+ rc = pmic_arb->ver_ops->offset(pmic_arb, sid, addr, &offset);
+ if (rc)
+ return rc;
if (bc >= PMIC_ARB_MAX_TRANS_BYTES) {
dev_err(&ctrl->dev,
@@ -614,6 +637,10 @@
u32 data;
for (i = 0; i < SPMI_MAPPING_TABLE_TREE_DEPTH; ++i) {
+ if (!test_and_set_bit(index, pa->mapping_table_valid))
+ mapping_table[index] = readl_relaxed(pa->cnfg +
+ SPMI_MAPPING_TABLE_REG(index));
+
data = mapping_table[index];
if (ppid & (1 << SPMI_MAPPING_BIT_INDEX(data))) {
@@ -701,18 +728,61 @@
}
/* v1 offset per ee */
-static u32 pmic_arb_offset_v1(struct spmi_pmic_arb_dev *pa, u8 sid, u16 addr)
+static int
+pmic_arb_offset_v1(struct spmi_pmic_arb_dev *pa, u8 sid, u16 addr, u32 *offset)
{
- return 0x800 + 0x80 * pa->channel;
+ *offset = 0x800 + 0x80 * pa->channel;
+ return 0;
}
+static u16 pmic_arb_find_chan(struct spmi_pmic_arb_dev *pa, u16 ppid)
+{
+ u32 regval, offset;
+ u16 chan;
+ u16 id;
+
+ /*
+ * PMIC_ARB_REG_CHNL is a table in HW mapping channel to ppid.
+ * ppid_to_chan is an in-memory invert of that table.
+ */
+ for (chan = pa->last_channel; ; chan++) {
+ offset = PMIC_ARB_REG_CHNL(chan);
+ if (offset >= pa->core_size)
+ break;
+
+ regval = readl_relaxed(pa->core + offset);
+ if (!regval)
+ continue;
+
+ id = (regval >> 8) & PMIC_ARB_PPID_MASK;
+ pa->ppid_to_chan[id] = chan | PMIC_ARB_CHAN_VALID;
+ if (id == ppid) {
+ chan |= PMIC_ARB_CHAN_VALID;
+ break;
+ }
+ }
+ pa->last_channel = chan & ~PMIC_ARB_CHAN_VALID;
+
+ return chan;
+}
+
+
/* v2 offset per ppid (chan) and per ee */
-static u32 pmic_arb_offset_v2(struct spmi_pmic_arb_dev *pa, u8 sid, u16 addr)
+static int
+pmic_arb_offset_v2(struct spmi_pmic_arb_dev *pa, u8 sid, u16 addr, u32 *offset)
{
u16 ppid = (sid << 8) | (addr >> 8);
- u8 chan = pa->ppid_to_chan[ppid];
+ u16 chan;
- return 0x1000 * pa->ee + 0x8000 * chan;
+ chan = pa->ppid_to_chan[ppid];
+ if (!(chan & PMIC_ARB_CHAN_VALID))
+ chan = pmic_arb_find_chan(pa, ppid);
+ if (!(chan & PMIC_ARB_CHAN_VALID))
+ return -ENODEV;
+ chan &= ~PMIC_ARB_CHAN_VALID;
+
+ *offset = 0x1000 * pa->ee + 0x8000 * chan;
+ return 0;
}
static u32 pmic_arb_fmt_cmd_v1(u8 opc, u8 sid, u16 addr, u8 bc)
@@ -797,7 +867,7 @@
struct resource *res;
void __iomem *core;
u32 channel, ee, hw_ver;
- int err, i;
+ int err;
bool is_v1;
ctrl = spmi_controller_alloc(&pdev->dev, sizeof(*pa));
@@ -808,6 +878,7 @@
pa->spmic = ctrl;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "core");
+ pa->core_size = resource_size(res);
core = devm_ioremap_resource(&ctrl->dev, res);
if (IS_ERR(core)) {
err = PTR_ERR(core);
@@ -825,10 +896,7 @@
pa->wr_base = core;
pa->rd_base = core;
} else {
- u8 chan;
- u16 ppid;
- u32 regval;
-
+ pa->core = core;
pa->ver_ops = &pmic_arb_v2;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
@@ -847,24 +915,14 @@
goto err_put_ctrl;
}
- pa->ppid_to_chan = devm_kzalloc(&ctrl->dev,
- PPID_TO_CHAN_TABLE_SZ, GFP_KERNEL);
+ pa->ppid_to_chan = devm_kcalloc(&ctrl->dev,
+ PMIC_ARB_MAX_PPID,
+ sizeof(*pa->ppid_to_chan),
+ GFP_KERNEL);
if (!pa->ppid_to_chan) {
err = -ENOMEM;
goto err_put_ctrl;
}
- /*
- * PMIC_ARB_REG_CHNL is a table in HW mapping channel to ppid.
- * ppid_to_chan is an in-memory invert of that table.
- */
- for (chan = 0; chan < PMIC_ARB_MAX_CHNL; ++chan) {
- regval = readl_relaxed(core + PMIC_ARB_REG_CHNL(chan));
- if (!regval)
- continue;
-
- ppid = (regval >> 8) & 0xFFF;
- pa->ppid_to_chan[ppid] = chan;
- }
}
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "intr");
@@ -915,9 +973,20 @@
pa->ee = ee;
- for (i = 0; i < ARRAY_SIZE(pa->mapping_table); ++i)
- pa->mapping_table[i] = readl_relaxed(
- pa->cnfg + SPMI_MAPPING_TABLE_REG(i));
+ pa->apid_to_ppid = devm_kcalloc(&ctrl->dev, PMIC_ARB_MAX_PERIPHS,
+ sizeof(*pa->apid_to_ppid),
+ GFP_KERNEL);
+ if (!pa->apid_to_ppid) {
+ err = -ENOMEM;
+ goto err_put_ctrl;
+ }
+
+ pa->mapping_table = devm_kcalloc(&ctrl->dev, PMIC_ARB_MAX_PERIPHS - 1,
+ sizeof(*pa->mapping_table), GFP_KERNEL);
+ if (!pa->mapping_table) {
+ err = -ENOMEM;
+ goto err_put_ctrl;
+ }
/* Initialize max_apid/min_apid to the opposite bounds, during
* the irq domain translation, we are sure to update these */
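
The v2 address path now fills ppid_to_chan lazily: entries start out zero, a channel found in the hardware table is cached with PMIC_ARB_CHAN_VALID set in bit 15, and last_channel records how far the scan got so later misses resume instead of rescanning. A compressed, self-contained model of that lookup, with the register reads faked by an array:

#include <stdio.h>

#define CHAN_VALID	(1u << 15)
#define MAX_PPID	4096

/* Fake hardware channel table: hw_tbl[chan] holds a ppid, 0 = unused. */
static const unsigned short hw_tbl[] = { 0, 0x123, 0, 0x456 };
#define HW_CHANNELS	(sizeof(hw_tbl) / sizeof(hw_tbl[0]))

static unsigned short ppid_to_chan[MAX_PPID];
static unsigned short last_channel;

static unsigned short find_chan(unsigned short ppid)
{
	unsigned short chan;

	for (chan = last_channel; chan < HW_CHANNELS; chan++) {
		unsigned short id = hw_tbl[chan];

		if (!id)
			continue;
		ppid_to_chan[id] = chan | CHAN_VALID;	/* cache every hit */
		if (id == ppid) {
			chan |= CHAN_VALID;
			break;
		}
	}
	last_channel = chan & ~CHAN_VALID;
	return chan;
}

static int lookup(unsigned short ppid, unsigned short *out)
{
	unsigned short chan = ppid_to_chan[ppid];

	if (!(chan & CHAN_VALID))
		chan = find_chan(ppid);
	if (!(chan & CHAN_VALID))
		return -1;		/* -ENODEV in the driver */
	*out = chan & ~CHAN_VALID;
	return 0;
}

int main(void)
{
	unsigned short chan;

	if (!lookup(0x456, &chan))
		printf("ppid 0x456 -> chan %u\n", chan);	/* prints 3 */
	return 0;
}
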
diff --git a/drivers/staging/android/sync.c b/drivers/staging/android/sync.c
index 2ab0c20..3a8f210 100644
--- a/drivers/staging/android/sync.c
+++ b/drivers/staging/android/sync.c
@@ -424,7 +424,7 @@
if (!status)
return POLLIN;
- else if (status < 0)
+ if (status < 0)
return POLLERR;
return 0;
}
diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
index d57fade..d1cf6a1 100644
--- a/drivers/staging/comedi/comedi_fops.c
+++ b/drivers/staging/comedi/comedi_fops.c
@@ -686,13 +686,6 @@
return comedi_is_runflags_running(runflags);
}
-static bool comedi_is_subdevice_idle(struct comedi_subdevice *s)
-{
- unsigned runflags = comedi_get_subdevice_runflags(s);
-
- return !(runflags & COMEDI_SRF_BUSY_MASK);
-}
-
bool comedi_can_auto_free_spriv(struct comedi_subdevice *s)
{
unsigned runflags = __comedi_get_subdevice_runflags(s);
@@ -1111,6 +1104,9 @@
struct comedi_bufinfo bi;
struct comedi_subdevice *s;
struct comedi_async *async;
+ unsigned int runflags;
+ int retval = 0;
+ bool become_nonbusy = false;
if (copy_from_user(&bi, arg, sizeof(bi)))
return -EFAULT;
@@ -1122,48 +1118,56 @@
async = s->async;
- if (!async) {
- dev_dbg(dev->class_dev,
- "subdevice does not have async capability\n");
- bi.buf_write_ptr = 0;
- bi.buf_read_ptr = 0;
- bi.buf_write_count = 0;
- bi.buf_read_count = 0;
- bi.bytes_read = 0;
- bi.bytes_written = 0;
- goto copyback;
- }
- if (!s->busy) {
- bi.bytes_read = 0;
- bi.bytes_written = 0;
- goto copyback_position;
- }
- if (s->busy != file)
- return -EACCES;
+ if (!async || s->busy != file)
+ return -EINVAL;
- if (bi.bytes_read && !(async->cmd.flags & CMDF_WRITE)) {
- bi.bytes_read = comedi_buf_read_alloc(s, bi.bytes_read);
- comedi_buf_read_free(s, bi.bytes_read);
-
- if (comedi_is_subdevice_idle(s) &&
- comedi_buf_read_n_available(s) == 0) {
- do_become_nonbusy(dev, s);
+ runflags = comedi_get_subdevice_runflags(s);
+ if (!(async->cmd.flags & CMDF_WRITE)) {
+ /* command was set up in "read" direction */
+ if (bi.bytes_read) {
+ comedi_buf_read_alloc(s, bi.bytes_read);
+ bi.bytes_read = comedi_buf_read_free(s, bi.bytes_read);
}
+ /*
+ * If nothing left to read, and command has stopped, and
+ * {"read" position not updated or command stopped normally},
+ * then become non-busy.
+ */
+ if (comedi_buf_read_n_available(s) == 0 &&
+ !comedi_is_runflags_running(runflags) &&
+ (bi.bytes_read == 0 ||
+ !comedi_is_runflags_in_error(runflags))) {
+ become_nonbusy = true;
+ if (comedi_is_runflags_in_error(runflags))
+ retval = -EPIPE;
+ }
+ bi.bytes_written = 0;
+ } else {
+ /* command was set up in "write" direction */
+ if (!comedi_is_runflags_running(runflags)) {
+ bi.bytes_written = 0;
+ become_nonbusy = true;
+ if (comedi_is_runflags_in_error(runflags))
+ retval = -EPIPE;
+ } else if (bi.bytes_written) {
+ comedi_buf_write_alloc(s, bi.bytes_written);
+ bi.bytes_written =
+ comedi_buf_write_free(s, bi.bytes_written);
+ }
+ bi.bytes_read = 0;
}
- if (bi.bytes_written && (async->cmd.flags & CMDF_WRITE)) {
- bi.bytes_written =
- comedi_buf_write_alloc(s, bi.bytes_written);
- comedi_buf_write_free(s, bi.bytes_written);
- }
-
-copyback_position:
bi.buf_write_count = async->buf_write_count;
bi.buf_write_ptr = async->buf_write_ptr;
bi.buf_read_count = async->buf_read_count;
bi.buf_read_ptr = async->buf_read_ptr;
-copyback:
+ if (become_nonbusy)
+ do_become_nonbusy(dev, s);
+
+ if (retval)
+ return retval;
+
if (copy_to_user(arg, &bi, sizeof(bi)))
return -EFAULT;
diff --git a/drivers/staging/comedi/drivers/addi_apci_3xxx.c b/drivers/staging/comedi/drivers/addi_apci_3xxx.c
index 995096c..b6af3eb 100644
--- a/drivers/staging/comedi/drivers/addi_apci_3xxx.c
+++ b/drivers/staging/comedi/drivers/addi_apci_3xxx.c
@@ -496,7 +496,7 @@
switch (flags & CMDF_ROUND_MASK) {
case CMDF_ROUND_NEAREST:
default:
- timer = (*ns + base / 2) / base;
+ timer = DIV_ROUND_CLOSEST(*ns, base);
break;
case CMDF_ROUND_DOWN:
timer = *ns / base;
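
For the unsigned operands used here, DIV_ROUND_CLOSEST(x, d) expands to exactly the (x + d/2)/d idiom it replaces, so this and the matching conversions in the comedi drivers below are behavior-preserving. A quick self-check — the macro is re-declared locally, and the kernel's version additionally copes with negative operands:

#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d) (((x) + ((d) / 2)) / (d))

int main(void)
{
	unsigned int ns, base = 8;	/* e.g. an 8 ns timer base */

	for (ns = 0; ns < 64; ns++)
		if (DIV_ROUND_CLOSEST(ns, base) != (ns + base / 2) / base)
			printf("mismatch at %u\n", ns);	/* never fires */

	printf("DIV_ROUND_CLOSEST(20, 8) = %u\n", DIV_ROUND_CLOSEST(20, 8));
	return 0;
}
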
diff --git a/drivers/staging/comedi/drivers/amplc_pci230.c b/drivers/staging/comedi/drivers/amplc_pci230.c
index 4b39f69..d1fe88a 100644
--- a/drivers/staging/comedi/drivers/amplc_pci230.c
+++ b/drivers/staging/comedi/drivers/amplc_pci230.c
@@ -637,7 +637,7 @@
switch (flags & CMDF_ROUND_MASK) {
default:
case CMDF_ROUND_NEAREST:
- div += (rem + (timebase / 2)) / timebase;
+ div += DIV_ROUND_CLOSEST(rem, timebase);
break;
case CMDF_ROUND_DOWN:
break;
diff --git a/drivers/staging/comedi/drivers/cb_pcidas64.c b/drivers/staging/comedi/drivers/cb_pcidas64.c
index d33b8fe..e918d42 100644
--- a/drivers/staging/comedi/drivers/cb_pcidas64.c
+++ b/drivers/staging/comedi/drivers/cb_pcidas64.c
@@ -1376,7 +1376,7 @@
num_entries = fifo->max_segment_length;
/* 1 == 256 entries, 2 == 512 entries, etc */
- num_increments = (num_entries + increment_size / 2) / increment_size;
+ num_increments = DIV_ROUND_CLOSEST(num_entries, increment_size);
bits = (~(num_increments - 1)) & fifo->fifo_size_reg_mask;
devpriv->fifo_size_bits &= ~fifo->fifo_size_reg_mask;
@@ -2004,7 +2004,7 @@
break;
case CMDF_ROUND_NEAREST:
default:
- divisor = (ns + TIMER_BASE / 2) / TIMER_BASE;
+ divisor = DIV_ROUND_CLOSEST(ns, TIMER_BASE);
break;
}
return divisor;
diff --git a/drivers/staging/comedi/drivers/comedi_isadma.c b/drivers/staging/comedi/drivers/comedi_isadma.c
index 6ba71d1..68ef9b1 100644
--- a/drivers/staging/comedi/drivers/comedi_isadma.c
+++ b/drivers/staging/comedi/drivers/comedi_isadma.c
@@ -132,8 +132,7 @@
result = result1;
if (result >= desc->size || result == 0)
return 0;
- else
- return desc->size - result;
+ return desc->size - result;
}
EXPORT_SYMBOL_GPL(comedi_isadma_poll);
diff --git a/drivers/staging/comedi/drivers/dt282x.c b/drivers/staging/comedi/drivers/dt282x.c
index 5a536a0..6c26e09 100644
--- a/drivers/staging/comedi/drivers/dt282x.c
+++ b/drivers/staging/comedi/drivers/dt282x.c
@@ -371,7 +371,7 @@
switch (flags & CMDF_ROUND_MASK) {
case CMDF_ROUND_NEAREST:
default:
- divider = (*ns + base / 2) / base;
+ divider = DIV_ROUND_CLOSEST(*ns, base);
break;
case CMDF_ROUND_DOWN:
divider = (*ns) / base;
diff --git a/drivers/staging/comedi/drivers/dt3000.c b/drivers/staging/comedi/drivers/dt3000.c
index ab7a332..19e0b7b 100644
--- a/drivers/staging/comedi/drivers/dt3000.c
+++ b/drivers/staging/comedi/drivers/dt3000.c
@@ -361,7 +361,7 @@
switch (flags & CMDF_ROUND_MASK) {
case CMDF_ROUND_NEAREST:
default:
- divider = (*nanosec + base / 2) / base;
+ divider = DIV_ROUND_CLOSEST(*nanosec, base);
break;
case CMDF_ROUND_DOWN:
divider = (*nanosec) / base;
diff --git a/drivers/staging/comedi/drivers/ni_pcidio.c b/drivers/staging/comedi/drivers/ni_pcidio.c
index ac79099..67b9f22 100644
--- a/drivers/staging/comedi/drivers/ni_pcidio.c
+++ b/drivers/staging/comedi/drivers/ni_pcidio.c
@@ -525,7 +525,7 @@
switch (flags & CMDF_ROUND_MASK) {
case CMDF_ROUND_NEAREST:
default:
- divider = (*nanosec + base / 2) / base;
+ divider = DIV_ROUND_CLOSEST(*nanosec, base);
break;
case CMDF_ROUND_DOWN:
divider = (*nanosec) / base;
diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c
index a1f624f..d38673c 100644
--- a/drivers/staging/emxx_udc/emxx_udc.c
+++ b/drivers/staging/emxx_udc/emxx_udc.c
@@ -21,7 +21,6 @@
#include <linux/ioport.h>
#include <linux/slab.h>
#include <linux/errno.h>
-#include <linux/init.h>
#include <linux/list.h>
#include <linux/interrupt.h>
#include <linux/proc_fs.h>
diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c
index 3bcbabf..6c9cdd2 100644
--- a/drivers/staging/fbtft/fbtft-core.c
+++ b/drivers/staging/fbtft/fbtft-core.c
@@ -130,7 +130,8 @@
while (gpio->name[0]) {
flags = FBTFT_GPIO_NO_MATCH;
/* if driver provides match function, try it first,
- if no match use our own */
+ * if no match use our own
+ */
if (par->fbtftops.request_gpios_match)
flags = par->fbtftops.request_gpios_match(par, gpio);
if (flags == FBTFT_GPIO_NO_MATCH)
@@ -518,8 +519,7 @@
"%s: count=%zd, ppos=%llu\n", __func__, count, *ppos);
res = fb_sys_write(info, buf, count, ppos);
- /* TODO: only mark changed area
- update all for now */
+ /* TODO: only mark changed area update all for now */
par->fbtftops.mkdirty(info, -1, 0);
return res;
diff --git a/drivers/staging/fsl-mc/bus/Kconfig b/drivers/staging/fsl-mc/bus/Kconfig
index c498ac6..1f95933 100644
--- a/drivers/staging/fsl-mc/bus/Kconfig
+++ b/drivers/staging/fsl-mc/bus/Kconfig
@@ -7,7 +7,7 @@
#
config FSL_MC_BUS
- tristate "Freescale Management Complex (MC) bus driver"
+ bool "Freescale Management Complex (MC) bus driver"
depends on OF && ARM64
select GENERIC_MSI_IRQ_DOMAIN
help
diff --git a/drivers/staging/fsl-mc/bus/mc-allocator.c b/drivers/staging/fsl-mc/bus/mc-allocator.c
index c5fa628..86f8543 100644
--- a/drivers/staging/fsl-mc/bus/mc-allocator.c
+++ b/drivers/staging/fsl-mc/bus/mc-allocator.c
@@ -756,7 +756,7 @@
return fsl_mc_driver_register(&fsl_mc_allocator_driver);
}
-void __exit fsl_mc_allocator_driver_exit(void)
+void fsl_mc_allocator_driver_exit(void)
{
fsl_mc_driver_unregister(&fsl_mc_allocator_driver);
}
diff --git a/drivers/staging/fsl-mc/bus/mc-bus.c b/drivers/staging/fsl-mc/bus/mc-bus.c
index 9317561..5958e0f 100644
--- a/drivers/staging/fsl-mc/bus/mc-bus.c
+++ b/drivers/staging/fsl-mc/bus/mc-bus.c
@@ -248,8 +248,7 @@
fsl_mc_get_root_dprc(dev, &root_dprc_dev);
if (!root_dprc_dev)
return false;
- else
- return dev == root_dprc_dev;
+ return dev == root_dprc_dev;
}
static int get_dprc_icid(struct fsl_mc_io *mc_io,
@@ -795,7 +794,6 @@
static struct platform_driver fsl_mc_bus_driver = {
.driver = {
.name = "fsl_mc_bus",
- .owner = THIS_MODULE,
.pm = NULL,
.of_match_table = fsl_mc_bus_match_table,
},
diff --git a/drivers/staging/fsl-mc/include/mc-private.h b/drivers/staging/fsl-mc/include/mc-private.h
index be72a44..ee5f1d2 100644
--- a/drivers/staging/fsl-mc/include/mc-private.h
+++ b/drivers/staging/fsl-mc/include/mc-private.h
@@ -123,7 +123,7 @@
int __init fsl_mc_allocator_driver_init(void);
-void __exit fsl_mc_allocator_driver_exit(void);
+void fsl_mc_allocator_driver_exit(void);
int __must_check fsl_mc_resource_allocate(struct fsl_mc_bus *mc_bus,
enum fsl_mc_pool_type pool_type,
diff --git a/drivers/staging/gdm72xx/gdm_qos.c b/drivers/staging/gdm72xx/gdm_qos.c
index 8c99f91..9fc476f 100644
--- a/drivers/staging/gdm72xx/gdm_qos.c
+++ b/drivers/staging/gdm72xx/gdm_qos.c
@@ -261,7 +261,7 @@
struct list_head send_list;
int ret = 0;
- tcph = (struct tcphdr *)iph + iph->ihl*4;
+ tcph = (struct tcphdr *)iph + iph->ihl * 4;
if (ethh->h_proto == cpu_to_be16(ETH_P_IP)) {
if (qcb->qos_list_cnt && !qos_free_list.cnt) {
@@ -342,17 +342,17 @@
if (sub_cmd_evt == QOS_REPORT) {
spin_lock_irqsave(&qcb->qos_lock, flags);
for (i = 0; i < qcb->qos_list_cnt; i++) {
- sfid = ((buf[(i*5) + 6] << 24) & 0xff000000);
- sfid += ((buf[(i*5) + 7] << 16) & 0xff0000);
- sfid += ((buf[(i*5) + 8] << 8) & 0xff00);
- sfid += (buf[(i*5) + 9]);
+ sfid = ((buf[(i * 5) + 6] << 24) & 0xff000000);
+ sfid += ((buf[(i * 5) + 7] << 16) & 0xff0000);
+ sfid += ((buf[(i * 5) + 8] << 8) & 0xff00);
+ sfid += (buf[(i * 5) + 9]);
index = get_csr(qcb, sfid, 0);
if (index == -1) {
spin_unlock_irqrestore(&qcb->qos_lock, flags);
netdev_err(nic->netdev, "QoS ERROR: No SF\n");
return;
}
- qcb->csr[index].qos_buf_count = buf[(i*5) + 10];
+ qcb->csr[index].qos_buf_count = buf[(i * 5) + 10];
}
extract_qos_list(nic, &send_list);
diff --git a/drivers/staging/gdm72xx/gdm_sdio.c b/drivers/staging/gdm72xx/gdm_sdio.c
index 1f5a087..9449418 100644
--- a/drivers/staging/gdm72xx/gdm_sdio.c
+++ b/drivers/staging/gdm72xx/gdm_sdio.c
@@ -33,10 +33,10 @@
#define SDU_TX_BUF_SIZE 2048
#define TX_BUF_SIZE 2048
#define TX_CHUNK_SIZE (2048 - TYPE_A_HEADER_SIZE)
-#define RX_BUF_SIZE (25*1024)
+#define RX_BUF_SIZE (25 * 1024)
#define TX_HZ 2000
-#define TX_INTERVAL (NSEC_PER_SEC/TX_HZ)
+#define TX_INTERVAL (NSEC_PER_SEC / TX_HZ)
static struct sdio_tx *alloc_tx_struct(struct tx_cxt *tx)
{
diff --git a/drivers/staging/gdm72xx/gdm_usb.c b/drivers/staging/gdm72xx/gdm_usb.c
index 7f035b1..f81129d 100644
--- a/drivers/staging/gdm72xx/gdm_usb.c
+++ b/drivers/staging/gdm72xx/gdm_usb.c
@@ -29,7 +29,7 @@
#define TX_BUF_SIZE 2048
#if defined(CONFIG_WIMAX_GDM72XX_WIMAX2)
-#define RX_BUF_SIZE (128*1024) /* For packet aggregation */
+#define RX_BUF_SIZE (128 * 1024) /* For packet aggregation */
#else
#define RX_BUF_SIZE 2048
#endif
diff --git a/drivers/staging/gdm72xx/gdm_wimax.c b/drivers/staging/gdm72xx/gdm_wimax.c
index 1b3da2b..2ee6a39 100644
--- a/drivers/staging/gdm72xx/gdm_wimax.c
+++ b/drivers/staging/gdm72xx/gdm_wimax.c
@@ -26,11 +26,11 @@
#include "netlink_k.h"
#define gdm_wimax_send(n, d, l) \
- (n->phy_dev->send_func)(n->phy_dev->priv_dev, d, l, NULL, NULL)
+ n->phy_dev->send_func(n->phy_dev->priv_dev, d, l, NULL, NULL)
#define gdm_wimax_send_with_cb(n, d, l, c, b) \
- (n->phy_dev->send_func)(n->phy_dev->priv_dev, d, l, c, b)
+ n->phy_dev->send_func(n->phy_dev->priv_dev, d, l, c, b)
#define gdm_wimax_rcv_with_cb(n, c, b) \
- (n->phy_dev->rcv_func)(n->phy_dev->priv_dev, c, b)
+ n->phy_dev->rcv_func(n->phy_dev->priv_dev, c, b)
#define EVT_MAX_SIZE 2048
@@ -98,21 +98,14 @@
return e;
}
-static void put_event_entry(struct evt_entry *e)
-{
- BUG_ON(!e);
-
- list_add_tail(&e->list, &wm_event.freeq);
-}
-
static void gdm_wimax_event_rcv(struct net_device *dev, u16 type, void *msg,
int len)
{
struct nic *nic = netdev_priv(dev);
u8 *buf = msg;
- u16 hci_cmd = (buf[0]<<8) | buf[1];
- u16 hci_len = (buf[2]<<8) | buf[3];
+ u16 hci_cmd = (buf[0] << 8) | buf[1];
+ u16 hci_len = (buf[2] << 8) | buf[3];
netdev_dbg(dev, "H=>D: 0x%04x(%d)\n", hci_cmd, hci_len);
@@ -137,7 +130,7 @@
spin_lock_irqsave(&wm_event.evt_lock, flags);
list_del(&e->list);
- put_event_entry(e);
+ list_add_tail(&e->list, &wm_event.freeq);
}
spin_unlock_irqrestore(&wm_event.evt_lock, flags);
@@ -193,8 +186,8 @@
struct evt_entry *e;
unsigned long flags;
- u16 hci_cmd = ((u8)buf[0]<<8) | (u8)buf[1];
- u16 hci_len = ((u8)buf[2]<<8) | (u8)buf[3];
+ u16 hci_cmd = ((u8)buf[0] << 8) | (u8)buf[1];
+ u16 hci_len = ((u8)buf[2] << 8) | (u8)buf[3];
netdev_dbg(dev, "D=>H: 0x%04x(%d)\n", hci_cmd, hci_len);
@@ -328,7 +321,8 @@
hci->length = cpu_to_be16(sizeof(up_down));
hci->data[0] = up_down;
- gdm_wimax_event_send(dev, (char *)hci, HCI_HEADER_SIZE+sizeof(up_down));
+ gdm_wimax_event_send(dev, (char *)hci, HCI_HEADER_SIZE +
+ sizeof(up_down));
}
static int gdm_wimax_open(struct net_device *dev)
@@ -512,7 +506,7 @@
hci->cmd_evt = cpu_to_be16(WIMAX_GET_INFO);
hci->data[len++] = TLV_T(T_MAC_ADDRESS);
hci->length = cpu_to_be16(len);
- gdm_wimax_send(nic, hci, HCI_HEADER_SIZE+len);
+ gdm_wimax_send(nic, hci, HCI_HEADER_SIZE + len);
val = T_CAPABILITY_WIMAX | T_CAPABILITY_MULTI_CS;
#if defined(CONFIG_WIMAX_GDM72XX_QOS)
@@ -531,7 +525,7 @@
memcpy(&hci->data[len], &val_be32, TLV_L(T_CAPABILITY));
len += TLV_L(T_CAPABILITY);
hci->length = cpu_to_be16(len);
- gdm_wimax_send(nic, hci, HCI_HEADER_SIZE+len);
+ gdm_wimax_send(nic, hci, HCI_HEADER_SIZE + len);
netdev_info(dev, "GDM WiMax Set CAPABILITY: 0x%08X\n", val);
}
@@ -544,10 +538,10 @@
*T = buf[0];
if (buf[1] == 0x82) {
*L = be16_to_cpu(__U82U16(&buf[2]));
- next_pos = 1/*type*/+3/*len*/;
+ next_pos = 1/*type*/ + 3/*len*/;
} else {
*L = buf[1];
- next_pos = 1/*type*/+1/*len*/;
+ next_pos = 1/*type*/ + 1/*len*/;
}
*V = &buf[next_pos];
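The gdm_wimax_send macro change above is purely cosmetic: in C, a call through a function pointer needs neither an explicit dereference nor extra parentheses around the pointer. A minimal userspace sketch (illustrative names, not part of the patch):

    #include <stdio.h>

    static int send_func(int len)
    {
        return len;
    }

    int main(void)
    {
        int (*fp)(int) = send_func;

        /* all three calls are equivalent, which is why the macros
         * can drop the parentheses without changing behaviour */
        printf("%d %d %d\n", fp(1), (*fp)(2), (fp)(3));
        return 0;
    }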
diff --git a/drivers/staging/gdm72xx/hci.h b/drivers/staging/gdm72xx/hci.h
index ab903d4..b40d5c3 100644
--- a/drivers/staging/gdm72xx/hci.h
+++ b/drivers/staging/gdm72xx/hci.h
@@ -17,7 +17,7 @@
#define HCI_HEADER_SIZE 4
#define HCI_VALUE_OFFS (HCI_HEADER_SIZE)
#define HCI_MAX_PACKET 2048
-#define HCI_MAX_PARAM (HCI_MAX_PACKET-HCI_HEADER_SIZE)
+#define HCI_MAX_PARAM (HCI_MAX_PACKET - HCI_HEADER_SIZE)
#define HCI_MAX_TLV 32
/* CMD-EVT */
diff --git a/drivers/staging/gdm72xx/netlink_k.c b/drivers/staging/gdm72xx/netlink_k.c
index cf0b47c..783770b 100644
--- a/drivers/staging/gdm72xx/netlink_k.c
+++ b/drivers/staging/gdm72xx/netlink_k.c
@@ -29,8 +29,8 @@
#define ND_NLMSG_SPACE(len) (nlmsg_total_size(len) + ND_IFINDEX_LEN)
#define ND_NLMSG_DATA(nlh) \
((void *)((char *)nlmsg_data(nlh) + ND_IFINDEX_LEN))
-#define ND_NLMSG_S_LEN(len) (len+ND_IFINDEX_LEN)
-#define ND_NLMSG_R_LEN(nlh) (nlh->nlmsg_len-ND_IFINDEX_LEN)
+#define ND_NLMSG_S_LEN(len) (len + ND_IFINDEX_LEN)
+#define ND_NLMSG_R_LEN(nlh) (nlh->nlmsg_len - ND_IFINDEX_LEN)
#define ND_NLMSG_IFIDX(nlh) nlmsg_data(nlh)
#define ND_MAX_MSG_LEN 8096
@@ -143,7 +143,7 @@
NETLINK_CB(skb).portid = 0;
NETLINK_CB(skb).dst_group = 0;
- ret = netlink_broadcast(sock, skb, 0, group+1, GFP_ATOMIC);
+ ret = netlink_broadcast(sock, skb, 0, group + 1, GFP_ATOMIC);
if (!ret)
return len;
diff --git a/drivers/staging/gdm72xx/sdio_boot.c b/drivers/staging/gdm72xx/sdio_boot.c
index ba94b5f..5e9b38f 100644
--- a/drivers/staging/gdm72xx/sdio_boot.c
+++ b/drivers/staging/gdm72xx/sdio_boot.c
@@ -96,7 +96,7 @@
buf[1] = (len >> 8) & 0xff;
buf[2] = (len >> 16) & 0xff;
- memcpy(buf+TYPE_A_HEADER_SIZE, firm->data + pos, len);
+ memcpy(buf + TYPE_A_HEADER_SIZE, firm->data + pos, len);
ret = sdio_memcpy_toio(func, 0, buf, len + TYPE_A_HEADER_SIZE);
if (ret < 0) {
dev_err(&func->dev,
diff --git a/drivers/staging/gdm72xx/usb_boot.c b/drivers/staging/gdm72xx/usb_boot.c
index 3082987..99a5c07 100644
--- a/drivers/staging/gdm72xx/usb_boot.c
+++ b/drivers/staging/gdm72xx/usb_boot.c
@@ -292,8 +292,8 @@
return -ENOMEM;
}
- strcpy(buf+pad_size, type_string);
- ret = gdm_wibro_send(usbdev, buf, strlen(type_string)+pad_size);
+ strcpy(buf + pad_size, type_string);
+ ret = gdm_wibro_send(usbdev, buf, strlen(type_string) + pad_size);
if (ret < 0)
goto out;
@@ -310,8 +310,8 @@
else
len = img_len; /* the last chunk of data */
- memcpy(buf+pad_size, firm->data + pos, len);
- ret = gdm_wibro_send(usbdev, buf, len+pad_size);
+ memcpy(buf + pad_size, firm->data + pos, len);
+ ret = gdm_wibro_send(usbdev, buf, len + pad_size);
if (ret < 0)
goto out;
@@ -319,7 +319,7 @@
img_len -= DOWNLOAD_CHUCK;
pos += DOWNLOAD_CHUCK;
- ret = em_wait_ack(usbdev, ((len+pad_size) % 512 == 0));
+ ret = em_wait_ack(usbdev, ((len + pad_size) % 512 == 0));
if (ret < 0)
goto out;
}
diff --git a/drivers/staging/goldfish/goldfish_audio.c b/drivers/staging/goldfish/goldfish_audio.c
index 891dfaa..364fdcd 100644
--- a/drivers/staging/goldfish/goldfish_audio.c
+++ b/drivers/staging/goldfish/goldfish_audio.c
@@ -63,7 +63,7 @@
#define AUDIO_READ(data, addr) (readl(data->reg_base + addr))
#define AUDIO_WRITE(data, addr, x) (writel(x, data->reg_base + addr))
#define AUDIO_WRITE64(data, addr, addr2, x) \
- (gf_write_dma_addr((x), data->reg_base + addr, data->reg_base+addr2))
+ (gf_write_dma_addr((x), data->reg_base + addr, data->reg_base + addr2))
/*
* temporary variable used between goldfish_audio_probe() and
diff --git a/drivers/staging/goldfish/goldfish_nand.c b/drivers/staging/goldfish/goldfish_nand.c
index 5c4f61c..76d60ee 100644
--- a/drivers/staging/goldfish/goldfish_nand.c
+++ b/drivers/staging/goldfish/goldfish_nand.c
@@ -27,6 +27,7 @@
#include <linux/mutex.h>
#include <linux/goldfish.h>
#include <asm/div64.h>
+#include <linux/dma-mapping.h>
#include "goldfish_nand_reg.h"
@@ -99,11 +100,11 @@
{
loff_t ofs = instr->addr;
u32 len = instr->len;
- u32 rem;
+ s32 rem;
if (ofs + len > mtd->size)
goto invalid_arg;
- rem = do_div(ofs, mtd->writesize);
+ ofs = div_s64_rem(ofs, mtd->writesize, &rem);
if (rem)
goto invalid_arg;
ofs *= (mtd->writesize + mtd->oobsize);
@@ -132,7 +133,7 @@
static int goldfish_nand_read_oob(struct mtd_info *mtd, loff_t ofs,
struct mtd_oob_ops *ops)
{
- u32 rem;
+ s32 rem;
if (ofs + ops->len > mtd->size)
goto invalid_arg;
@@ -141,7 +142,7 @@
if (ops->ooblen + ops->ooboffs > mtd->oobsize)
goto invalid_arg;
- rem = do_div(ofs, mtd->writesize);
+ ofs = div_s64_rem(ofs, mtd->writesize, &rem);
if (rem)
goto invalid_arg;
ofs *= (mtd->writesize + mtd->oobsize);
@@ -164,7 +165,7 @@
static int goldfish_nand_write_oob(struct mtd_info *mtd, loff_t ofs,
struct mtd_oob_ops *ops)
{
- u32 rem;
+ s32 rem;
if (ofs + ops->len > mtd->size)
goto invalid_arg;
@@ -173,7 +174,7 @@
if (ops->ooblen + ops->ooboffs > mtd->oobsize)
goto invalid_arg;
- rem = do_div(ofs, mtd->writesize);
+ ofs = div_s64_rem(ofs, mtd->writesize, &rem);
if (rem)
goto invalid_arg;
ofs *= (mtd->writesize + mtd->oobsize);
@@ -196,12 +197,12 @@
static int goldfish_nand_read(struct mtd_info *mtd, loff_t from, size_t len,
size_t *retlen, u_char *buf)
{
- u32 rem;
+ s32 rem;
if (from + len > mtd->size)
goto invalid_arg;
- rem = do_div(from, mtd->writesize);
+ from = div_s64_rem(from, mtd->writesize, &rem);
if (rem)
goto invalid_arg;
from *= (mtd->writesize + mtd->oobsize);
@@ -218,12 +219,12 @@
static int goldfish_nand_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
- u32 rem;
+ s32 rem;
if (to + len > mtd->size)
goto invalid_arg;
- rem = do_div(to, mtd->writesize);
+ to = div_s64_rem(to, mtd->writesize, &rem);
if (rem)
goto invalid_arg;
to *= (mtd->writesize + mtd->oobsize);
@@ -239,12 +240,12 @@
static int goldfish_nand_block_isbad(struct mtd_info *mtd, loff_t ofs)
{
- u32 rem;
+ s32 rem;
if (ofs >= mtd->size)
goto invalid_arg;
- rem = do_div(ofs, mtd->erasesize);
+ ofs = div_s64_rem(ofs, mtd->erasesize, &rem);
if (rem)
goto invalid_arg;
ofs *= mtd->erasesize / mtd->writesize;
@@ -260,12 +261,12 @@
static int goldfish_nand_block_markbad(struct mtd_info *mtd, loff_t ofs)
{
- u32 rem;
+ s32 rem;
if (ofs >= mtd->size)
goto invalid_arg;
- rem = do_div(ofs, mtd->erasesize);
+ ofs = div_s64_rem(ofs, mtd->erasesize, &rem);
if (rem)
goto invalid_arg;
ofs *= mtd->erasesize / mtd->writesize;
@@ -284,17 +285,18 @@
static int nand_setup_cmd_params(struct platform_device *pdev,
struct goldfish_nand *nand)
{
- u64 paddr;
+ dma_addr_t dma_handle;
unsigned char __iomem *base = nand->base;
- nand->cmd_params = devm_kzalloc(&pdev->dev,
- sizeof(struct cmd_params), GFP_KERNEL);
- if (!nand->cmd_params)
+ nand->cmd_params = dmam_alloc_coherent(&pdev->dev,
+ sizeof(struct cmd_params),
+ &dma_handle, GFP_KERNEL);
+ if (!nand->cmd_params) {
+ dev_err(&pdev->dev, "allocate buffer failed\n");
return -ENOMEM;
-
- paddr = __pa(nand->cmd_params);
- writel((u32)(paddr >> 32), base + NAND_CMD_PARAMS_ADDR_HIGH);
- writel((u32)paddr, base + NAND_CMD_PARAMS_ADDR_LOW);
+ }
+ writel((u32)((u64)dma_handle >> 32), base + NAND_CMD_PARAMS_ADDR_HIGH);
+ writel((u32)dma_handle, base + NAND_CMD_PARAMS_ADDR_LOW);
return 0;
}
@@ -319,7 +321,7 @@
mtd->oobavail = mtd->oobsize;
mtd->erasesize = readl(base + NAND_DEV_ERASE_SIZE) /
(mtd->writesize + mtd->oobsize) * mtd->writesize;
- do_div(mtd->size, mtd->writesize + mtd->oobsize);
+ mtd->size = div_s64(mtd->size, mtd->writesize + mtd->oobsize);
mtd->size *= mtd->writesize;
dev_dbg(&pdev->dev,
"goldfish nand dev%d: size %llx, page %d, extra %d, erase %d\n",
diff --git a/drivers/staging/iio/accel/sca3000_core.c b/drivers/staging/iio/accel/sca3000_core.c
index 02e930c..a8f533a 100644
--- a/drivers/staging/iio/accel/sca3000_core.c
+++ b/drivers/staging/iio/accel/sca3000_core.c
@@ -216,8 +216,7 @@
ret = sca3000_read_data_short(st, SCA3000_REG_ADDR_CTRL_DATA, 1);
if (ret)
goto error_ret;
- else
- return st->rx[0];
+ return st->rx[0];
error_ret:
return ret;
}
diff --git a/drivers/staging/iio/accel/sca3000_ring.c b/drivers/staging/iio/accel/sca3000_ring.c
index 1920dc60..d1cb9b9 100644
--- a/drivers/staging/iio/accel/sca3000_ring.c
+++ b/drivers/staging/iio/accel/sca3000_ring.c
@@ -99,8 +99,7 @@
ret = sca3000_read_data_short(st, SCA3000_REG_ADDR_BUF_COUNT, 1);
if (ret)
goto error_ret;
- else
- num_available = st->rx[0];
+ num_available = st->rx[0];
/*
* num_available is the total number of samples available
* i.e. number of time points * number of channels.
diff --git a/drivers/staging/iio/adc/ad7816.c b/drivers/staging/iio/adc/ad7816.c
index 2226051..ac3735c 100644
--- a/drivers/staging/iio/adc/ad7816.c
+++ b/drivers/staging/iio/adc/ad7816.c
@@ -296,14 +296,14 @@
dev_err(dev, "Invalid oti channel id %d.\n", chip->channel_id);
return -EINVAL;
} else if (chip->channel_id == 0) {
- if (ret || value < AD7816_BOUND_VALUE_MIN ||
+ if (value < AD7816_BOUND_VALUE_MIN ||
value > AD7816_BOUND_VALUE_MAX)
return -EINVAL;
data = (u8)(value - AD7816_BOUND_VALUE_MIN +
AD7816_BOUND_VALUE_BASE);
} else {
- if (ret || value < AD7816_BOUND_VALUE_BASE || value > 255)
+ if (value < AD7816_BOUND_VALUE_BASE || value > 255)
return -EINVAL;
data = (u8)value;
diff --git a/drivers/staging/iio/light/isl29018.c b/drivers/staging/iio/light/isl29018.c
index 26a5412..76d9f74 100644
--- a/drivers/staging/iio/light/isl29018.c
+++ b/drivers/staging/iio/light/isl29018.c
@@ -183,7 +183,7 @@
/* Set mode */
status = regmap_write(chip->regmap, ISL29018_REG_ADD_COMMAND1,
- mode << COMMMAND1_OPMODE_SHIFT);
+ mode << COMMMAND1_OPMODE_SHIFT);
if (status) {
dev_err(dev,
"Error in setting operating mode err %d\n", status);
@@ -241,7 +241,7 @@
}
static int isl29018_read_proximity_ir(struct isl29018_chip *chip, int scheme,
- int *near_ir)
+ int *near_ir)
{
int status;
int prox_data = -1;
@@ -250,15 +250,15 @@
/* Do proximity sensing with required scheme */
status = regmap_update_bits(chip->regmap, ISL29018_REG_ADD_COMMANDII,
- COMMANDII_SCHEME_MASK,
- scheme << COMMANDII_SCHEME_SHIFT);
+ COMMANDII_SCHEME_MASK,
+ scheme << COMMANDII_SCHEME_SHIFT);
if (status) {
dev_err(dev, "Error in setting operating mode\n");
return status;
}
prox_data = isl29018_read_sensor_input(chip,
- COMMMAND1_OPMODE_PROX_ONCE);
+ COMMMAND1_OPMODE_PROX_ONCE);
if (prox_data < 0)
return prox_data;
@@ -281,7 +281,7 @@
}
static ssize_t show_scale_available(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct isl29018_chip *chip = iio_priv(indio_dev);
@@ -298,7 +298,7 @@
}
static ssize_t show_int_time_available(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct isl29018_chip *chip = iio_priv(indio_dev);
@@ -315,18 +315,22 @@
/* proximity scheme */
static ssize_t show_prox_infrared_suppression(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr,
+ char *buf)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct isl29018_chip *chip = iio_priv(indio_dev);
- /* return the "proximity scheme" i.e. if the chip does on chip
- infrared suppression (1 means perform on chip suppression) */
+ /*
+ * return the "proximity scheme" i.e. if the chip does on chip
+ * infrared suppression (1 means perform on chip suppression)
+ */
return sprintf(buf, "%d\n", chip->prox_scheme);
}
static ssize_t store_prox_infrared_suppression(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t count)
+ struct device_attribute *attr,
+ const char *buf, size_t count)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct isl29018_chip *chip = iio_priv(indio_dev);
@@ -339,8 +343,10 @@
return -EINVAL;
}
- /* get the "proximity scheme" i.e. if the chip does on chip
- infrared suppression (1 means perform on chip suppression) */
+ /*
+ * get the "proximity scheme" i.e. if the chip does on chip
+ * infrared suppression (1 means perform on chip suppression)
+ */
mutex_lock(&chip->lock);
chip->prox_scheme = val;
mutex_unlock(&chip->lock);
@@ -414,7 +420,8 @@
break;
case IIO_PROXIMITY:
ret = isl29018_read_proximity_ir(chip,
- chip->prox_scheme, val);
+ chip->prox_scheme,
+ val);
break;
default:
break;
@@ -702,13 +709,13 @@
if (!id)
return NULL;
- *data = (int) id->driver_data;
+ *data = (int)id->driver_data;
return dev_name(dev);
}
static int isl29018_probe(struct i2c_client *client,
- const struct i2c_device_id *id)
+ const struct i2c_device_id *id)
{
struct isl29018_chip *chip;
struct iio_dev *indio_dev;
diff --git a/drivers/staging/iio/light/isl29028.c b/drivers/staging/iio/light/isl29028.c
index e1bca9c..6e2ba45 100644
--- a/drivers/staging/iio/light/isl29028.c
+++ b/drivers/staging/iio/light/isl29028.c
@@ -81,7 +81,7 @@
};
static int isl29028_set_proxim_sampling(struct isl29028_chip *chip,
- unsigned int sampling)
+ unsigned int sampling)
{
static unsigned int prox_period[] = {800, 400, 200, 100, 75, 50, 12, 0};
int sel;
@@ -103,7 +103,7 @@
if (enable)
val = CONFIGURE_PROX_EN;
ret = regmap_update_bits(chip->regmap, ISL29028_REG_CONFIGURE,
- CONFIGURE_PROX_EN_MASK, val);
+ CONFIGURE_PROX_EN_MASK, val);
if (ret < 0)
return ret;
@@ -122,24 +122,27 @@
}
static int isl29028_set_als_ir_mode(struct isl29028_chip *chip,
- enum als_ir_mode mode)
+ enum als_ir_mode mode)
{
int ret = 0;
switch (mode) {
case MODE_ALS:
ret = regmap_update_bits(chip->regmap, ISL29028_REG_CONFIGURE,
- CONFIGURE_ALS_IR_MODE_MASK, CONFIGURE_ALS_IR_MODE_ALS);
+ CONFIGURE_ALS_IR_MODE_MASK,
+ CONFIGURE_ALS_IR_MODE_ALS);
if (ret < 0)
return ret;
ret = regmap_update_bits(chip->regmap, ISL29028_REG_CONFIGURE,
- CONFIGURE_ALS_RANGE_MASK, CONFIGURE_ALS_RANGE_HIGH_LUX);
+ CONFIGURE_ALS_RANGE_MASK,
+ CONFIGURE_ALS_RANGE_HIGH_LUX);
break;
case MODE_IR:
ret = regmap_update_bits(chip->regmap, ISL29028_REG_CONFIGURE,
- CONFIGURE_ALS_IR_MODE_MASK, CONFIGURE_ALS_IR_MODE_IR);
+ CONFIGURE_ALS_IR_MODE_MASK,
+ CONFIGURE_ALS_IR_MODE_IR);
break;
case MODE_NONE:
@@ -152,7 +155,7 @@
/* Enable the ALS/IR */
ret = regmap_update_bits(chip->regmap, ISL29028_REG_CONFIGURE,
- CONFIGURE_ALS_EN_MASK, CONFIGURE_ALS_EN);
+ CONFIGURE_ALS_EN_MASK, CONFIGURE_ALS_EN);
if (ret < 0)
return ret;
@@ -193,7 +196,7 @@
ret = regmap_read(chip->regmap, ISL29028_REG_PROX_DATA, &data);
if (ret < 0) {
dev_err(chip->dev, "Error in reading register %d, error %d\n",
- ISL29028_REG_PROX_DATA, ret);
+ ISL29028_REG_PROX_DATA, ret);
return ret;
}
*prox = data;
@@ -264,7 +267,8 @@
/* Channel IO */
static int isl29028_write_raw(struct iio_dev *indio_dev,
- struct iio_chan_spec const *chan, int val, int val2, long mask)
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
{
struct isl29028_chip *chip = iio_priv(indio_dev);
int ret = -EINVAL;
@@ -323,7 +327,8 @@
}
static int isl29028_read_raw(struct iio_dev *indio_dev,
- struct iio_chan_spec const *chan, int *val, int *val2, long mask)
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
{
struct isl29028_chip *chip = iio_priv(indio_dev);
int ret = -EINVAL;
@@ -476,7 +481,7 @@
};
static int isl29028_probe(struct i2c_client *client,
- const struct i2c_device_id *id)
+ const struct i2c_device_id *id)
{
struct isl29028_chip *chip;
struct iio_dev *indio_dev;
diff --git a/drivers/staging/iio/light/tsl2583.c b/drivers/staging/iio/light/tsl2583.c
index 3100d96..05b4ad4 100644
--- a/drivers/staging/iio/light/tsl2583.c
+++ b/drivers/staging/iio/light/tsl2583.c
@@ -240,8 +240,10 @@
}
}
- /* clear status, really interrupt status (interrupts are off), but
- * we use the bit anyway - don't forget 0x80 - this is a command*/
+ /*
+ * clear status, really interrupt status (interrupts are off), but
+ * we use the bit anyway - don't forget 0x80 - this is a command
+ */
ret = i2c_smbus_write_byte(chip->client,
(TSL258X_CMD_REG | TSL258X_CMD_SPL_FN |
TSL258X_CMD_ALS_INT_CLR));
@@ -265,13 +267,14 @@
if (!ch0) {
/* have no data, so return LAST VALUE */
- ret = chip->als_cur_info.lux = 0;
+ ret = 0;
+ chip->als_cur_info.lux = 0;
goto out_unlock;
}
/* calculate ratio */
ratio = (ch1 << 15) / ch0;
/* convert to unscaled lux using the pointer to the table */
- for (p = (struct taos_lux *) taos_device_lux;
+ for (p = (struct taos_lux *)taos_device_lux;
p->ratio != 0 && p->ratio < ratio; p++)
;
@@ -290,7 +293,8 @@
/* note: lux is 31 bit max at this point */
if (ch1lux > ch0lux) {
dev_dbg(&chip->client->dev, "No Data - Return last value\n");
- ret = chip->als_cur_info.lux = 0;
+ ret = 0;
+ chip->als_cur_info.lux = 0;
goto out_unlock;
}
@@ -378,7 +382,7 @@
dev_err(&chip->client->dev, "taos_als_calibrate failed to get lux\n");
return lux_val;
}
- gain_trim_val = (unsigned int) (((chip->taos_settings.als_cal_target)
+ gain_trim_val = (unsigned int)(((chip->taos_settings.als_cal_target)
* chip->taos_settings.als_gain_trim) / lux_val);
if ((gain_trim_val < 250) || (gain_trim_val > 4000)) {
@@ -387,9 +391,9 @@
gain_trim_val);
return -ENODATA;
}
- chip->taos_settings.als_gain_trim = (int) gain_trim_val;
+ chip->taos_settings.als_gain_trim = (int)gain_trim_val;
- return (int) gain_trim_val;
+ return (int)gain_trim_val;
}
/*
@@ -429,8 +433,10 @@
chip->als_saturation = als_count * 922; /* 90% of full scale */
chip->als_time_scale = (als_time + 25) / 50;
- /* TSL258x Specific power-on / adc enable sequence
- * Power on the device 1st. */
+ /*
+ * TSL258x Specific power-on / adc enable sequence
+ * Power on the device 1st.
+ */
utmp = TSL258X_CNTL_PWR_ON;
ret = i2c_smbus_write_byte_data(chip->client,
TSL258X_CMD_REG | TSL258X_CNTRL, utmp);
@@ -439,8 +445,10 @@
return ret;
}
- /* Use the following shadow copy for our delay before enabling ADC.
- * Write all the registers. */
+ /*
+ * Use the following shadow copy for our delay before enabling ADC.
+ * Write all the registers.
+ */
for (i = 0, uP = chip->taos_config; i < TSL258X_REG_MAX; i++) {
ret = i2c_smbus_write_byte_data(chip->client,
TSL258X_CMD_REG + i,
@@ -453,8 +461,10 @@
}
usleep_range(3000, 3500);
- /* NOW enable the ADC
- * initialize the desired mode of operation */
+ /*
+ * NOW enable the ADC
+ * initialize the desired mode of operation
+ */
utmp = TSL258X_CNTL_PWR_ON | TSL258X_CNTL_ADC_ENBL;
ret = i2c_smbus_write_byte_data(chip->client,
TSL258X_CMD_REG | TSL258X_CNTRL,
@@ -482,7 +492,7 @@
/* Sysfs Interface Functions */
static ssize_t taos_power_state_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -491,7 +501,8 @@
}
static ssize_t taos_power_state_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
int value;
@@ -508,7 +519,7 @@
}
static ssize_t taos_gain_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -533,7 +544,8 @@
}
static ssize_t taos_gain_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -564,13 +576,14 @@
}
static ssize_t taos_gain_available_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr,
+ char *buf)
{
return sprintf(buf, "%s\n", "1 8 16 111");
}
static ssize_t taos_als_time_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -579,7 +592,8 @@
}
static ssize_t taos_als_time_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -600,14 +614,15 @@
}
static ssize_t taos_als_time_available_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr,
+ char *buf)
{
return sprintf(buf, "%s\n",
"50 100 150 200 250 300 350 400 450 500 550 600 650");
}
static ssize_t taos_als_trim_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -616,7 +631,8 @@
}
static ssize_t taos_als_trim_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -632,7 +648,8 @@
}
static ssize_t taos_als_cal_target_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr,
+ char *buf)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -641,7 +658,8 @@
}
static ssize_t taos_als_cal_target_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
@@ -657,7 +675,7 @@
}
static ssize_t taos_lux_show(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
int ret;
@@ -669,7 +687,8 @@
}
static ssize_t taos_do_calibrate(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
int value;
@@ -684,7 +703,7 @@
}
static ssize_t taos_luxtable_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
int i;
int offset = 0;
@@ -695,8 +714,10 @@
taos_device_lux[i].ch0,
taos_device_lux[i].ch1);
if (taos_device_lux[i].ratio == 0) {
- /* We just printed the first "0" entry.
- * Now get rid of the extra "," and break. */
+ /*
+ * We just printed the first "0" entry.
+ * Now get rid of the extra "," and break.
+ */
offset--;
break;
}
@@ -707,11 +728,12 @@
}
static ssize_t taos_luxtable_store(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct iio_dev *indio_dev = dev_to_iio_dev(dev);
struct tsl2583_chip *chip = iio_priv(indio_dev);
- int value[ARRAY_SIZE(taos_device_lux)*3 + 1];
+ int value[ARRAY_SIZE(taos_device_lux) * 3 + 1];
int n;
get_options(buf, ARRAY_SIZE(value), value);
@@ -809,7 +831,7 @@
struct iio_dev *indio_dev;
if (!i2c_check_functionality(clientp->adapter,
- I2C_FUNC_SMBUS_BYTE_DATA)) {
+ I2C_FUNC_SMBUS_BYTE_DATA)) {
dev_err(&clientp->dev, "taos_probe() - i2c smbus byte data func unsupported\n");
return -EOPNOTSUPP;
}
@@ -846,7 +868,7 @@
if (!taos_tsl258x_device(buf)) {
dev_info(&clientp->dev,
- "i2c device found but does not match expected id in taos_probe()\n");
+ "i2c device found but does not match expected id in taos_probe()\n");
return -EINVAL;
}
diff --git a/drivers/staging/iio/resolver/ad2s1200.c b/drivers/staging/iio/resolver/ad2s1200.c
index 595e711..82b2d88 100644
--- a/drivers/staging/iio/resolver/ad2s1200.c
+++ b/drivers/staging/iio/resolver/ad2s1200.c
@@ -31,7 +31,7 @@
/* input clock on serial interface */
#define AD2S1200_HZ 8192000
/* clock period in nano second */
-#define AD2S1200_TSCLK (1000000000/AD2S1200_HZ)
+#define AD2S1200_TSCLK (1000000000 / AD2S1200_HZ)
struct ad2s1200_state {
struct mutex lock;
@@ -42,10 +42,10 @@
};
static int ad2s1200_read_raw(struct iio_dev *indio_dev,
- struct iio_chan_spec const *chan,
- int *val,
- int *val2,
- long m)
+ struct iio_chan_spec const *chan,
+ int *val,
+ int *val2,
+ long m)
{
int ret = 0;
s16 vel;
@@ -113,7 +113,7 @@
DRV_NAME);
if (ret) {
dev_err(&spi->dev, "request gpio pin %d failed\n",
- pins[pn]);
+ pins[pn]);
return ret;
}
}
diff --git a/drivers/staging/iio/resolver/ad2s1210.c b/drivers/staging/iio/resolver/ad2s1210.c
index d97aa28..6b99263 100644
--- a/drivers/staging/iio/resolver/ad2s1210.c
+++ b/drivers/staging/iio/resolver/ad2s1210.c
@@ -67,7 +67,7 @@
/* default input clock on serial interface */
#define AD2S1210_DEF_CLKIN 8192000
/* clock period in nano second */
-#define AD2S1210_DEF_TCK (1000000000/AD2S1210_DEF_CLKIN)
+#define AD2S1210_DEF_TCK (1000000000 / AD2S1210_DEF_CLKIN)
#define AD2S1210_DEF_EXCIT 10000
enum ad2s1210_mode {
@@ -98,6 +98,7 @@
[MOD_VEL] = { 0, 1 },
[MOD_CONFIG] = { 1, 0 },
};
+
static inline void ad2s1210_set_mode(enum ad2s1210_mode mode,
struct ad2s1210_state *st)
{
@@ -123,7 +124,7 @@
/* read value from one of the registers */
static int ad2s1210_config_read(struct ad2s1210_state *st,
- unsigned char address)
+ unsigned char address)
{
struct spi_transfer xfer = {
.len = 2,
@@ -176,9 +177,9 @@
static inline void ad2s1210_set_resolution_pin(struct ad2s1210_state *st)
{
gpio_set_value(st->pdata->res[0],
- ad2s1210_res_pins[(st->resolution - 10)/2][0]);
+ ad2s1210_res_pins[(st->resolution - 10) / 2][0]);
gpio_set_value(st->pdata->res[1],
- ad2s1210_res_pins[(st->resolution - 10)/2][1]);
+ ad2s1210_res_pins[(st->resolution - 10) / 2][1]);
}
static inline int ad2s1210_soft_reset(struct ad2s1210_state *st)
@@ -282,8 +283,8 @@
}
static ssize_t ad2s1210_store_control(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct ad2s1210_state *st = iio_priv(dev_to_iio_dev(dev));
unsigned char udata;
@@ -318,9 +319,9 @@
data = ad2s1210_read_resolution_pin(st);
if (data != st->resolution)
dev_warn(dev, "ad2s1210: resolution settings not match\n");
- } else
+ } else {
ad2s1210_set_resolution_pin(st);
-
+ }
ret = len;
st->hysteresis = !!(data & AD2S1210_ENABLE_HYSTERESIS);
@@ -330,7 +331,8 @@
}
static ssize_t ad2s1210_show_resolution(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr,
+ char *buf)
{
struct ad2s1210_state *st = iio_priv(dev_to_iio_dev(dev));
@@ -338,8 +340,8 @@
}
static ssize_t ad2s1210_store_resolution(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct ad2s1210_state *st = iio_priv(dev_to_iio_dev(dev));
unsigned char data;
@@ -379,8 +381,9 @@
data = ad2s1210_read_resolution_pin(st);
if (data != st->resolution)
dev_warn(dev, "ad2s1210: resolution settings not match\n");
- } else
+ } else {
ad2s1210_set_resolution_pin(st);
+ }
ret = len;
error_ret:
mutex_unlock(&st->lock);
@@ -389,7 +392,7 @@
/* read the fault register since last sample */
static ssize_t ad2s1210_show_fault(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct ad2s1210_state *st = iio_priv(dev_to_iio_dev(dev));
int ret;
@@ -441,7 +444,8 @@
}
static ssize_t ad2s1210_store_reg(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t len)
+ struct device_attribute *attr,
+ const char *buf, size_t len)
{
struct ad2s1210_state *st = iio_priv(dev_to_iio_dev(dev));
unsigned char data;
@@ -497,7 +501,7 @@
switch (chan->type) {
case IIO_ANGL:
- pos = be16_to_cpup((__be16 *) st->rx);
+ pos = be16_to_cpup((__be16 *)st->rx);
if (st->hysteresis)
pos >>= 16 - st->resolution;
*val = pos;
@@ -505,7 +509,7 @@
break;
case IIO_ANGL_VEL:
negative = st->rx[0] & 0x80;
- vel = be16_to_cpup((__be16 *) st->rx);
+ vel = be16_to_cpup((__be16 *)st->rx);
vel >>= 16 - st->resolution;
if (vel & 0x8000) {
negative = (0xffff >> st->resolution) << st->resolution;
@@ -560,7 +564,6 @@
ad2s1210_show_reg, ad2s1210_store_reg,
AD2S1210_REG_LOT_LOW_THRD);
-
static const struct iio_chan_spec ad2s1210_channels[] = {
{
.type = IIO_ANGL,
@@ -672,7 +675,7 @@
struct ad2s1210_state *st;
int ret;
- if (spi->dev.platform_data == NULL)
+ if (!spi->dev.platform_data)
return -EINVAL;
indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st));
diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs.h b/drivers/staging/lustre/include/linux/libcfs/libcfs.h
index dc9b88f..5c598c8d 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs.h
@@ -51,8 +51,6 @@
#define LERRCHKSUM(hexnum) (((hexnum) & 0xf) ^ ((hexnum) >> 4 & 0xf) ^ \
((hexnum) >> 8 & 0xf))
-#define LUSTRE_SRV_LNET_PID LUSTRE_LNET_PID
-
#include <linux/list.h>
/* need both kernel and user-land acceptor */
diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h
index 1530b04..9e62c59 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/*
diff --git a/drivers/staging/lustre/include/linux/libcfs/libcfs_ioctl.h b/drivers/staging/lustre/include/linux/libcfs/libcfs_ioctl.h
index e4463ad..f788631 100644
--- a/drivers/staging/lustre/include/linux/libcfs/libcfs_ioctl.h
+++ b/drivers/staging/lustre/include/linux/libcfs/libcfs_ioctl.h
@@ -41,11 +41,16 @@
#ifndef __LIBCFS_IOCTL_H__
#define __LIBCFS_IOCTL_H__
-#define LIBCFS_IOCTL_VERSION 0x0001000a
+#define LIBCFS_IOCTL_VERSION 0x0001000a
+#define LIBCFS_IOCTL_VERSION2 0x0001000b
-struct libcfs_ioctl_data {
+struct libcfs_ioctl_hdr {
__u32 ioc_len;
__u32 ioc_version;
+};
+
+struct libcfs_ioctl_data {
+ struct libcfs_ioctl_hdr ioc_hdr;
__u64 ioc_nid;
__u64 ioc_u64[1];
@@ -70,11 +75,6 @@
#define ioc_priority ioc_u32[0]
-struct libcfs_ioctl_hdr {
- __u32 ioc_len;
- __u32 ioc_version;
-};
-
struct libcfs_debug_ioctl_data {
struct libcfs_ioctl_hdr hdr;
unsigned int subs;
@@ -90,7 +90,7 @@
struct libcfs_ioctl_handler {
struct list_head item;
- int (*handle_ioctl)(unsigned int cmd, struct libcfs_ioctl_data *data);
+ int (*handle_ioctl)(unsigned int cmd, struct libcfs_ioctl_hdr *hdr);
};
#define DECLARE_IOCTL_HANDLER(ident, func) \
@@ -112,9 +112,6 @@
/* lnet ioctls */
#define IOC_LIBCFS_GET_NI _IOWR('e', 50, long)
#define IOC_LIBCFS_FAIL_NID _IOWR('e', 51, long)
-#define IOC_LIBCFS_ADD_ROUTE _IOWR('e', 52, long)
-#define IOC_LIBCFS_DEL_ROUTE _IOWR('e', 53, long)
-#define IOC_LIBCFS_GET_ROUTE _IOWR('e', 54, long)
#define IOC_LIBCFS_NOTIFY_ROUTER _IOWR('e', 55, long)
#define IOC_LIBCFS_UNCONFIGURE _IOWR('e', 56, long)
/* #define IOC_LIBCFS_PORTALS_COMPATIBILITY _IOWR('e', 57, long) */
@@ -137,7 +134,25 @@
#define IOC_LIBCFS_DEL_INTERFACE _IOWR('e', 79, long)
#define IOC_LIBCFS_GET_INTERFACE _IOWR('e', 80, long)
-#define IOC_LIBCFS_MAX_NR 80
+/*
+ * DLC Specific IOCTL numbers.
+ * In order to maintain backward compatibility with any possible external
+ * tools which might be accessing the IOCTL numbers, a new group of IOCTL
+ * numbers has been allocated.
+ */
+#define IOCTL_CONFIG_SIZE struct lnet_ioctl_config_data
+#define IOC_LIBCFS_ADD_ROUTE _IOWR(IOC_LIBCFS_TYPE, 81, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_DEL_ROUTE _IOWR(IOC_LIBCFS_TYPE, 82, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_GET_ROUTE _IOWR(IOC_LIBCFS_TYPE, 83, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_ADD_NET _IOWR(IOC_LIBCFS_TYPE, 84, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_DEL_NET _IOWR(IOC_LIBCFS_TYPE, 85, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_GET_NET _IOWR(IOC_LIBCFS_TYPE, 86, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_CONFIG_RTR _IOWR(IOC_LIBCFS_TYPE, 87, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_ADD_BUF _IOWR(IOC_LIBCFS_TYPE, 88, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_GET_BUF _IOWR(IOC_LIBCFS_TYPE, 89, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_GET_PEER_INFO _IOWR(IOC_LIBCFS_TYPE, 90, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_GET_LNET_STATS _IOWR(IOC_LIBCFS_TYPE, 91, IOCTL_CONFIG_SIZE)
+#define IOC_LIBCFS_MAX_NR 91
static inline int libcfs_ioctl_packlen(struct libcfs_ioctl_data *data)
{
@@ -148,9 +163,9 @@
return len;
}
-static inline int libcfs_ioctl_is_invalid(struct libcfs_ioctl_data *data)
+static inline bool libcfs_ioctl_is_invalid(struct libcfs_ioctl_data *data)
{
- if (data->ioc_len > (1<<30)) {
+ if (data->ioc_hdr.ioc_len > (1 << 30)) {
CERROR("LIBCFS ioctl: ioc_len larger than 1<<30\n");
return 1;
}
@@ -186,7 +201,7 @@
CERROR("LIBCFS ioctl: plen2 nonzero but no pbuf2 pointer\n");
return 1;
}
- if ((__u32)libcfs_ioctl_packlen(data) != data->ioc_len) {
+ if ((__u32)libcfs_ioctl_packlen(data) != data->ioc_hdr.ioc_len) {
CERROR("LIBCFS ioctl: packlen != ioc_len\n");
return 1;
}
@@ -206,7 +221,9 @@
int libcfs_register_ioctl(struct libcfs_ioctl_handler *hand);
int libcfs_deregister_ioctl(struct libcfs_ioctl_handler *hand);
-int libcfs_ioctl_getdata(char *buf, char *end, void __user *arg);
+int libcfs_ioctl_getdata_len(const struct libcfs_ioctl_hdr __user *arg,
+ __u32 *buf_len);
int libcfs_ioctl_popdata(void __user *arg, void *buf, int size);
+int libcfs_ioctl_data_adjust(struct libcfs_ioctl_data *data);
#endif /* __LIBCFS_IOCTL_H__ */
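Moving ioc_len/ioc_version into an embedded struct libcfs_ioctl_hdr means every payload, old or new, starts with the same length/version prefix, so one entry point can dispatch both generations. A hedged sketch of that idea (the handler below is hypothetical; the real libcfs dispatch code is outside this hunk):

    static int example_handle(unsigned int cmd, struct libcfs_ioctl_hdr *hdr)
    {
        if (hdr->ioc_version == LIBCFS_IOCTL_VERSION) {
            /* legacy payload: the header is the first member, so the
             * old structure is recovered from the header pointer */
            struct libcfs_ioctl_data *data =
                container_of(hdr, struct libcfs_ioctl_data, ioc_hdr);

            if (libcfs_ioctl_is_invalid(data))
                return -EINVAL;
            /* ... old-style handling keyed on cmd ... */
            return 0;
        }
        /* LIBCFS_IOCTL_VERSION2 payloads (the DLC structures below)
         * carry their own hdr and are dispatched on cmd directly */
        return 0;
    }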
diff --git a/drivers/staging/lustre/include/linux/libcfs/linux/libcfs.h b/drivers/staging/lustre/include/linux/libcfs/linux/libcfs.h
index aac5900..d94b266 100644
--- a/drivers/staging/lustre/include/linux/libcfs/linux/libcfs.h
+++ b/drivers/staging/lustre/include/linux/libcfs/linux/libcfs.h
@@ -118,9 +118,6 @@
#define CDEBUG_STACK() (0L)
#endif /* __x86_64__ */
-/* initial pid */
-#define LUSTRE_LNET_PID 12345
-
#define __current_nesting_level() (0)
/**
diff --git a/drivers/staging/lustre/include/linux/libcfs/linux/linux-cpu.h b/drivers/staging/lustre/include/linux/libcfs/linux/linux-cpu.h
index 520209f..c04979a 100644
--- a/drivers/staging/lustre/include/linux/libcfs/linux/linux-cpu.h
+++ b/drivers/staging/lustre/include/linux/libcfs/linux/linux-cpu.h
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/*
diff --git a/drivers/staging/lustre/include/linux/lnet/lib-dlc.h b/drivers/staging/lustre/include/linux/lnet/lib-dlc.h
new file mode 100644
index 0000000..84a19e9
--- /dev/null
+++ b/drivers/staging/lustre/include/linux/lnet/lib-dlc.h
@@ -0,0 +1,122 @@
+/*
+ * LGPL HEADER START
+ *
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library.
+ *
+ * LGPL HEADER END
+ *
+ */
+/*
+ * Copyright (c) 2014, Intel Corporation.
+ */
+/*
+ * Author: Amir Shehata <amir.shehata@intel.com>
+ */
+
+#ifndef LNET_DLC_H
+#define LNET_DLC_H
+
+#include "../libcfs/libcfs_ioctl.h"
+#include "types.h"
+
+#define MAX_NUM_SHOW_ENTRIES 32
+#define LNET_MAX_STR_LEN 128
+#define LNET_MAX_SHOW_NUM_CPT 128
+#define LNET_UNDEFINED_HOPS ((__u32) -1)
+
+struct lnet_ioctl_net_config {
+ char ni_interfaces[LNET_MAX_INTERFACES][LNET_MAX_STR_LEN];
+ __u32 ni_status;
+ __u32 ni_cpts[LNET_MAX_SHOW_NUM_CPT];
+};
+
+#define LNET_TINY_BUF_IDX 0
+#define LNET_SMALL_BUF_IDX 1
+#define LNET_LARGE_BUF_IDX 2
+
+/* # different router buffer pools */
+#define LNET_NRBPOOLS (LNET_LARGE_BUF_IDX + 1)
+
+struct lnet_ioctl_pool_cfg {
+ struct {
+ __u32 pl_npages;
+ __u32 pl_nbuffers;
+ __u32 pl_credits;
+ __u32 pl_mincredits;
+ } pl_pools[LNET_NRBPOOLS];
+ __u32 pl_routing;
+};
+
+struct lnet_ioctl_config_data {
+ struct libcfs_ioctl_hdr cfg_hdr;
+
+ __u32 cfg_net;
+ __u32 cfg_count;
+ __u64 cfg_nid;
+ __u32 cfg_ncpts;
+
+ union {
+ struct {
+ __u32 rtr_hop;
+ __u32 rtr_priority;
+ __u32 rtr_flags;
+ } cfg_route;
+ struct {
+ char net_intf[LNET_MAX_STR_LEN];
+ __s32 net_peer_timeout;
+ __s32 net_peer_tx_credits;
+ __s32 net_peer_rtr_credits;
+ __s32 net_max_tx_credits;
+ __u32 net_cksum_algo;
+ __u32 net_pad;
+ } cfg_net;
+ struct {
+ __u32 buf_enable;
+ __s32 buf_tiny;
+ __s32 buf_small;
+ __s32 buf_large;
+ } cfg_buffers;
+ } cfg_config_u;
+
+ char cfg_bulk[0];
+};
+
+struct lnet_ioctl_peer {
+ struct libcfs_ioctl_hdr pr_hdr;
+ __u32 pr_count;
+ __u32 pr_pad;
+ __u64 pr_nid;
+
+ union {
+ struct {
+ char cr_aliveness[LNET_MAX_STR_LEN];
+ __u32 cr_refcount;
+ __u32 cr_ni_peer_tx_credits;
+ __u32 cr_peer_tx_credits;
+ __u32 cr_peer_rtr_credits;
+ __u32 cr_peer_min_rtr_credits;
+ __u32 cr_peer_tx_qnob;
+ __u32 cr_ncpt;
+ } pr_peer_credits;
+ } pr_lnd_u;
+};
+
+struct lnet_ioctl_lnet_stats {
+ struct libcfs_ioctl_hdr st_hdr;
+ struct lnet_counters st_cntrs;
+};
+
+#endif /* LNET_DLC_H */
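struct lnet_ioctl_config_data funnels several configuration commands through one fixed prefix plus a union. As one hedged example, a caller issuing IOC_LIBCFS_ADD_ROUTE might fill it roughly as follows (field usage inferred from the header above; the handler code is not part of this hunk):

    #include <linux/string.h>	/* memset() */

    static void example_fill_add_route(struct lnet_ioctl_config_data *cfg,
                                       __u32 net, __u64 gw_nid, __u32 hops)
    {
        memset(cfg, 0, sizeof(*cfg));
        cfg->cfg_hdr.ioc_len = sizeof(*cfg);
        cfg->cfg_hdr.ioc_version = LIBCFS_IOCTL_VERSION2;
        cfg->cfg_net = net;		/* remote network */
        cfg->cfg_nid = gw_nid;		/* gateway NID */
        cfg->cfg_config_u.cfg_route.rtr_hop = hops;
        cfg->cfg_config_u.cfg_route.rtr_priority = 0;
    }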
diff --git a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
index b0f80b4..a5f1aec 100644
--- a/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
+++ b/drivers/staging/lustre/include/linux/lnet/lib-lnet.h
@@ -39,6 +39,7 @@
#include "api.h"
#include "lnet.h"
#include "lib-types.h"
+#include "lib-dlc.h"
extern lnet_t the_lnet; /* THE network */
@@ -64,6 +65,19 @@
/** exclusive lock */
#define LNET_LOCK_EX CFS_PERCPT_LOCK_EX
+static inline int lnet_is_route_alive(lnet_route_t *route)
+{
+ /* gateway is down */
+ if (!route->lr_gateway->lp_alive)
+ return 0;
+ /* no NI status, assume it's alive */
+ if ((route->lr_gateway->lp_ping_feats &
+ LNET_PING_FEAT_NI_STATUS) == 0)
+ return 1;
+ /* has NI status, check # down NIs */
+ return route->lr_downis == 0;
+}
+
static inline int lnet_is_wire_handle_none(lnet_handle_wire_t *wh)
{
return (wh->wh_interface_cookie == LNET_WIRE_HANDLE_COOKIE_NONE &&
@@ -408,6 +422,8 @@
}
void lnet_ni_free(lnet_ni_t *ni);
+lnet_ni_t *
+lnet_ni_alloc(__u32 net, struct cfs_expr_list *el, struct list_head *nilist);
static inline int
lnet_nid2peerhash(lnet_nid_t nid)
@@ -432,6 +448,8 @@
lnet_ni_t *lnet_net2ni_locked(__u32 net, int cpt);
lnet_ni_t *lnet_net2ni(__u32 net);
+extern int portal_rotor;
+
int lnet_init(void);
void lnet_fini(void);
@@ -445,11 +463,26 @@
void lnet_destroy_routes(void);
int lnet_get_route(int idx, __u32 *net, __u32 *hops,
lnet_nid_t *gateway, __u32 *alive, __u32 *priority);
+int lnet_get_net_config(int idx, __u32 *cpt_count, __u64 *nid,
+ int *peer_timeout, int *peer_tx_credits,
+ int *peer_rtr_cr, int *max_tx_credits,
+ struct lnet_ioctl_net_config *net_config);
+int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg);
+
void lnet_router_debugfs_init(void);
void lnet_router_debugfs_fini(void);
int lnet_rtrpools_alloc(int im_a_router);
-void lnet_rtrpools_free(void);
+void lnet_destroy_rtrbuf(lnet_rtrbuf_t *rb, int npages);
+int lnet_rtrpools_adjust(int tiny, int small, int large);
+int lnet_rtrpools_enable(void);
+void lnet_rtrpools_disable(void);
+void lnet_rtrpools_free(int keep_pools);
lnet_remotenet_t *lnet_find_net_locked(__u32 net);
+int lnet_dyn_add_ni(lnet_pid_t requested_pid, char *nets,
+ __s32 peer_timeout, __s32 peer_cr, __s32 peer_buf_cr,
+ __s32 credits);
+int lnet_dyn_del_ni(__u32 net);
+int lnet_clear_lazy_portal(struct lnet_ni *ni, int portal, char *reason);
int lnet_islocalnid(lnet_nid_t nid);
int lnet_islocalnet(__u32 net);
@@ -468,6 +501,8 @@
int lnet_send(lnet_nid_t nid, lnet_msg_t *msg, lnet_nid_t rtr_nid);
void lnet_return_tx_credits_locked(lnet_msg_t *msg);
void lnet_return_rx_credits_locked(lnet_msg_t *msg);
+void lnet_schedule_blocked_locked(lnet_rtrbufpool_t *rbp);
+void lnet_drop_routed_msgs_locked(struct list_head *list, int cpt);
/* portals functions */
/* portals attributes */
@@ -662,20 +697,24 @@
void lnet_router_ni_update_locked(lnet_peer_t *gw, __u32 net);
void lnet_swap_pinginfo(lnet_ping_info_t *info);
-int lnet_ping_target_init(void);
-void lnet_ping_target_fini(void);
-
int lnet_parse_ip2nets(char **networksp, char *ip2nets);
int lnet_parse_routes(char *route_str, int *im_a_router);
int lnet_parse_networks(struct list_head *nilist, char *networks);
+int lnet_net_unique(__u32 net, struct list_head *nilist);
int lnet_nid2peer_locked(lnet_peer_t **lpp, lnet_nid_t nid, int cpt);
lnet_peer_t *lnet_find_peer_locked(struct lnet_peer_table *ptable,
lnet_nid_t nid);
-void lnet_peer_tables_cleanup(void);
+void lnet_peer_tables_cleanup(lnet_ni_t *ni);
void lnet_peer_tables_destroy(void);
int lnet_peer_tables_create(void);
void lnet_debug_peer(lnet_nid_t nid);
+int lnet_get_peer_info(__u32 peer_index, __u64 *nid,
+ char aliveness[LNET_MAX_STR_LEN],
+ __u32 *cpt_iter, __u32 *refcount,
+ __u32 *ni_peer_tx_credits, __u32 *peer_tx_credits,
+ __u32 *peer_rtr_credits, __u32 *peer_min_rtr_credits,
+ __u32 *peer_tx_qnob);
static inline void
lnet_peer_set_alive(lnet_peer_t *lp)
diff --git a/drivers/staging/lustre/include/linux/lnet/lib-types.h b/drivers/staging/lustre/include/linux/lnet/lib-types.h
index d769c35..07b8db1 100644
--- a/drivers/staging/lustre/include/linux/lnet/lib-types.h
+++ b/drivers/staging/lustre/include/linux/lnet/lib-types.h
@@ -38,7 +38,6 @@
#include <linux/kthread.h>
#include <linux/uio.h>
#include <linux/types.h>
-#include <net/sock.h>
#include "types.h"
@@ -285,6 +284,7 @@
#define LNET_PING_FEAT_INVAL (0) /* no feature */
#define LNET_PING_FEAT_BASE (1 << 0) /* just a ping */
#define LNET_PING_FEAT_NI_STATUS (1 << 1) /* return NI status */
+#define LNET_PING_FEAT_RTE_DISABLED (1 << 2) /* Routing enabled */
#define LNET_PING_FEAT_MASK (LNET_PING_FEAT_BASE | \
LNET_PING_FEAT_NI_STATUS)
@@ -351,6 +351,8 @@
struct lnet_peer_table {
int pt_version; /* /proc validity stamp */
int pt_number; /* # peers extant */
+ /* # zombies to go to deathrow (and not there yet) */
+ int pt_zombies;
struct list_head pt_deathrow; /* zombie peers */
struct list_head *pt_hash; /* NID->peer hash */
};
@@ -394,7 +396,10 @@
struct list_head rbp_msgs; /* messages blocking
for a buffer */
int rbp_npages; /* # pages in each buffer */
- int rbp_nbuffers; /* # buffers */
+ /* requested number of buffers */
+ int rbp_req_nbuffers;
+ /* # buffers actually allocated */
+ int rbp_nbuffers;
int rbp_credits; /* # free buffers /
blocked messages */
int rbp_mincredits; /* low water mark */
@@ -408,7 +413,12 @@
#define LNET_PEER_HASHSIZE 503 /* prime! */
-#define LNET_NRBPOOLS 3 /* # different router buffer pools */
+#define LNET_TINY_BUF_IDX 0
+#define LNET_SMALL_BUF_IDX 1
+#define LNET_LARGE_BUF_IDX 2
+
+/* # different router buffer pools */
+#define LNET_NRBPOOLS (LNET_LARGE_BUF_IDX + 1)
enum {
/* Didn't match anything */
@@ -569,8 +579,6 @@
/* dying LND instances */
struct list_head ln_nis_zombie;
lnet_ni_t *ln_loni; /* the loopback NI */
- /* NI to wait for events in */
- lnet_ni_t *ln_eq_waitni;
/* remote networks with routes to them */
struct list_head *ln_remote_nets_hash;
@@ -600,8 +608,6 @@
struct mutex ln_api_mutex;
struct mutex ln_lnd_mutex;
- int ln_init; /* lnet_init()
- called? */
/* Have I called LNetNIInit myself? */
int ln_niinit_self;
/* LNetNIInit/LNetNIFini counter */
@@ -616,12 +622,24 @@
/* registered LNDs */
struct list_head ln_lnds;
- /* space for network names */
- char *ln_network_tokens;
- int ln_network_tokens_nob;
/* test protocol compatibility flags */
int ln_testprotocompat;
+ /*
+ * 0 - load the NIs from the mod params
+ * 1 - do not load the NIs from the mod params
+ * Reverse logic to ensure that other calls to LNetNIInit
+ * need no change
+ */
+ bool ln_nis_from_mod_params;
+
+ /*
+ * waitq for router checker. As long as there are no routes in
+ * the list, the router checker will sleep on this queue. When
+ * routes are added, the thread will wake up.
+ */
+ wait_queue_head_t ln_rc_waitq;
+
} lnet_t;
#endif
diff --git a/drivers/staging/lustre/include/linux/lnet/lnetctl.h b/drivers/staging/lustre/include/linux/lnet/lnetctl.h
index bdd69b2..4b64f62 100644
--- a/drivers/staging/lustre/include/linux/lnet/lnetctl.h
+++ b/drivers/staging/lustre/include/linux/lnet/lnetctl.h
@@ -10,10 +10,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with Portals; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
* header for lnet ioctl
*/
#ifndef _LNETCTL_H_
diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
index 2e7b5ca..98570b3 100644
--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
@@ -339,8 +339,6 @@
return -ENOMEM;
}
- memset(peer, 0, sizeof(*peer)); /* zero flags etc */
-
peer->ibp_ni = ni;
peer->ibp_nid = nid;
peer->ibp_error = 0;
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
index 49d716d..854814c 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c
@@ -1842,7 +1842,10 @@
unsigned long now = cfs_time_current();
ksock_peer_t *peer = NULL;
rwlock_t *glock = &ksocknal_data.ksnd_global_lock;
- lnet_process_id_t id = {.nid = nid, .pid = LUSTRE_SRV_LNET_PID};
+ lnet_process_id_t id = {
+ .nid = nid,
+ .pid = LNET_PID_LUSTRE,
+ };
read_lock(glock);
@@ -2187,7 +2190,7 @@
case IOC_LIBCFS_ADD_PEER:
id.nid = data->ioc_nid;
- id.pid = LUSTRE_SRV_LNET_PID;
+ id.pid = LNET_PID_LUSTRE;
return ksocknal_add_peer(ni, id,
data->ioc_u32[0], /* IP */
data->ioc_u32[1]); /* port */
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
index a4117ad..a176544 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd.h
@@ -19,10 +19,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with Portals; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
*/
#ifndef _SOCKLND_SOCKLND_H_
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
index 02da02d..efb7169 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_cb.c
@@ -19,9 +19,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with Portals; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include "socklnd.h"
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_modparams.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_modparams.c
index 77ce597..6329cbe 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_modparams.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_modparams.c
@@ -14,9 +14,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with Portals; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include "socklnd.h"
diff --git a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
index 70910ed..976a072 100644
--- a/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
+++ b/drivers/staging/lustre/lnet/klnds/socklnd/socklnd_proto.c
@@ -19,9 +19,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with Portals; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#include "socklnd.h"
diff --git a/drivers/staging/lustre/lnet/lnet/acceptor.c b/drivers/staging/lustre/lnet/lnet/acceptor.c
index e5f24ff..1452bb3 100644
--- a/drivers/staging/lustre/lnet/lnet/acceptor.c
+++ b/drivers/staging/lustre/lnet/lnet/acceptor.c
@@ -36,6 +36,7 @@
#define DEBUG_SUBSYSTEM S_LNET
#include <linux/completion.h>
+#include <net/sock.h>
#include "../../include/linux/lnet/lib-lnet.h"
static int accept_port = 988;
@@ -46,7 +47,9 @@
int pta_shutdown;
struct socket *pta_sock;
struct completion pta_signal;
-} lnet_acceptor_state;
+} lnet_acceptor_state = {
+ .pta_shutdown = 1
+};
int
lnet_acceptor_port(void)
@@ -204,8 +207,6 @@
}
EXPORT_SYMBOL(lnet_connect);
-/* Below is the code common for both kernel and MT user-space */
-
static int
lnet_accept(struct socket *sock, __u32 magic)
{
@@ -439,10 +440,15 @@
int
lnet_acceptor_start(void)
{
+ struct task_struct *task;
int rc;
long rc2;
long secure;
+ /* if acceptor is already running return immediately */
+ if (!lnet_acceptor_state.pta_shutdown)
+ return 0;
+
LASSERT(!lnet_acceptor_state.pta_sock);
rc = lnet_acceptor_get_tunables();
@@ -457,10 +463,10 @@
if (!lnet_count_acceptor_nis()) /* not required */
return 0;
- rc2 = PTR_ERR(kthread_run(lnet_acceptor,
- (void *)(ulong_ptr_t)secure,
- "acceptor_%03ld", secure));
- if (IS_ERR_VALUE(rc2)) {
+ task = kthread_run(lnet_acceptor, (void *)(ulong_ptr_t)secure,
+ "acceptor_%03ld", secure);
+ if (IS_ERR(task)) {
+ rc2 = PTR_ERR(task);
CERROR("Can't start acceptor thread: %ld\n", rc2);
return -ESRCH;
@@ -483,11 +489,17 @@
void
lnet_acceptor_stop(void)
{
- if (!lnet_acceptor_state.pta_sock) /* not running */
+ struct sock *sk;
+
+ if (lnet_acceptor_state.pta_shutdown) /* not running */
return;
lnet_acceptor_state.pta_shutdown = 1;
- wake_up_all(sk_sleep(lnet_acceptor_state.pta_sock->sk));
+
+ sk = lnet_acceptor_state.pta_sock->sk;
+
+ /* wake up any sleepers using a safe method */
+ sk->sk_state_change(sk);
/* block until acceptor signals exit */
wait_for_completion(&lnet_acceptor_state.pta_signal);
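The acceptor start path above also swaps IS_ERR_VALUE(PTR_ERR(...)) for the conventional pointer test: kthread_run() returns either a valid task_struct pointer or an ERR_PTR()-encoded errno. A minimal sketch of the idiom (hypothetical wrapper, not from the patch):

    #include <linux/kthread.h>
    #include <linux/err.h>

    static int example_start_thread(int (*fn)(void *), void *arg)
    {
        struct task_struct *task = kthread_run(fn, arg, "example_thread");

        if (IS_ERR(task))
            return PTR_ERR(task);	/* propagate the encoded errno */
        return 0;			/* thread is running */
    }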
diff --git a/drivers/staging/lustre/lnet/lnet/api-ni.c b/drivers/staging/lustre/lnet/lnet/api-ni.c
index 58b30f1..3ecc96a 100644
--- a/drivers/staging/lustre/lnet/lnet/api-ni.c
+++ b/drivers/staging/lustre/lnet/lnet/api-ni.c
@@ -39,6 +39,7 @@
#include <linux/ktime.h>
#include "../../include/linux/lnet/lib-lnet.h"
+#include "../../include/linux/lnet/lib-dlc.h"
#define D_LNI D_CONSOLE
@@ -97,6 +98,7 @@
{
spin_lock_init(&the_lnet.ln_eq_wait_lock);
init_waitqueue_head(&the_lnet.ln_eq_waitq);
+ init_waitqueue_head(&the_lnet.ln_rc_waitq);
mutex_init(&the_lnet.ln_lnd_mutex);
mutex_init(&the_lnet.ln_api_mutex);
}
@@ -289,7 +291,6 @@
{
mutex_lock(&the_lnet.ln_lnd_mutex);
- LASSERT(the_lnet.ln_init);
LASSERT(libcfs_isknown_lnd(lnd->lnd_type));
LASSERT(!lnet_find_lnd_by_type(lnd->lnd_type));
@@ -307,7 +308,6 @@
{
mutex_lock(&the_lnet.ln_lnd_mutex);
- LASSERT(the_lnet.ln_init);
LASSERT(lnet_find_lnd_by_type(lnd->lnd_type) == lnd);
LASSERT(!lnd->lnd_refcount);
@@ -524,7 +524,7 @@
list_add(&lh->lh_hash_chain, &rec->rec_lh_hash[hash]);
}
-int lnet_unprepare(void);
+static int lnet_unprepare(void);
static int
lnet_prepare(lnet_pid_t requested_pid)
@@ -533,6 +533,11 @@
struct lnet_res_container **recs;
int rc = 0;
+ if (requested_pid == LNET_PID_ANY) {
+ /* Don't instantiate LNET just for me */
+ return -ENETDOWN;
+ }
+
LASSERT(!the_lnet.ln_refcount);
the_lnet.ln_routing = 0;
@@ -605,7 +610,7 @@
return rc;
}
-int
+static int
lnet_unprepare(void)
{
/*
@@ -638,7 +643,7 @@
lnet_msg_containers_destroy();
lnet_peer_tables_destroy();
- lnet_rtrpools_free();
+ lnet_rtrpools_free(0);
if (the_lnet.ln_counters) {
cfs_percpt_free(the_lnet.ln_counters);
@@ -819,6 +824,229 @@
return count;
}
+static lnet_ping_info_t *
+lnet_ping_info_create(int num_ni)
+{
+ lnet_ping_info_t *ping_info;
+ unsigned int infosz;
+
+ infosz = offsetof(lnet_ping_info_t, pi_ni[num_ni]);
+ LIBCFS_ALLOC(ping_info, infosz);
+ if (!ping_info) {
+ CERROR("Can't allocate ping info[%d]\n", num_ni);
+ return NULL;
+ }
+
+ ping_info->pi_nnis = num_ni;
+ ping_info->pi_pid = the_lnet.ln_pid;
+ ping_info->pi_magic = LNET_PROTO_PING_MAGIC;
+ ping_info->pi_features = LNET_PING_FEAT_NI_STATUS;
+
+ return ping_info;
+}
+
+static inline int
+lnet_get_ni_count(void)
+{
+ struct lnet_ni *ni;
+ int count = 0;
+
+ lnet_net_lock(0);
+
+ list_for_each_entry(ni, &the_lnet.ln_nis, ni_list)
+ count++;
+
+ lnet_net_unlock(0);
+
+ return count;
+}
+
+static inline void
+lnet_ping_info_free(lnet_ping_info_t *pinfo)
+{
+ LIBCFS_FREE(pinfo,
+ offsetof(lnet_ping_info_t,
+ pi_ni[pinfo->pi_nnis]));
+}
+
+static void
+lnet_ping_info_destroy(void)
+{
+ struct lnet_ni *ni;
+
+ lnet_net_lock(LNET_LOCK_EX);
+
+ list_for_each_entry(ni, &the_lnet.ln_nis, ni_list) {
+ lnet_ni_lock(ni);
+ ni->ni_status = NULL;
+ lnet_ni_unlock(ni);
+ }
+
+ lnet_ping_info_free(the_lnet.ln_ping_info);
+ the_lnet.ln_ping_info = NULL;
+
+ lnet_net_unlock(LNET_LOCK_EX);
+}
+
+static void
+lnet_ping_event_handler(lnet_event_t *event)
+{
+ lnet_ping_info_t *pinfo = event->md.user_ptr;
+
+ if (event->unlinked)
+ pinfo->pi_features = LNET_PING_FEAT_INVAL;
+}
+
+static int
+lnet_ping_info_setup(lnet_ping_info_t **ppinfo, lnet_handle_md_t *md_handle,
+ int ni_count, bool set_eq)
+{
+ lnet_process_id_t id = {LNET_NID_ANY, LNET_PID_ANY};
+ lnet_handle_me_t me_handle;
+ lnet_md_t md = { NULL };
+ int rc, rc2;
+
+ if (set_eq) {
+ rc = LNetEQAlloc(0, lnet_ping_event_handler,
+ &the_lnet.ln_ping_target_eq);
+ if (rc) {
+ CERROR("Can't allocate ping EQ: %d\n", rc);
+ return rc;
+ }
+ }
+
+ *ppinfo = lnet_ping_info_create(ni_count);
+ if (!*ppinfo) {
+ rc = -ENOMEM;
+ goto failed_0;
+ }
+
+ rc = LNetMEAttach(LNET_RESERVED_PORTAL, id,
+ LNET_PROTO_PING_MATCHBITS, 0,
+ LNET_UNLINK, LNET_INS_AFTER,
+ &me_handle);
+ if (rc) {
+ CERROR("Can't create ping ME: %d\n", rc);
+ goto failed_1;
+ }
+
+ /* initialize md content */
+ md.start = *ppinfo;
+ md.length = offsetof(lnet_ping_info_t,
+ pi_ni[(*ppinfo)->pi_nnis]);
+ md.threshold = LNET_MD_THRESH_INF;
+ md.max_size = 0;
+ md.options = LNET_MD_OP_GET | LNET_MD_TRUNCATE |
+ LNET_MD_MANAGE_REMOTE;
+ md.user_ptr = NULL;
+ md.eq_handle = the_lnet.ln_ping_target_eq;
+ md.user_ptr = *ppinfo;
+
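+	/* LNET_RETAIN keeps the MD alive across remote GETs */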
+ rc = LNetMDAttach(me_handle, md, LNET_RETAIN, md_handle);
+ if (rc) {
+ CERROR("Can't attach ping MD: %d\n", rc);
+ goto failed_2;
+ }
+
+ return 0;
+
+failed_2:
+ rc2 = LNetMEUnlink(me_handle);
+ LASSERT(!rc2);
+failed_1:
+ lnet_ping_info_free(*ppinfo);
+ *ppinfo = NULL;
+failed_0:
+ if (set_eq)
+ LNetEQFree(the_lnet.ln_ping_target_eq);
+ return rc;
+}
+
+static void
+lnet_ping_md_unlink(lnet_ping_info_t *pinfo, lnet_handle_md_t *md_handle)
+{
+ sigset_t blocked = cfs_block_allsigs();
+
+ LNetMDUnlink(*md_handle);
+ LNetInvalidateHandle(md_handle);
+
+	/* NB the MD may still be busy; the unlink above only starts the process */
+ while (pinfo->pi_features != LNET_PING_FEAT_INVAL) {
+ CDEBUG(D_NET, "Still waiting for ping MD to unlink\n");
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(cfs_time_seconds(1));
+ }
+
+ cfs_restore_sigs(blocked);
+}
+
+static void
+lnet_ping_info_install_locked(lnet_ping_info_t *ping_info)
+{
+ lnet_ni_status_t *ns;
+ lnet_ni_t *ni;
+ int i = 0;
+
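+	/* point each NI's status slot at its entry in the new ping info */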
+ list_for_each_entry(ni, &the_lnet.ln_nis, ni_list) {
+ LASSERT(i < ping_info->pi_nnis);
+
+ ns = &ping_info->pi_ni[i];
+
+ ns->ns_nid = ni->ni_nid;
+
+ lnet_ni_lock(ni);
+ ns->ns_status = (ni->ni_status) ?
+ ni->ni_status->ns_status : LNET_NI_STATUS_UP;
+ ni->ni_status = ns;
+ lnet_ni_unlock(ni);
+
+ i++;
+ }
+}
+
+static void
+lnet_ping_target_update(lnet_ping_info_t *pinfo, lnet_handle_md_t md_handle)
+{
+ lnet_ping_info_t *old_pinfo = NULL;
+ lnet_handle_md_t old_md;
+
+ /* switch the NIs to point to the new ping info created */
+ lnet_net_lock(LNET_LOCK_EX);
+
+ if (!the_lnet.ln_routing)
+ pinfo->pi_features |= LNET_PING_FEAT_RTE_DISABLED;
+ lnet_ping_info_install_locked(pinfo);
+
+ if (the_lnet.ln_ping_info) {
+ old_pinfo = the_lnet.ln_ping_info;
+ old_md = the_lnet.ln_ping_target_md;
+ }
+ the_lnet.ln_ping_target_md = md_handle;
+ the_lnet.ln_ping_info = pinfo;
+
+ lnet_net_unlock(LNET_LOCK_EX);
+
+ if (old_pinfo) {
+ /* unlink the old ping info */
+ lnet_ping_md_unlink(old_pinfo, &old_md);
+ lnet_ping_info_free(old_pinfo);
+ }
+}
+
+static void
+lnet_ping_target_fini(void)
+{
+ int rc;
+
+ lnet_ping_md_unlink(the_lnet.ln_ping_info,
+ &the_lnet.ln_ping_target_md);
+
+ rc = LNetEQFree(the_lnet.ln_ping_target_eq);
+ LASSERT(!rc);
+
+ lnet_ping_info_destroy();
+}
+
static int
lnet_ni_tq_credits(lnet_ni_t *ni)
{
@@ -837,64 +1065,26 @@
}
static void
-lnet_shutdown_lndnis(void)
+lnet_ni_unlink_locked(lnet_ni_t *ni)
+{
+ if (!list_empty(&ni->ni_cptlist)) {
+ list_del_init(&ni->ni_cptlist);
+ lnet_ni_decref_locked(ni, 0);
+ }
+
+	/* move it to the zombie list where nobody can find it anymore */
+ LASSERT(!list_empty(&ni->ni_list));
+ list_move(&ni->ni_list, &the_lnet.ln_nis_zombie);
+ lnet_ni_decref_locked(ni, 0); /* drop ln_nis' ref */
+}
+
+static void
+lnet_clear_zombies_nis_locked(void)
{
int i;
int islo;
lnet_ni_t *ni;
- /* NB called holding the global mutex */
-
- /* All quiet on the API front */
- LASSERT(!the_lnet.ln_shutdown);
- LASSERT(!the_lnet.ln_refcount);
- LASSERT(list_empty(&the_lnet.ln_nis_zombie));
-
- lnet_net_lock(LNET_LOCK_EX);
- the_lnet.ln_shutdown = 1; /* flag shutdown */
-
- /* Unlink NIs from the global table */
- while (!list_empty(&the_lnet.ln_nis)) {
- ni = list_entry(the_lnet.ln_nis.next,
- lnet_ni_t, ni_list);
- /* move it to zombie list and nobody can find it anymore */
- list_move(&ni->ni_list, &the_lnet.ln_nis_zombie);
- lnet_ni_decref_locked(ni, 0); /* drop ln_nis' ref */
-
- if (!list_empty(&ni->ni_cptlist)) {
- list_del_init(&ni->ni_cptlist);
- lnet_ni_decref_locked(ni, 0);
- }
- }
-
- /* Drop the cached eqwait NI. */
- if (the_lnet.ln_eq_waitni) {
- lnet_ni_decref_locked(the_lnet.ln_eq_waitni, 0);
- the_lnet.ln_eq_waitni = NULL;
- }
-
- /* Drop the cached loopback NI. */
- if (the_lnet.ln_loni) {
- lnet_ni_decref_locked(the_lnet.ln_loni, 0);
- the_lnet.ln_loni = NULL;
- }
-
- lnet_net_unlock(LNET_LOCK_EX);
-
- /*
- * Clear lazy portals and drop delayed messages which hold refs
- * on their lnet_msg_t::msg_rxpeer
- */
- for (i = 0; i < the_lnet.ln_nportals; i++)
- LNetClearLazyPortal(i);
-
- /*
- * Clear the peer table and wait for all peers to go (they hold refs on
- * their NIs)
- */
- lnet_peer_tables_cleanup();
-
- lnet_net_lock(LNET_LOCK_EX);
/*
* Now wait for the NIs I just nuked to show up on ln_zombie_nis
* and shut them down in guaranteed thread context
@@ -949,165 +1139,255 @@
lnet_net_lock(LNET_LOCK_EX);
}
+}
- the_lnet.ln_shutdown = 0;
+static void
+lnet_shutdown_lndnis(void)
+{
+ lnet_ni_t *ni;
+ int i;
+
+ /* NB called holding the global mutex */
+
+ /* All quiet on the API front */
+ LASSERT(!the_lnet.ln_shutdown);
+ LASSERT(!the_lnet.ln_refcount);
+ LASSERT(list_empty(&the_lnet.ln_nis_zombie));
+
+ lnet_net_lock(LNET_LOCK_EX);
+ the_lnet.ln_shutdown = 1; /* flag shutdown */
+
+ /* Unlink NIs from the global table */
+ while (!list_empty(&the_lnet.ln_nis)) {
+ ni = list_entry(the_lnet.ln_nis.next,
+ lnet_ni_t, ni_list);
+ lnet_ni_unlink_locked(ni);
+ }
+
+ /* Drop the cached loopback NI. */
+ if (the_lnet.ln_loni) {
+ lnet_ni_decref_locked(the_lnet.ln_loni, 0);
+ the_lnet.ln_loni = NULL;
+ }
+
lnet_net_unlock(LNET_LOCK_EX);
- if (the_lnet.ln_network_tokens) {
- LIBCFS_FREE(the_lnet.ln_network_tokens,
- the_lnet.ln_network_tokens_nob);
- the_lnet.ln_network_tokens = NULL;
- }
+ /*
+ * Clear lazy portals and drop delayed messages which hold refs
+ * on their lnet_msg_t::msg_rxpeer
+ */
+ for (i = 0; i < the_lnet.ln_nportals; i++)
+ LNetClearLazyPortal(i);
+
+ /*
+ * Clear the peer table and wait for all peers to go (they hold refs on
+ * their NIs)
+ */
+ lnet_peer_tables_cleanup(NULL);
+
+ lnet_net_lock(LNET_LOCK_EX);
+
+ lnet_clear_zombies_nis_locked();
+ the_lnet.ln_shutdown = 0;
+ lnet_net_unlock(LNET_LOCK_EX);
+}
+
+/* shut down the NI and release its refcount */
+static void
+lnet_shutdown_lndni(struct lnet_ni *ni)
+{
+ int i;
+
+ lnet_net_lock(LNET_LOCK_EX);
+ lnet_ni_unlink_locked(ni);
+ lnet_net_unlock(LNET_LOCK_EX);
+
+ /* clear messages for this NI on the lazy portal */
+ for (i = 0; i < the_lnet.ln_nportals; i++)
+ lnet_clear_lazy_portal(ni, i, "Shutting down NI");
+
+ /* Do peer table cleanup for this ni */
+ lnet_peer_tables_cleanup(ni);
+
+ lnet_net_lock(LNET_LOCK_EX);
+ lnet_clear_zombies_nis_locked();
+ lnet_net_unlock(LNET_LOCK_EX);
}
static int
-lnet_startup_lndnis(void)
+lnet_startup_lndni(struct lnet_ni *ni, __s32 peer_timeout,
+ __s32 peer_cr, __s32 peer_buf_cr, __s32 credits)
{
+ int rc = -EINVAL;
+ int lnd_type;
lnd_t *lnd;
- struct lnet_ni *ni;
struct lnet_tx_queue *tq;
- struct list_head nilist;
int i;
- int rc = 0;
- __u32 lnd_type;
- int nicount = 0;
- char *nets = lnet_get_networks();
- INIT_LIST_HEAD(&nilist);
+ lnd_type = LNET_NETTYP(LNET_NIDNET(ni->ni_nid));
- if (!nets)
- goto failed;
+ LASSERT(libcfs_isknown_lnd(lnd_type));
- rc = lnet_parse_networks(&nilist, nets);
- if (rc)
- goto failed;
+ if (lnd_type == CIBLND || lnd_type == OPENIBLND ||
+ lnd_type == IIBLND || lnd_type == VIBLND) {
+ CERROR("LND %s obsoleted\n", libcfs_lnd2str(lnd_type));
+ goto failed0;
+ }
- while (!list_empty(&nilist)) {
- ni = list_entry(nilist.next, lnet_ni_t, ni_list);
- lnd_type = LNET_NETTYP(LNET_NIDNET(ni->ni_nid));
-
- LASSERT(libcfs_isknown_lnd(lnd_type));
-
- if (lnd_type == CIBLND ||
- lnd_type == OPENIBLND ||
- lnd_type == IIBLND ||
- lnd_type == VIBLND) {
- CERROR("LND %s obsoleted\n",
- libcfs_lnd2str(lnd_type));
- goto failed;
+ /* Make sure this new NI is unique. */
+ lnet_net_lock(LNET_LOCK_EX);
+ rc = lnet_net_unique(LNET_NIDNET(ni->ni_nid), &the_lnet.ln_nis);
+ lnet_net_unlock(LNET_LOCK_EX);
+ if (!rc) {
+ if (lnd_type == LOLND) {
+ lnet_ni_free(ni);
+ return 0;
}
- mutex_lock(&the_lnet.ln_lnd_mutex);
- lnd = lnet_find_lnd_by_type(lnd_type);
+ CERROR("Net %s is not unique\n",
+ libcfs_net2str(LNET_NIDNET(ni->ni_nid)));
+ rc = -EEXIST;
+ goto failed0;
+ }
+ mutex_lock(&the_lnet.ln_lnd_mutex);
+ lnd = lnet_find_lnd_by_type(lnd_type);
+
+ if (!lnd) {
+ mutex_unlock(&the_lnet.ln_lnd_mutex);
+ rc = request_module("%s", libcfs_lnd2modname(lnd_type));
+ mutex_lock(&the_lnet.ln_lnd_mutex);
+
+ lnd = lnet_find_lnd_by_type(lnd_type);
if (!lnd) {
mutex_unlock(&the_lnet.ln_lnd_mutex);
- rc = request_module("%s",
- libcfs_lnd2modname(lnd_type));
- mutex_lock(&the_lnet.ln_lnd_mutex);
-
- lnd = lnet_find_lnd_by_type(lnd_type);
- if (!lnd) {
- mutex_unlock(&the_lnet.ln_lnd_mutex);
- CERROR("Can't load LND %s, module %s, rc=%d\n",
- libcfs_lnd2str(lnd_type),
- libcfs_lnd2modname(lnd_type), rc);
- goto failed;
- }
+ CERROR("Can't load LND %s, module %s, rc=%d\n",
+ libcfs_lnd2str(lnd_type),
+ libcfs_lnd2modname(lnd_type), rc);
+ rc = -EINVAL;
+ goto failed0;
}
+ }
+ lnet_net_lock(LNET_LOCK_EX);
+ lnd->lnd_refcount++;
+ lnet_net_unlock(LNET_LOCK_EX);
+
+ ni->ni_lnd = lnd;
+
+ rc = lnd->lnd_startup(ni);
+
+ mutex_unlock(&the_lnet.ln_lnd_mutex);
+
+ if (rc) {
+ LCONSOLE_ERROR_MSG(0x105, "Error %d starting up LNI %s\n",
+ rc, libcfs_lnd2str(lnd->lnd_type));
lnet_net_lock(LNET_LOCK_EX);
- lnd->lnd_refcount++;
+ lnd->lnd_refcount--;
lnet_net_unlock(LNET_LOCK_EX);
+ goto failed0;
+ }
- ni->ni_lnd = lnd;
+ /*
+ * If given some LND tunable parameters, parse those now to
+ * override the values in the NI structure.
+ */
+ if (peer_buf_cr >= 0)
+ ni->ni_peerrtrcredits = peer_buf_cr;
+ if (peer_timeout >= 0)
+ ni->ni_peertimeout = peer_timeout;
+ /*
+ * TODO
+ * Note: For now, don't allow the user to change
+ * peertxcredits as this number is used in the
+ * IB LND to control queue depth.
+ * if (peer_cr != -1)
+ * ni->ni_peertxcredits = peer_cr;
+ */
+ if (credits >= 0)
+ ni->ni_maxtxcredits = credits;
- rc = lnd->lnd_startup(ni);
+ LASSERT(ni->ni_peertimeout <= 0 || lnd->lnd_query);
- mutex_unlock(&the_lnet.ln_lnd_mutex);
-
- if (rc) {
- LCONSOLE_ERROR_MSG(0x105, "Error %d starting up LNI %s\n",
- rc, libcfs_lnd2str(lnd->lnd_type));
- lnet_net_lock(LNET_LOCK_EX);
- lnd->lnd_refcount--;
- lnet_net_unlock(LNET_LOCK_EX);
- goto failed;
- }
-
- LASSERT(ni->ni_peertimeout <= 0 || lnd->lnd_query);
-
- list_del(&ni->ni_list);
-
- lnet_net_lock(LNET_LOCK_EX);
- /* refcount for ln_nis */
+ lnet_net_lock(LNET_LOCK_EX);
+ /* refcount for ln_nis */
+ lnet_ni_addref_locked(ni, 0);
+ list_add_tail(&ni->ni_list, &the_lnet.ln_nis);
+ if (ni->ni_cpts) {
lnet_ni_addref_locked(ni, 0);
- list_add_tail(&ni->ni_list, &the_lnet.ln_nis);
- if (ni->ni_cpts) {
- list_add_tail(&ni->ni_cptlist,
- &the_lnet.ln_nis_cpt);
- lnet_ni_addref_locked(ni, 0);
- }
-
- lnet_net_unlock(LNET_LOCK_EX);
-
- if (lnd->lnd_type == LOLND) {
- lnet_ni_addref(ni);
- LASSERT(!the_lnet.ln_loni);
- the_lnet.ln_loni = ni;
- continue;
- }
-
- if (!ni->ni_peertxcredits || !ni->ni_maxtxcredits) {
- LCONSOLE_ERROR_MSG(0x107, "LNI %s has no %scredits\n",
- libcfs_lnd2str(lnd->lnd_type),
- !ni->ni_peertxcredits ?
- "" : "per-peer ");
- goto failed;
- }
-
- cfs_percpt_for_each(tq, i, ni->ni_tx_queues) {
- tq->tq_credits_min =
- tq->tq_credits_max =
- tq->tq_credits = lnet_ni_tq_credits(ni);
- }
-
- CDEBUG(D_LNI, "Added LNI %s [%d/%d/%d/%d]\n",
- libcfs_nid2str(ni->ni_nid), ni->ni_peertxcredits,
- lnet_ni_tq_credits(ni) * LNET_CPT_NUMBER,
- ni->ni_peerrtrcredits, ni->ni_peertimeout);
-
- nicount++;
+ list_add_tail(&ni->ni_cptlist, &the_lnet.ln_nis_cpt);
}
- if (the_lnet.ln_eq_waitni && nicount > 1) {
- lnd_type = the_lnet.ln_eq_waitni->ni_lnd->lnd_type;
- LCONSOLE_ERROR_MSG(0x109, "LND %s can only run single-network\n",
- libcfs_lnd2str(lnd_type));
- goto failed;
+ lnet_net_unlock(LNET_LOCK_EX);
+
+ if (lnd->lnd_type == LOLND) {
+ lnet_ni_addref(ni);
+ LASSERT(!the_lnet.ln_loni);
+ the_lnet.ln_loni = ni;
+ return 0;
}
+ if (!ni->ni_peertxcredits || !ni->ni_maxtxcredits) {
+ LCONSOLE_ERROR_MSG(0x107, "LNI %s has no %scredits\n",
+ libcfs_lnd2str(lnd->lnd_type),
+ !ni->ni_peertxcredits ?
+ "" : "per-peer ");
+ /*
+	 * shut down the NI; if we get here it must already have
+	 * been started
+ */
+ lnet_shutdown_lndni(ni);
+ return -EINVAL;
+ }
+
+ cfs_percpt_for_each(tq, i, ni->ni_tx_queues) {
+ tq->tq_credits_min =
+ tq->tq_credits_max =
+ tq->tq_credits = lnet_ni_tq_credits(ni);
+ }
+
+ CDEBUG(D_LNI, "Added LNI %s [%d/%d/%d/%d]\n",
+ libcfs_nid2str(ni->ni_nid), ni->ni_peertxcredits,
+ lnet_ni_tq_credits(ni) * LNET_CPT_NUMBER,
+ ni->ni_peerrtrcredits, ni->ni_peertimeout);
+
return 0;
+failed0:
+ lnet_ni_free(ni);
+ return rc;
+}
- failed:
- lnet_shutdown_lndnis();
+static int
+lnet_startup_lndnis(struct list_head *nilist)
+{
+ struct lnet_ni *ni;
+ int rc;
+ int ni_count = 0;
- while (!list_empty(&nilist)) {
- ni = list_entry(nilist.next, lnet_ni_t, ni_list);
+ while (!list_empty(nilist)) {
+ ni = list_entry(nilist->next, lnet_ni_t, ni_list);
list_del(&ni->ni_list);
- lnet_ni_free(ni);
+ rc = lnet_startup_lndni(ni, -1, -1, -1, -1);
+
+ if (rc < 0)
+ goto failed;
+
+ ni_count++;
}
- return -ENETDOWN;
+ return ni_count;
+failed:
+ lnet_shutdown_lndnis();
+
+ return rc;
}
/**
* Initialize LNet library.
*
- * Only userspace program needs to call this function - it's automatically
- * called in the kernel at module loading time. Caller has to call lnet_fini()
- * after a call to lnet_init(), if and only if the latter returned 0. It must
- * be called exactly once.
+ * Automatically called at module loading time. Caller has to call
+ * lnet_exit() after a call to lnet_init(), if and only if the
+ * latter returned 0. It must be called exactly once.
*
* \return 0 on success, and -ve on failures.
*/
@@ -1117,7 +1397,6 @@
int rc;
lnet_assert_wire_constants();
- LASSERT(!the_lnet.ln_init);
memset(&the_lnet, 0, sizeof(the_lnet));
@@ -1143,7 +1422,6 @@
}
the_lnet.ln_refcount = 0;
- the_lnet.ln_init = 1;
LNetInvalidateHandle(&the_lnet.ln_rc_eqh);
INIT_LIST_HEAD(&the_lnet.ln_lnds);
INIT_LIST_HEAD(&the_lnet.ln_rcd_zombie);
@@ -1173,30 +1451,23 @@
/**
* Finalize LNet library.
*
- * Only userspace program needs to call this function. It can be called
- * at most once.
- *
* \pre lnet_init() called with success.
* \pre All LNet users called LNetNIFini() for matching LNetNIInit() calls.
*/
void
lnet_fini(void)
{
- LASSERT(the_lnet.ln_init);
LASSERT(!the_lnet.ln_refcount);
while (!list_empty(&the_lnet.ln_lnds))
lnet_unregister_lnd(list_entry(the_lnet.ln_lnds.next,
lnd_t, lnd_list));
lnet_destroy_locks();
-
- the_lnet.ln_init = 0;
}
/**
* Set LNet PID and start LNet interfaces, routing, and forwarding.
*
- * Userspace program should call this after a successful call to lnet_init().
* Users must call this function at least once before any other functions.
* For each successful call there must be a corresponding call to
* LNetNIFini(). For subsequent calls to LNetNIInit(), \a requested_pid is
@@ -1214,79 +1485,113 @@
{
int im_a_router = 0;
int rc;
+ int ni_count;
+ lnet_ping_info_t *pinfo;
+ lnet_handle_md_t md_handle;
+ struct list_head net_head;
+
+ INIT_LIST_HEAD(&net_head);
mutex_lock(&the_lnet.ln_api_mutex);
- LASSERT(the_lnet.ln_init);
CDEBUG(D_OTHER, "refs %d\n", the_lnet.ln_refcount);
if (the_lnet.ln_refcount > 0) {
rc = the_lnet.ln_refcount++;
- goto out;
- }
-
- if (requested_pid == LNET_PID_ANY) {
- /* Don't instantiate LNET just for me */
- rc = -ENETDOWN;
- goto failed0;
+ mutex_unlock(&the_lnet.ln_api_mutex);
+ return rc;
}
rc = lnet_prepare(requested_pid);
- if (rc)
- goto failed0;
+ if (rc) {
+ mutex_unlock(&the_lnet.ln_api_mutex);
+ return rc;
+ }
- rc = lnet_startup_lndnis();
- if (rc)
- goto failed1;
+ /* Add in the loopback network */
+ if (!lnet_ni_alloc(LNET_MKNET(LOLND, 0), NULL, &net_head)) {
+ rc = -ENOMEM;
+ goto err_empty_list;
+ }
- rc = lnet_parse_routes(lnet_get_routes(), &im_a_router);
- if (rc)
- goto failed2;
+ /*
+ * If LNet is being initialized via DLC it is possible
+ * that the user requests not to load module parameters (ones which
+ * are supported by DLC) on initialization. Therefore, make sure not
+ * to load networks, routes and forwarding from module parameters
+	 * in this case. On cleanup after a failure, only clean up
+	 * routes if they have been loaded.
+ */
+ if (!the_lnet.ln_nis_from_mod_params) {
+ rc = lnet_parse_networks(&net_head, lnet_get_networks());
+ if (rc < 0)
+ goto err_empty_list;
+ }
- rc = lnet_check_routes();
- if (rc)
- goto failed2;
+ ni_count = lnet_startup_lndnis(&net_head);
+ if (ni_count < 0) {
+ rc = ni_count;
+ goto err_empty_list;
+ }
- rc = lnet_rtrpools_alloc(im_a_router);
- if (rc)
- goto failed2;
+ if (!the_lnet.ln_nis_from_mod_params) {
+ rc = lnet_parse_routes(lnet_get_routes(), &im_a_router);
+ if (rc)
+ goto err_shutdown_lndnis;
+
+ rc = lnet_check_routes();
+ if (rc)
+			goto err_destroy_routes;
+
+ rc = lnet_rtrpools_alloc(im_a_router);
+ if (rc)
+			goto err_destroy_routes;
+ }
rc = lnet_acceptor_start();
if (rc)
- goto failed2;
+		goto err_destroy_routes;
the_lnet.ln_refcount = 1;
/* Now I may use my own API functions... */
- /*
- * NB router checker needs the_lnet.ln_ping_info in
- * lnet_router_checker -> lnet_update_ni_status_locked
- */
- rc = lnet_ping_target_init();
+ rc = lnet_ping_info_setup(&pinfo, &md_handle, ni_count, true);
if (rc)
- goto failed3;
+ goto err_acceptor_stop;
+
+ lnet_ping_target_update(pinfo, md_handle);
rc = lnet_router_checker_start();
if (rc)
- goto failed4;
+ goto err_stop_ping;
lnet_router_debugfs_init();
- goto out;
- failed4:
+ mutex_unlock(&the_lnet.ln_api_mutex);
+
+ return 0;
+
+err_stop_ping:
lnet_ping_target_fini();
- failed3:
+err_acceptor_stop:
the_lnet.ln_refcount = 0;
lnet_acceptor_stop();
- failed2:
- lnet_destroy_routes();
+err_destroy_routes:
+ if (!the_lnet.ln_nis_from_mod_params)
+ lnet_destroy_routes();
+err_shutdown_lndnis:
lnet_shutdown_lndnis();
- failed1:
+err_empty_list:
lnet_unprepare();
- failed0:
LASSERT(rc < 0);
- out:
mutex_unlock(&the_lnet.ln_api_mutex);
+ while (!list_empty(&net_head)) {
+ struct lnet_ni *ni;
+
+ ni = list_entry(net_head.next, struct lnet_ni, ni_list);
+ list_del_init(&ni->ni_list);
+ lnet_ni_free(ni);
+ }
return rc;
}
EXPORT_SYMBOL(LNetNIInit);
@@ -1305,7 +1610,6 @@
{
mutex_lock(&the_lnet.ln_api_mutex);
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
if (the_lnet.ln_refcount != 1) {
@@ -1332,6 +1636,220 @@
EXPORT_SYMBOL(LNetNIFini);
/**
+ * Grab the data from the NI structure and fill in the output
+ * parameters.
+ *
+ * \param[in] ni network interface structure
+ * \param[out] cpt_count the number of cpts the ni is on
+ * \param[out] nid Network Interface ID
+ * \param[out] peer_timeout NI peer timeout
+ * \param[out] peer_tx_credits NI peer transmit credits
+ * \param[out] peer_rtr_credits NI peer router credits
+ * \param[out] max_tx_credits NI max transmit credits
+ * \param[out] net_config Network configuration
+ */
+static void
+lnet_fill_ni_info(struct lnet_ni *ni, __u32 *cpt_count, __u64 *nid,
+ int *peer_timeout, int *peer_tx_credits,
+ int *peer_rtr_credits, int *max_tx_credits,
+ struct lnet_ioctl_net_config *net_config)
+{
+ int i;
+
+ if (!ni)
+ return;
+
+ if (!net_config)
+ return;
+
+ BUILD_BUG_ON(ARRAY_SIZE(ni->ni_interfaces) !=
+ ARRAY_SIZE(net_config->ni_interfaces));
+
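+	/* interface names are packed from slot 0; stop at the first unset entry */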
+ for (i = 0; i < ARRAY_SIZE(ni->ni_interfaces); i++) {
+ if (!ni->ni_interfaces[i])
+ break;
+
+ strncpy(net_config->ni_interfaces[i],
+ ni->ni_interfaces[i],
+ sizeof(net_config->ni_interfaces[i]));
+ }
+
+ *nid = ni->ni_nid;
+ *peer_timeout = ni->ni_peertimeout;
+ *peer_tx_credits = ni->ni_peertxcredits;
+ *peer_rtr_credits = ni->ni_peerrtrcredits;
+ *max_tx_credits = ni->ni_maxtxcredits;
+
+ net_config->ni_status = ni->ni_status->ns_status;
+
+ if (ni->ni_cpts) {
+ int num_cpts = min(ni->ni_ncpts, LNET_MAX_SHOW_NUM_CPT);
+
+ for (i = 0; i < num_cpts; i++)
+ net_config->ni_cpts[i] = ni->ni_cpts[i];
+
+ *cpt_count = num_cpts;
+ }
+}
+
+int
+lnet_get_net_config(int idx, __u32 *cpt_count, __u64 *nid, int *peer_timeout,
+ int *peer_tx_credits, int *peer_rtr_credits,
+ int *max_tx_credits,
+ struct lnet_ioctl_net_config *net_config)
+{
+ struct lnet_ni *ni;
+ struct list_head *tmp;
+ int cpt, i = 0;
+ int rc = -ENOENT;
+
+ cpt = lnet_net_lock_current();
+
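+	/* any CPT lock is enough to keep the ln_nis list stable while we walk it */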
+ list_for_each(tmp, &the_lnet.ln_nis) {
+ if (i++ != idx)
+ continue;
+
+ ni = list_entry(tmp, lnet_ni_t, ni_list);
+ lnet_ni_lock(ni);
+ lnet_fill_ni_info(ni, cpt_count, nid, peer_timeout,
+ peer_tx_credits, peer_rtr_credits,
+ max_tx_credits, net_config);
+ lnet_ni_unlock(ni);
+ rc = 0;
+ break;
+ }
+
+ lnet_net_unlock(cpt);
+ return rc;
+}
+
+int
+lnet_dyn_add_ni(lnet_pid_t requested_pid, char *nets,
+ __s32 peer_timeout, __s32 peer_cr, __s32 peer_buf_cr,
+ __s32 credits)
+{
+ lnet_ping_info_t *pinfo;
+ lnet_handle_md_t md_handle;
+ struct lnet_ni *ni;
+ struct list_head net_head;
+ lnet_remotenet_t *rnet;
+ int rc;
+
+ INIT_LIST_HEAD(&net_head);
+
+ /* Create a ni structure for the network string */
+ rc = lnet_parse_networks(&net_head, nets);
+ if (rc <= 0)
+ return !rc ? -EINVAL : rc;
+
+ mutex_lock(&the_lnet.ln_api_mutex);
+
+ if (rc > 1) {
+ rc = -EINVAL; /* only add one interface per call */
+ goto failed0;
+ }
+
+ ni = list_entry(net_head.next, struct lnet_ni, ni_list);
+
+ lnet_net_lock(LNET_LOCK_EX);
+ rnet = lnet_find_net_locked(LNET_NIDNET(ni->ni_nid));
+ lnet_net_unlock(LNET_LOCK_EX);
+ /*
+ * make sure that the net added doesn't invalidate the current
+ * configuration LNet is keeping
+ */
+ if (rnet) {
+ CERROR("Adding net %s will invalidate routing configuration\n",
+ nets);
+ rc = -EUSERS;
+ goto failed0;
+ }
+
+ rc = lnet_ping_info_setup(&pinfo, &md_handle, 1 + lnet_get_ni_count(),
+ false);
+ if (rc)
+ goto failed0;
+
+ list_del_init(&ni->ni_list);
+
+ rc = lnet_startup_lndni(ni, peer_timeout, peer_cr,
+ peer_buf_cr, credits);
+ if (rc)
+ goto failed1;
+
+ if (ni->ni_lnd->lnd_accept) {
+ rc = lnet_acceptor_start();
+ if (rc < 0) {
+ /* shutdown the ni that we just started */
+ CERROR("Failed to start up acceptor thread\n");
+ lnet_shutdown_lndni(ni);
+ goto failed1;
+ }
+ }
+
+ lnet_ping_target_update(pinfo, md_handle);
+ mutex_unlock(&the_lnet.ln_api_mutex);
+
+ return 0;
+
+failed1:
+ lnet_ping_md_unlink(pinfo, &md_handle);
+ lnet_ping_info_free(pinfo);
+failed0:
+ mutex_unlock(&the_lnet.ln_api_mutex);
+ while (!list_empty(&net_head)) {
+ ni = list_entry(net_head.next, struct lnet_ni, ni_list);
+ list_del_init(&ni->ni_list);
+ lnet_ni_free(ni);
+ }
+ return rc;
+}
+
+int
+lnet_dyn_del_ni(__u32 net)
+{
+ lnet_ni_t *ni;
+ lnet_ping_info_t *pinfo;
+ lnet_handle_md_t md_handle;
+ int rc;
+
+ /* don't allow userspace to shutdown the LOLND */
+ if (LNET_NETTYP(net) == LOLND)
+ return -EINVAL;
+
+ mutex_lock(&the_lnet.ln_api_mutex);
+ /* create and link a new ping info, before removing the old one */
+ rc = lnet_ping_info_setup(&pinfo, &md_handle,
+ lnet_get_ni_count() - 1, false);
+ if (rc)
+ goto out;
+
+ ni = lnet_net2ni(net);
+ if (!ni) {
+ rc = -EINVAL;
+ goto failed;
+ }
+
+ /* decrement the reference counter taken by lnet_net2ni() */
+ lnet_ni_decref_locked(ni, 0);
+
+ lnet_shutdown_lndni(ni);
+
+ if (!lnet_count_acceptor_nis())
+ lnet_acceptor_stop();
+
+ lnet_ping_target_update(pinfo, md_handle);
+ goto out;
+failed:
+ lnet_ping_md_unlink(pinfo, &md_handle);
+ lnet_ping_info_free(pinfo);
+out:
+ mutex_unlock(&the_lnet.ln_api_mutex);
+
+ return rc;
+}
+
+/**
* LNet ioctl handler.
*
*/
@@ -1339,14 +1857,12 @@
LNetCtl(unsigned int cmd, void *arg)
{
struct libcfs_ioctl_data *data = arg;
+ struct lnet_ioctl_config_data *config;
lnet_process_id_t id = {0};
lnet_ni_t *ni;
int rc;
unsigned long secs_passed;
- LASSERT(the_lnet.ln_init);
- LASSERT(the_lnet.ln_refcount > 0);
-
switch (cmd) {
case IOC_LIBCFS_GET_NI:
rc = LNetGetId(data->ioc_count, &id);
@@ -1357,18 +1873,143 @@
return lnet_fail_nid(data->ioc_nid, data->ioc_count);
case IOC_LIBCFS_ADD_ROUTE:
- rc = lnet_add_route(data->ioc_net, data->ioc_count,
- data->ioc_nid, data->ioc_priority);
- return (rc) ? rc : lnet_check_routes();
+ config = arg;
+
+ if (config->cfg_hdr.ioc_len < sizeof(*config))
+ return -EINVAL;
+
+ mutex_lock(&the_lnet.ln_api_mutex);
+ rc = lnet_add_route(config->cfg_net,
+ config->cfg_config_u.cfg_route.rtr_hop,
+ config->cfg_nid,
+ config->cfg_config_u.cfg_route.rtr_priority);
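+		/* roll the new route back if it leaves the routing table inconsistent */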
+ if (!rc) {
+ rc = lnet_check_routes();
+ if (rc)
+ lnet_del_route(config->cfg_net,
+ config->cfg_nid);
+ }
+ mutex_unlock(&the_lnet.ln_api_mutex);
+ return rc;
case IOC_LIBCFS_DEL_ROUTE:
- return lnet_del_route(data->ioc_net, data->ioc_nid);
+ config = arg;
+
+ if (config->cfg_hdr.ioc_len < sizeof(*config))
+ return -EINVAL;
+
+ mutex_lock(&the_lnet.ln_api_mutex);
+ rc = lnet_del_route(config->cfg_net, config->cfg_nid);
+ mutex_unlock(&the_lnet.ln_api_mutex);
+ return rc;
case IOC_LIBCFS_GET_ROUTE:
- return lnet_get_route(data->ioc_count,
- &data->ioc_net, &data->ioc_count,
- &data->ioc_nid, &data->ioc_flags,
- &data->ioc_priority);
+ config = arg;
+
+ if (config->cfg_hdr.ioc_len < sizeof(*config))
+ return -EINVAL;
+
+ return lnet_get_route(config->cfg_count,
+ &config->cfg_net,
+ &config->cfg_config_u.cfg_route.rtr_hop,
+ &config->cfg_nid,
+ &config->cfg_config_u.cfg_route.rtr_flags,
+ &config->cfg_config_u.cfg_route.rtr_priority);
+
+ case IOC_LIBCFS_GET_NET: {
+ struct lnet_ioctl_net_config *net_config;
+ size_t total = sizeof(*config) + sizeof(*net_config);
+
+ config = arg;
+
+ if (config->cfg_hdr.ioc_len < total)
+ return -EINVAL;
+
+ net_config = (struct lnet_ioctl_net_config *)
+ config->cfg_bulk;
+ if (!net_config)
+ return -EINVAL;
+
+ return lnet_get_net_config(config->cfg_count,
+ &config->cfg_ncpts,
+ &config->cfg_nid,
+ &config->cfg_config_u.cfg_net.net_peer_timeout,
+ &config->cfg_config_u.cfg_net.net_peer_tx_credits,
+ &config->cfg_config_u.cfg_net.net_peer_rtr_credits,
+ &config->cfg_config_u.cfg_net.net_max_tx_credits,
+ net_config);
+ }
+
+ case IOC_LIBCFS_GET_LNET_STATS: {
+ struct lnet_ioctl_lnet_stats *lnet_stats = arg;
+
+ if (lnet_stats->st_hdr.ioc_len < sizeof(*lnet_stats))
+ return -EINVAL;
+
+ lnet_counters_get(&lnet_stats->st_cntrs);
+ return 0;
+ }
+
+ case IOC_LIBCFS_CONFIG_RTR:
+ config = arg;
+
+ if (config->cfg_hdr.ioc_len < sizeof(*config))
+ return -EINVAL;
+
+ mutex_lock(&the_lnet.ln_api_mutex);
+ if (config->cfg_config_u.cfg_buffers.buf_enable) {
+ rc = lnet_rtrpools_enable();
+ mutex_unlock(&the_lnet.ln_api_mutex);
+ return rc;
+ }
+ lnet_rtrpools_disable();
+ mutex_unlock(&the_lnet.ln_api_mutex);
+ return 0;
+
+ case IOC_LIBCFS_ADD_BUF:
+ config = arg;
+
+ if (config->cfg_hdr.ioc_len < sizeof(*config))
+ return -EINVAL;
+
+ mutex_lock(&the_lnet.ln_api_mutex);
+ rc = lnet_rtrpools_adjust(config->cfg_config_u.cfg_buffers.buf_tiny,
+ config->cfg_config_u.cfg_buffers.buf_small,
+ config->cfg_config_u.cfg_buffers.buf_large);
+ mutex_unlock(&the_lnet.ln_api_mutex);
+ return rc;
+
+ case IOC_LIBCFS_GET_BUF: {
+ struct lnet_ioctl_pool_cfg *pool_cfg;
+ size_t total = sizeof(*config) + sizeof(*pool_cfg);
+
+ config = arg;
+
+ if (config->cfg_hdr.ioc_len < total)
+ return -EINVAL;
+
+ pool_cfg = (struct lnet_ioctl_pool_cfg *)config->cfg_bulk;
+ return lnet_get_rtr_pool_cfg(config->cfg_count, pool_cfg);
+ }
+
+ case IOC_LIBCFS_GET_PEER_INFO: {
+ struct lnet_ioctl_peer *peer_info = arg;
+
+ if (peer_info->pr_hdr.ioc_len < sizeof(*peer_info))
+ return -EINVAL;
+
+ return lnet_get_peer_info(peer_info->pr_count,
+ &peer_info->pr_nid,
+ peer_info->pr_lnd_u.pr_peer_credits.cr_aliveness,
+ &peer_info->pr_lnd_u.pr_peer_credits.cr_ncpt,
+ &peer_info->pr_lnd_u.pr_peer_credits.cr_refcount,
+ &peer_info->pr_lnd_u.pr_peer_credits.cr_ni_peer_tx_credits,
+ &peer_info->pr_lnd_u.pr_peer_credits.cr_peer_tx_credits,
+ &peer_info->pr_lnd_u.pr_peer_credits.cr_peer_rtr_credits,
+ &peer_info->pr_lnd_u.pr_peer_credits.cr_peer_min_rtr_credits,
+ &peer_info->pr_lnd_u.pr_peer_credits.cr_peer_tx_qnob);
+ }
+
case IOC_LIBCFS_NOTIFY_ROUTER:
secs_passed = (ktime_get_real_seconds() - data->ioc_u64[0]);
return lnet_notify(NULL, data->ioc_nid, data->ioc_flags,
@@ -1441,8 +2082,6 @@
int cpt;
int rc = -ENOENT;
- LASSERT(the_lnet.ln_init);
-
/* LNetNI initialization failed? */
if (!the_lnet.ln_refcount)
return rc;
@@ -1477,192 +2116,6 @@
}
EXPORT_SYMBOL(LNetSnprintHandle);
-static int
-lnet_create_ping_info(void)
-{
- int i;
- int n;
- int rc;
- unsigned int infosz;
- lnet_ni_t *ni;
- lnet_process_id_t id;
- lnet_ping_info_t *pinfo;
-
- for (n = 0; ; n++) {
- rc = LNetGetId(n, &id);
- if (rc == -ENOENT)
- break;
-
- LASSERT(!rc);
- }
-
- infosz = offsetof(lnet_ping_info_t, pi_ni[n]);
- LIBCFS_ALLOC(pinfo, infosz);
- if (!pinfo) {
- CERROR("Can't allocate ping info[%d]\n", n);
- return -ENOMEM;
- }
-
- pinfo->pi_nnis = n;
- pinfo->pi_pid = the_lnet.ln_pid;
- pinfo->pi_magic = LNET_PROTO_PING_MAGIC;
- pinfo->pi_features = LNET_PING_FEAT_NI_STATUS;
-
- for (i = 0; i < n; i++) {
- lnet_ni_status_t *ns = &pinfo->pi_ni[i];
-
- rc = LNetGetId(i, &id);
- LASSERT(!rc);
-
- ns->ns_nid = id.nid;
- ns->ns_status = LNET_NI_STATUS_UP;
-
- lnet_net_lock(0);
-
- ni = lnet_nid2ni_locked(id.nid, 0);
- LASSERT(ni);
-
- lnet_ni_lock(ni);
- LASSERT(!ni->ni_status);
- ni->ni_status = ns;
- lnet_ni_unlock(ni);
-
- lnet_ni_decref_locked(ni, 0);
- lnet_net_unlock(0);
- }
-
- the_lnet.ln_ping_info = pinfo;
- return 0;
-}
-
-static void
-lnet_destroy_ping_info(void)
-{
- struct lnet_ni *ni;
-
- lnet_net_lock(0);
-
- list_for_each_entry(ni, &the_lnet.ln_nis, ni_list) {
- lnet_ni_lock(ni);
- ni->ni_status = NULL;
- lnet_ni_unlock(ni);
- }
-
- lnet_net_unlock(0);
-
- LIBCFS_FREE(the_lnet.ln_ping_info,
- offsetof(lnet_ping_info_t,
- pi_ni[the_lnet.ln_ping_info->pi_nnis]));
- the_lnet.ln_ping_info = NULL;
-}
-
-int
-lnet_ping_target_init(void)
-{
- lnet_md_t md = { NULL };
- lnet_handle_me_t meh;
- lnet_process_id_t id;
- int rc;
- int rc2;
- int infosz;
-
- rc = lnet_create_ping_info();
- if (rc)
- return rc;
-
- /*
- * We can have a tiny EQ since we only need to see the unlink event on
- * teardown, which by definition is the last one!
- */
- rc = LNetEQAlloc(2, LNET_EQ_HANDLER_NONE, &the_lnet.ln_ping_target_eq);
- if (rc) {
- CERROR("Can't allocate ping EQ: %d\n", rc);
- goto failed_0;
- }
-
- memset(&id, 0, sizeof(lnet_process_id_t));
- id.nid = LNET_NID_ANY;
- id.pid = LNET_PID_ANY;
-
- rc = LNetMEAttach(LNET_RESERVED_PORTAL, id,
- LNET_PROTO_PING_MATCHBITS, 0,
- LNET_UNLINK, LNET_INS_AFTER,
- &meh);
- if (rc) {
- CERROR("Can't create ping ME: %d\n", rc);
- goto failed_1;
- }
-
- /* initialize md content */
- infosz = offsetof(lnet_ping_info_t,
- pi_ni[the_lnet.ln_ping_info->pi_nnis]);
- md.start = the_lnet.ln_ping_info;
- md.length = infosz;
- md.threshold = LNET_MD_THRESH_INF;
- md.max_size = 0;
- md.options = LNET_MD_OP_GET | LNET_MD_TRUNCATE |
- LNET_MD_MANAGE_REMOTE;
- md.user_ptr = NULL;
- md.eq_handle = the_lnet.ln_ping_target_eq;
-
- rc = LNetMDAttach(meh, md,
- LNET_RETAIN,
- &the_lnet.ln_ping_target_md);
- if (rc) {
- CERROR("Can't attach ping MD: %d\n", rc);
- goto failed_2;
- }
-
- return 0;
-
- failed_2:
- rc2 = LNetMEUnlink(meh);
- LASSERT(!rc2);
- failed_1:
- rc2 = LNetEQFree(the_lnet.ln_ping_target_eq);
- LASSERT(!rc2);
- failed_0:
- lnet_destroy_ping_info();
- return rc;
-}
-
-void
-lnet_ping_target_fini(void)
-{
- lnet_event_t event;
- int rc;
- int which;
- int timeout_ms = 1000;
- sigset_t blocked = cfs_block_allsigs();
-
- LNetMDUnlink(the_lnet.ln_ping_target_md);
- /* NB md could be busy; this just starts the unlink */
-
- for (;;) {
- rc = LNetEQPoll(&the_lnet.ln_ping_target_eq, 1,
- timeout_ms, &event, &which);
-
- /* I expect overflow... */
- LASSERT(rc >= 0 || rc == -EOVERFLOW);
-
- if (!rc) {
- /* timed out: provide a diagnostic */
- CWARN("Still waiting for ping MD to unlink\n");
- timeout_ms *= 2;
- continue;
- }
-
- /* Got a valid event */
- if (event.unlinked)
- break;
- }
-
- rc = LNetEQFree(the_lnet.ln_ping_target_eq);
- LASSERT(!rc);
- lnet_destroy_ping_info();
- cfs_restore_sigs(blocked);
-}
-
static int lnet_ping(lnet_process_id_t id, int timeout_ms,
lnet_process_id_t __user *ids, int n_ids)
{
@@ -1674,7 +2127,7 @@
int unlinked = 0;
int replied = 0;
const int a_long_time = 60000; /* ms */
- int infosz = offsetof(lnet_ping_info_t, pi_ni[n_ids]);
+ int infosz;
lnet_ping_info_t *info;
lnet_process_id_t tmpid;
int i;
@@ -1683,6 +2136,8 @@
int rc2;
sigset_t blocked;
+ infosz = offsetof(lnet_ping_info_t, pi_ni[n_ids]);
+
if (n_ids <= 0 ||
id.nid == LNET_NID_ANY ||
timeout_ms > 500000 || /* arbitrary limit! */
@@ -1690,7 +2145,7 @@
return -EINVAL;
if (id.pid == LNET_PID_ANY)
- id.pid = LUSTRE_SRV_LNET_PID;
+ id.pid = LNET_PID_LUSTRE;
LIBCFS_ALLOC(info, infosz);
if (!info)
diff --git a/drivers/staging/lustre/lnet/lnet/config.c b/drivers/staging/lustre/lnet/lnet/config.c
index e817eb3..8c80625 100644
--- a/drivers/staging/lustre/lnet/lnet/config.c
+++ b/drivers/staging/lustre/lnet/lnet/config.c
@@ -77,7 +77,7 @@
}
}
-static int
+int
lnet_net_unique(__u32 net, struct list_head *nilist)
{
struct list_head *tmp;
@@ -96,6 +96,8 @@
void
lnet_ni_free(struct lnet_ni *ni)
{
+ int i;
+
if (ni->ni_refs)
cfs_percpt_free(ni->ni_refs);
@@ -105,10 +107,14 @@
if (ni->ni_cpts)
cfs_expr_list_values_free(ni->ni_cpts, ni->ni_ncpts);
+ for (i = 0; i < LNET_MAX_INTERFACES && ni->ni_interfaces[i]; i++) {
+ LIBCFS_FREE(ni->ni_interfaces[i],
+ strlen(ni->ni_interfaces[i]) + 1);
+ }
LIBCFS_FREE(ni, sizeof(*ni));
}
-static lnet_ni_t *
+lnet_ni_t *
lnet_ni_alloc(__u32 net, struct cfs_expr_list *el, struct list_head *nilist)
{
struct lnet_tx_queue *tq;
@@ -178,13 +184,19 @@
lnet_parse_networks(struct list_head *nilist, char *networks)
{
struct cfs_expr_list *el = NULL;
- int tokensize = strlen(networks) + 1;
+ int tokensize;
char *tokens;
char *str;
char *tmp;
struct lnet_ni *ni;
__u32 net;
int nnets = 0;
+ struct list_head *temp_node;
+
+ if (!networks) {
+ CERROR("networks string is undefined\n");
+ return -EINVAL;
+ }
if (strlen(networks) > LNET_SINGLE_TEXTBUF_NOB) {
/* _WAY_ conservative */
@@ -193,23 +205,18 @@
return -EINVAL;
}
+ tokensize = strlen(networks) + 1;
+
LIBCFS_ALLOC(tokens, tokensize);
if (!tokens) {
CERROR("Can't allocate net tokens\n");
return -ENOMEM;
}
- the_lnet.ln_network_tokens = tokens;
- the_lnet.ln_network_tokens_nob = tokensize;
memcpy(tokens, networks, tokensize);
tmp = tokens;
str = tokens;
- /* Add in the loopback network */
- ni = lnet_ni_alloc(LNET_MKNET(LOLND, 0), NULL, nilist);
- if (!ni)
- goto failed;
-
while (str && *str) {
char *comma = strchr(str, ',');
char *bracket = strchr(str, '(');
@@ -283,7 +290,6 @@
goto failed_syntax;
}
- nnets++;
ni = lnet_ni_alloc(net, el, nilist);
if (!ni)
goto failed;
@@ -321,7 +327,23 @@
goto failed;
}
- ni->ni_interfaces[niface++] = iface;
+ /*
+ * Allocate a separate piece of memory and copy
+			 * the string into it, so we don't have
+			 * a dependency on the tokens string. This way we
+ * can free the tokens at the end of the function.
+ * The newly allocated ni_interfaces[] can be
+ * freed when freeing the NI
+ */
+ LIBCFS_ALLOC(ni->ni_interfaces[niface],
+ strlen(iface) + 1);
+ if (!ni->ni_interfaces[niface]) {
+ CERROR("Can't allocate net interface name\n");
+ goto failed;
+ }
+ strncpy(ni->ni_interfaces[niface], iface,
+ strlen(iface));
+ niface++;
iface = comma;
} while (iface);
@@ -345,8 +367,11 @@
}
}
- LASSERT(!list_empty(nilist));
- return 0;
+ list_for_each(temp_node, nilist)
+ nnets++;
+
+ LIBCFS_FREE(tokens, tokensize);
+ return nnets;
failed_syntax:
lnet_syntax("networks", networks, (int)(tmp - tokens), strlen(tmp));
@@ -362,7 +387,6 @@
cfs_expr_list_free(el);
LIBCFS_FREE(tokens, tokensize);
- the_lnet.ln_network_tokens = NULL;
return -EINVAL;
}
@@ -745,7 +769,7 @@
}
rc = lnet_add_route(net, hops, nid, priority);
- if (rc) {
+ if (rc && rc != -EEXIST && rc != -EHOSTUNREACH) {
CERROR("Can't create route to %s via %s\n",
libcfs_net2str(net),
libcfs_nid2str(nid));
diff --git a/drivers/staging/lustre/lnet/lnet/lib-eq.c b/drivers/staging/lustre/lnet/lnet/lib-eq.c
index b8f248e..042e974 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-eq.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-eq.c
@@ -72,7 +72,6 @@
{
lnet_eq_t *eq;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
/*
@@ -167,7 +166,6 @@
int size = 0;
int i;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
lnet_res_lock(LNET_LOCK_EX);
@@ -383,7 +381,6 @@
int rc;
int i;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
if (neq < 1)
diff --git a/drivers/staging/lustre/lnet/lnet/lib-md.c b/drivers/staging/lustre/lnet/lnet/lib-md.c
index f26bb03..c74514f 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-md.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-md.c
@@ -281,7 +281,6 @@
int cpt;
int rc;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
if (lnet_md_validate(&umd))
@@ -360,7 +359,6 @@
int cpt;
int rc;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
if (lnet_md_validate(&umd))
@@ -435,7 +433,6 @@
lnet_libmd_t *md;
int cpt;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
cpt = lnet_cpt_of_cookie(mdh.cookie);
diff --git a/drivers/staging/lustre/lnet/lnet/lib-me.c b/drivers/staging/lustre/lnet/lnet/lib-me.c
index 3c59c88..e671aed 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-me.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-me.c
@@ -83,7 +83,6 @@
struct lnet_me *me;
struct list_head *head;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
if ((int)portal >= the_lnet.ln_nportals)
@@ -156,7 +155,6 @@
struct lnet_portal *ptl;
int cpt;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
if (pos == LNET_INS_LOCAL)
@@ -233,7 +231,6 @@
lnet_event_t ev;
int cpt;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
cpt = lnet_cpt_of_cookie(meh.cookie);
diff --git a/drivers/staging/lustre/lnet/lnet/lib-move.c b/drivers/staging/lustre/lnet/lnet/lib-move.c
index 8f16913..7bc3e91 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-move.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-move.c
@@ -42,6 +42,11 @@
#include "../../include/linux/lnet/lib-lnet.h"
+/** lnet message has credit and can be submitted to lnd for send/receive */
+#define LNET_CREDIT_OK 0
+/** lnet message is waiting for credit */
+#define LNET_CREDIT_WAIT 1
+
static int local_nid_dist_zero = 1;
module_param(local_nid_dist_zero, int, 0444);
MODULE_PARM_DESC(local_nid_dist_zero, "Reserved");
@@ -54,8 +59,6 @@
struct list_head *next;
struct list_head cull;
- LASSERT(the_lnet.ln_init);
-
/* NB: use lnet_net_lock(0) to serialize operations on test peers */
if (threshold) {
/* Adding a new entry */
@@ -786,10 +789,10 @@
* lnet_send() is going to lnet_net_unlock immediately after this, so
* it sets do_send FALSE and I don't do the unlock/send/lock bit.
*
- * \retval 0 If \a msg sent or OK to send.
- * \retval EAGAIN If \a msg blocked for credit.
- * \retval EHOSTUNREACH If the next hop of the message appears dead.
- * \retval ECANCELED If the MD of the message has been unlinked.
+ * \retval LNET_CREDIT_OK If \a msg sent or OK to send.
+ * \retval LNET_CREDIT_WAIT If \a msg blocked for credit.
+ * \retval -EHOSTUNREACH If the next hop of the message appears dead.
+ * \retval -ECANCELED If the MD of the message has been unlinked.
*/
static int
lnet_post_send_locked(lnet_msg_t *msg, int do_send)
@@ -817,7 +820,7 @@
lnet_finalize(ni, msg, -EHOSTUNREACH);
lnet_net_lock(cpt);
- return EHOSTUNREACH;
+ return -EHOSTUNREACH;
}
if (msg->msg_md &&
@@ -830,7 +833,7 @@
lnet_finalize(ni, msg, -ECANCELED);
lnet_net_lock(cpt);
- return ECANCELED;
+ return -ECANCELED;
}
if (!msg->msg_peertxcredit) {
@@ -847,7 +850,7 @@
if (lp->lp_txcredits < 0) {
msg->msg_tx_delayed = 1;
list_add_tail(&msg->msg_list, &lp->lp_txq);
- return EAGAIN;
+ return LNET_CREDIT_WAIT;
}
}
@@ -864,7 +867,7 @@
if (tq->tq_credits < 0) {
msg->msg_tx_delayed = 1;
list_add_tail(&msg->msg_list, &tq->tq_delayed);
- return EAGAIN;
+ return LNET_CREDIT_WAIT;
}
}
@@ -873,7 +876,7 @@
lnet_ni_send(ni, msg);
lnet_net_lock(cpt);
}
- return 0;
+ return LNET_CREDIT_OK;
}
static lnet_rtrbufpool_t *
@@ -901,8 +904,9 @@
{
/*
* lnet_parse is going to lnet_net_unlock immediately after this, so it
- * sets do_recv FALSE and I don't do the unlock/send/lock bit. I
- * return EAGAIN if msg blocked and 0 if received or OK to receive
+ * sets do_recv FALSE and I don't do the unlock/send/lock bit.
+ * I return LNET_CREDIT_WAIT if msg blocked and LNET_CREDIT_OK if
+ * received or OK to receive
*/
lnet_peer_t *lp = msg->msg_rxpeer;
lnet_rtrbufpool_t *rbp;
@@ -932,16 +936,13 @@
LASSERT(msg->msg_rx_ready_delay);
msg->msg_rx_delayed = 1;
list_add_tail(&msg->msg_list, &lp->lp_rtrq);
- return EAGAIN;
+ return LNET_CREDIT_WAIT;
}
}
rbp = lnet_msg2bufpool(msg);
if (!msg->msg_rtrcredit) {
- LASSERT((rbp->rbp_credits < 0) ==
- !list_empty(&rbp->rbp_msgs));
-
msg->msg_rtrcredit = 1;
rbp->rbp_credits--;
if (rbp->rbp_credits < rbp->rbp_mincredits)
@@ -952,7 +953,7 @@
LASSERT(msg->msg_rx_ready_delay);
msg->msg_rx_delayed = 1;
list_add_tail(&msg->msg_list, &rbp->rbp_msgs);
- return EAGAIN;
+ return LNET_CREDIT_WAIT;
}
}
@@ -971,7 +972,7 @@
0, msg->msg_len, msg->msg_len);
lnet_net_lock(cpt);
}
- return 0;
+ return LNET_CREDIT_OK;
}
void
@@ -1033,6 +1034,43 @@
}
void
+lnet_schedule_blocked_locked(lnet_rtrbufpool_t *rbp)
+{
+ lnet_msg_t *msg;
+
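+	/* a router buffer was just returned; wake the oldest blocked message */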
+ if (list_empty(&rbp->rbp_msgs))
+ return;
+ msg = list_entry(rbp->rbp_msgs.next,
+ lnet_msg_t, msg_list);
+ list_del(&msg->msg_list);
+
+ (void)lnet_post_routed_recv_locked(msg, 1);
+}
+
+void
+lnet_drop_routed_msgs_locked(struct list_head *list, int cpt)
+{
+ struct list_head drop;
+ lnet_msg_t *msg;
+ lnet_msg_t *tmp;
+
+ INIT_LIST_HEAD(&drop);
+
+ list_splice_init(list, &drop);
+
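+	/*
+	 * the messages were spliced to a private list above, so the net
+	 * lock can be dropped while they are finalized
+	 */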
+ lnet_net_unlock(cpt);
+
+ list_for_each_entry_safe(msg, tmp, &drop, msg_list) {
+ lnet_ni_recv(msg->msg_rxpeer->lp_ni, msg->msg_private, NULL,
+ 0, 0, 0, msg->msg_hdr.payload_length);
+ list_del_init(&msg->msg_list);
+ lnet_finalize(NULL, msg, -ECANCELED);
+ }
+
+ lnet_net_lock(cpt);
+}
+
+void
lnet_return_rx_credits_locked(lnet_msg_t *msg)
{
lnet_peer_t *rxpeer = msg->msg_rxpeer;
@@ -1052,27 +1090,42 @@
rb = list_entry(msg->msg_kiov, lnet_rtrbuf_t, rb_kiov[0]);
rbp = rb->rb_pool;
- LASSERT(rbp == lnet_msg2bufpool(msg));
msg->msg_kiov = NULL;
msg->msg_rtrcredit = 0;
- LASSERT((rbp->rbp_credits < 0) ==
- !list_empty(&rbp->rbp_msgs));
+ LASSERT(rbp == lnet_msg2bufpool(msg));
+
LASSERT((rbp->rbp_credits > 0) ==
!list_empty(&rbp->rbp_bufs));
- list_add(&rb->rb_list, &rbp->rbp_bufs);
- rbp->rbp_credits++;
- if (rbp->rbp_credits <= 0) {
- msg2 = list_entry(rbp->rbp_msgs.next,
- lnet_msg_t, msg_list);
- list_del(&msg2->msg_list);
+ /*
+ * If routing is now turned off, we just drop this buffer and
+ * don't bother trying to return credits.
+ */
+ if (!the_lnet.ln_routing) {
+ lnet_destroy_rtrbuf(rb, rbp->rbp_npages);
+ goto routing_off;
+ }
- (void) lnet_post_routed_recv_locked(msg2, 1);
+ /*
+ * It is possible that a user has lowered the desired number of
+ * buffers in this pool. Make sure we never put back
+ * more buffers than the stated number.
+ */
+ if (unlikely(rbp->rbp_credits >= rbp->rbp_req_nbuffers)) {
+ /* Discard this buffer so we don't have too many. */
+ lnet_destroy_rtrbuf(rb, rbp->rbp_npages);
+ rbp->rbp_nbuffers--;
+ } else {
+ list_add(&rb->rb_list, &rbp->rbp_bufs);
+ rbp->rbp_credits++;
+ if (rbp->rbp_credits <= 0)
+ lnet_schedule_blocked_locked(rbp);
}
}
+routing_off:
if (msg->msg_peerrtrcredit) {
/* give back peer router credits */
msg->msg_peerrtrcredit = 0;
@@ -1081,7 +1134,14 @@
!list_empty(&rxpeer->lp_rtrq));
rxpeer->lp_rtrcredits++;
- if (rxpeer->lp_rtrcredits <= 0) {
+ /*
+ * drop all messages which are queued to be routed on that
+ * peer.
+ */
+ if (!the_lnet.ln_routing) {
+ lnet_drop_routed_msgs_locked(&rxpeer->lp_rtrq,
+ msg->msg_rx_cpt);
+ } else if (rxpeer->lp_rtrcredits <= 0) {
msg2 = list_entry(rxpeer->lp_rtrq.next,
lnet_msg_t, msg_list);
list_del(&msg2->msg_list);
@@ -1135,9 +1195,9 @@
lnet_find_route_locked(lnet_ni_t *ni, lnet_nid_t target, lnet_nid_t rtr_nid)
{
lnet_remotenet_t *rnet;
- lnet_route_t *rtr;
- lnet_route_t *rtr_best;
- lnet_route_t *rtr_last;
+ lnet_route_t *route;
+ lnet_route_t *best_route;
+ lnet_route_t *last_route;
struct lnet_peer *lp_best;
struct lnet_peer *lp;
int rc;
@@ -1151,14 +1211,12 @@
return NULL;
lp_best = NULL;
- rtr_best = NULL;
- rtr_last = NULL;
- list_for_each_entry(rtr, &rnet->lrn_routes, lr_list) {
- lp = rtr->lr_gateway;
+ best_route = NULL;
+ last_route = NULL;
+ list_for_each_entry(route, &rnet->lrn_routes, lr_list) {
+ lp = route->lr_gateway;
- if (!lp->lp_alive || /* gateway is down */
- ((lp->lp_ping_feats & LNET_PING_FEAT_NI_STATUS) &&
- rtr->lr_downis)) /* NI to target is down */
+ if (!lnet_is_route_alive(route))
continue;
if (ni && lp->lp_ni != ni)
@@ -1168,21 +1226,21 @@
return lp;
if (!lp_best) {
- rtr_best = rtr;
- rtr_last = rtr;
+ best_route = route;
+ last_route = route;
lp_best = lp;
continue;
}
/* no protection on below fields, but it's harmless */
- if (rtr_last->lr_seq - rtr->lr_seq < 0)
- rtr_last = rtr;
+ if (last_route->lr_seq - route->lr_seq < 0)
+ last_route = route;
- rc = lnet_compare_routes(rtr, rtr_best);
+ rc = lnet_compare_routes(route, best_route);
if (rc < 0)
continue;
- rtr_best = rtr;
+ best_route = route;
lp_best = lp;
}
@@ -1191,8 +1249,8 @@
* so we can round-robin all routers; it's racy and inaccurate but
* harmless and functional
*/
- if (rtr_best)
- rtr_best->lr_seq = rtr_last->lr_seq + 1;
+ if (best_route)
+ best_route->lr_seq = last_route->lr_seq + 1;
return lp_best;
}
@@ -1348,7 +1406,7 @@
msg->msg_target_is_router = 1;
msg->msg_target.nid = lp->lp_nid;
- msg->msg_target.pid = LUSTRE_SRV_LNET_PID;
+ msg->msg_target.pid = LNET_PID_LUSTRE;
}
/* 'lp' is our best choice of peer */
@@ -1362,13 +1420,13 @@
rc = lnet_post_send_locked(msg, 0);
lnet_net_unlock(cpt);
- if (rc == EHOSTUNREACH || rc == ECANCELED)
- return -rc;
+ if (rc < 0)
+ return rc;
- if (!rc)
+ if (rc == LNET_CREDIT_OK)
lnet_ni_send(src_ni, msg);
- return 0; /* !rc or EAGAIN */
+ return 0; /* rc == LNET_CREDIT_OK or LNET_CREDIT_WAIT */
}
static void
@@ -1632,11 +1690,19 @@
return 0;
}
+/**
+ * \retval LNET_CREDIT_OK If \a msg is forwarded
+ * \retval LNET_CREDIT_WAIT If \a msg is blocked because no buffer is available
+ * \retval -ve error code
+ */
static int
lnet_parse_forward_locked(lnet_ni_t *ni, lnet_msg_t *msg)
{
int rc = 0;
+ if (!the_lnet.ln_routing)
+ return -ECANCELED;
+
if (msg->msg_rxpeer->lp_rtrcredits <= 0 ||
lnet_msg2bufpool(msg)->rbp_credits <= 0) {
if (!ni->ni_lnd->lnd_eager_recv) {
@@ -1790,9 +1856,8 @@
if (the_lnet.ln_routing &&
ni->ni_last_alive != ktime_get_real_seconds()) {
- lnet_ni_lock(ni);
-
/* NB: so far here is the only place to set NI status to "up" */
+ lnet_ni_lock(ni);
ni->ni_last_alive = ktime_get_real_seconds();
if (ni->ni_status &&
ni->ni_status->ns_status == LNET_NI_STATUS_DOWN)
@@ -1923,7 +1988,8 @@
if (rc < 0)
goto free_drop;
- if (!rc) {
+
+ if (rc == LNET_CREDIT_OK) {
lnet_ni_recv(ni, msg->msg_private, msg, 0,
0, payload_length, payload_length);
}
@@ -2095,7 +2161,6 @@
int cpt;
int rc;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
if (!list_empty(&the_lnet.ln_test_peers) && /* normally we don't */
@@ -2189,17 +2254,17 @@
LASSERT(!getmsg->msg_target_is_router);
LASSERT(!getmsg->msg_routing);
- cpt = lnet_cpt_of_cookie(getmd->md_lh.lh_cookie);
- lnet_res_lock(cpt);
-
- LASSERT(getmd->md_refcount > 0);
-
if (!msg) {
CERROR("%s: Dropping REPLY from %s: can't allocate msg\n",
libcfs_nid2str(ni->ni_nid), libcfs_id2str(peer_id));
goto drop;
}
+ cpt = lnet_cpt_of_cookie(getmd->md_lh.lh_cookie);
+ lnet_res_lock(cpt);
+
+ LASSERT(getmd->md_refcount > 0);
+
if (!getmd->md_threshold) {
CERROR("%s: Dropping REPLY from %s for inactive MD %p\n",
libcfs_nid2str(ni->ni_nid), libcfs_id2str(peer_id),
@@ -2300,7 +2365,6 @@
int cpt;
int rc;
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
if (!list_empty(&the_lnet.ln_test_peers) && /* normally we don't */
@@ -2400,7 +2464,6 @@
* keep order 0 free for 0@lo and order 1 free for a local NID
* match
*/
- LASSERT(the_lnet.ln_init);
LASSERT(the_lnet.ln_refcount > 0);
cpt = lnet_net_lock_current();
diff --git a/drivers/staging/lustre/lnet/lnet/lib-msg.c b/drivers/staging/lustre/lnet/lnet/lib-msg.c
index 749e76a..c372390 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-msg.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-msg.c
@@ -571,35 +571,17 @@
sizeof(*container->msc_finalizers));
container->msc_finalizers = NULL;
}
-#ifdef LNET_USE_LIB_FREELIST
- lnet_freelist_fini(&container->msc_freelist);
-#endif
container->msc_init = 0;
}
int
lnet_msg_container_setup(struct lnet_msg_container *container, int cpt)
{
- int rc;
-
container->msc_init = 1;
INIT_LIST_HEAD(&container->msc_active);
INIT_LIST_HEAD(&container->msc_finalizing);
-#ifdef LNET_USE_LIB_FREELIST
- memset(&container->msc_freelist, 0, sizeof(lnet_freelist_t));
-
- rc = lnet_freelist_init(&container->msc_freelist,
- LNET_FL_MAX_MSGS, sizeof(lnet_msg_t));
- if (rc) {
- CERROR("Failed to init freelist for message container\n");
- lnet_msg_container_cleanup(container);
- return rc;
- }
-#else
- rc = 0;
-#endif
/* number of CPUs */
container->msc_nfinalizers = cfs_cpt_weight(lnet_cpt_table(), cpt);
@@ -613,7 +595,7 @@
return -ENOMEM;
}
- return rc;
+ return 0;
}
void
diff --git a/drivers/staging/lustre/lnet/lnet/lib-ptl.c b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
index 0cdeea9..0281c6a1 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-ptl.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-ptl.c
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/*
@@ -902,17 +897,8 @@
}
EXPORT_SYMBOL(LNetSetLazyPortal);
-/**
- * Turn off the lazy portal attribute. Delayed requests on the portal,
- * if any, will be all dropped when this function returns.
- *
- * \param portal Index of the portal to disable the lazy attribute on.
- *
- * \retval 0 On success.
- * \retval -EINVAL If \a portal is not a valid index.
- */
int
-LNetClearLazyPortal(int portal)
+lnet_clear_lazy_portal(struct lnet_ni *ni, int portal, char *reason)
{
struct lnet_portal *ptl;
LIST_HEAD(zombies);
@@ -931,21 +917,48 @@
return 0;
}
- if (the_lnet.ln_shutdown)
- CWARN("Active lazy portal %d on exit\n", portal);
- else
- CDEBUG(D_NET, "clearing portal %d lazy\n", portal);
+ if (ni) {
+ struct lnet_msg *msg, *tmp;
- /* grab all the blocked messages atomically */
- list_splice_init(&ptl->ptl_msg_delayed, &zombies);
+ /* grab all messages which are on the NI passed in */
+ list_for_each_entry_safe(msg, tmp, &ptl->ptl_msg_delayed,
+ msg_list) {
+ if (msg->msg_rxpeer->lp_ni == ni)
+ list_move(&msg->msg_list, &zombies);
+ }
+ } else {
+ if (the_lnet.ln_shutdown)
+ CWARN("Active lazy portal %d on exit\n", portal);
+ else
+ CDEBUG(D_NET, "clearing portal %d lazy\n", portal);
- lnet_ptl_unsetopt(ptl, LNET_PTL_LAZY);
+ /* grab all the blocked messages atomically */
+ list_splice_init(&ptl->ptl_msg_delayed, &zombies);
+
+ lnet_ptl_unsetopt(ptl, LNET_PTL_LAZY);
+ }
lnet_ptl_unlock(ptl);
lnet_res_unlock(LNET_LOCK_EX);
- lnet_drop_delayed_msg_list(&zombies, "Clearing lazy portal attr");
+ lnet_drop_delayed_msg_list(&zombies, reason);
return 0;
}
+
+/**
+ * Turn off the lazy portal attribute. Delayed requests on the portal,
+ * if any, will be all dropped when this function returns.
+ *
+ * \param portal Index of the portal to disable the lazy attribute on.
+ *
+ * \retval 0 On success.
+ * \retval -EINVAL If \a portal is not a valid index.
+ */
+int
+LNetClearLazyPortal(int portal)
+{
+ return lnet_clear_lazy_portal(NULL, portal,
+ "Clearing lazy portal attr");
+}
EXPORT_SYMBOL(LNetClearLazyPortal);
diff --git a/drivers/staging/lustre/lnet/lnet/lib-socket.c b/drivers/staging/lustre/lnet/lnet/lib-socket.c
index 53dd0bd..88905d5 100644
--- a/drivers/staging/lustre/lnet/lnet/lib-socket.c
+++ b/drivers/staging/lustre/lnet/lnet/lib-socket.c
@@ -514,7 +514,6 @@
sock_release(*sockp);
return rc;
}
-EXPORT_SYMBOL(lnet_sock_listen);
int
lnet_sock_accept(struct socket **newsockp, struct socket *sock)
@@ -558,7 +557,6 @@
sock_release(newsock);
return rc;
}
-EXPORT_SYMBOL(lnet_sock_accept);
int
lnet_sock_connect(struct socket **sockp, int *fatal, __u32 local_ip,
@@ -596,4 +594,3 @@
sock_release(*sockp);
return rc;
}
-EXPORT_SYMBOL(lnet_sock_connect);
diff --git a/drivers/staging/lustre/lnet/lnet/module.c b/drivers/staging/lustre/lnet/lnet/module.c
index cd37303..e12fe37 100644
--- a/drivers/staging/lustre/lnet/lnet/module.c
+++ b/drivers/staging/lustre/lnet/lnet/module.c
@@ -36,6 +36,7 @@
#define DEBUG_SUBSYSTEM S_LNET
#include "../../include/linux/lnet/lib-lnet.h"
+#include "../../include/linux/lnet/lib-dlc.h"
static int config_on_load;
module_param(config_on_load, int, 0444);
@@ -52,13 +53,21 @@
mutex_lock(&lnet_config_mutex);
if (!the_lnet.ln_niinit_self) {
- rc = LNetNIInit(LUSTRE_SRV_LNET_PID);
+ rc = try_module_get(THIS_MODULE);
+
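+	/*
+	 * pin the module while LNet stays configured; the reference is
+	 * dropped again in lnet_unconfigure()
+	 */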
+ if (rc != 1)
+ goto out;
+
+ rc = LNetNIInit(LNET_PID_LUSTRE);
if (rc >= 0) {
the_lnet.ln_niinit_self = 1;
rc = 0;
+ } else {
+ module_put(THIS_MODULE);
}
}
+out:
mutex_unlock(&lnet_config_mutex);
return rc;
}
@@ -73,6 +82,7 @@
if (the_lnet.ln_niinit_self) {
the_lnet.ln_niinit_self = 0;
LNetNIFini();
+ module_put(THIS_MODULE);
}
mutex_lock(&the_lnet.ln_api_mutex);
@@ -84,17 +94,80 @@
}
static int
-lnet_ioctl(unsigned int cmd, struct libcfs_ioctl_data *data)
+lnet_dyn_configure(struct libcfs_ioctl_hdr *hdr)
+{
+ struct lnet_ioctl_config_data *conf =
+ (struct lnet_ioctl_config_data *)hdr;
+ int rc;
+
+ if (conf->cfg_hdr.ioc_len < sizeof(*conf))
+ return -EINVAL;
+
+ mutex_lock(&lnet_config_mutex);
+ if (!the_lnet.ln_niinit_self) {
+ rc = -EINVAL;
+ goto out_unlock;
+ }
+ rc = lnet_dyn_add_ni(LNET_PID_LUSTRE,
+ conf->cfg_config_u.cfg_net.net_intf,
+ conf->cfg_config_u.cfg_net.net_peer_timeout,
+ conf->cfg_config_u.cfg_net.net_peer_tx_credits,
+ conf->cfg_config_u.cfg_net.net_peer_rtr_credits,
+ conf->cfg_config_u.cfg_net.net_max_tx_credits);
+out_unlock:
+ mutex_unlock(&lnet_config_mutex);
+
+ return rc;
+}
+
+static int
+lnet_dyn_unconfigure(struct libcfs_ioctl_hdr *hdr)
+{
+ struct lnet_ioctl_config_data *conf =
+ (struct lnet_ioctl_config_data *)hdr;
+ int rc;
+
+ if (conf->cfg_hdr.ioc_len < sizeof(*conf))
+ return -EINVAL;
+
+ mutex_lock(&lnet_config_mutex);
+ if (!the_lnet.ln_niinit_self) {
+ rc = -EINVAL;
+ goto out_unlock;
+ }
+ rc = lnet_dyn_del_ni(conf->cfg_net);
+out_unlock:
+ mutex_unlock(&lnet_config_mutex);
+
+ return rc;
+}
+
+static int
+lnet_ioctl(unsigned int cmd, struct libcfs_ioctl_hdr *hdr)
{
int rc;
switch (cmd) {
- case IOC_LIBCFS_CONFIGURE:
+ case IOC_LIBCFS_CONFIGURE: {
+ struct libcfs_ioctl_data *data =
+ (struct libcfs_ioctl_data *)hdr;
+
+ if (data->ioc_hdr.ioc_len < sizeof(*data))
+ return -EINVAL;
+
+ the_lnet.ln_nis_from_mod_params = data->ioc_flags;
return lnet_configure(NULL);
+ }
case IOC_LIBCFS_UNCONFIGURE:
return lnet_unconfigure();
+ case IOC_LIBCFS_ADD_NET:
+ return lnet_dyn_configure(hdr);
+
+ case IOC_LIBCFS_DEL_NET:
+ return lnet_dyn_unconfigure(hdr);
+
default:
/*
* Passing LNET_PID_ANY only gives me a ref if the net is up
@@ -103,7 +176,7 @@
*/
rc = LNetNIInit(LNET_PID_ANY);
if (rc >= 0) {
- rc = LNetCtl(cmd, data);
+ rc = LNetCtl(cmd, hdr);
LNetNIFini();
}
return rc;
diff --git a/drivers/staging/lustre/lnet/lnet/peer.c b/drivers/staging/lustre/lnet/lnet/peer.c
index 00086ee..19c80c9 100644
--- a/drivers/staging/lustre/lnet/lnet/peer.c
+++ b/drivers/staging/lustre/lnet/lnet/peer.c
@@ -39,6 +39,7 @@
#define DEBUG_SUBSYSTEM S_LNET
#include "../../include/linux/lnet/lib-lnet.h"
+#include "../../include/linux/lnet/lib-dlc.h"
int
lnet_peer_tables_create(void)
@@ -103,62 +104,116 @@
the_lnet.ln_peer_tables = NULL;
}
+static void
+lnet_peer_table_cleanup_locked(lnet_ni_t *ni, struct lnet_peer_table *ptable)
+{
+ int i;
+ lnet_peer_t *lp;
+ lnet_peer_t *tmp;
+
+ for (i = 0; i < LNET_PEER_HASH_SIZE; i++) {
+ list_for_each_entry_safe(lp, tmp, &ptable->pt_hash[i],
+ lp_hashlist) {
+ if (ni && ni != lp->lp_ni)
+ continue;
+ list_del_init(&lp->lp_hashlist);
+ /* Lose hash table's ref */
+ ptable->pt_zombies++;
+ lnet_peer_decref_locked(lp);
+ }
+ }
+}
+
+static void
+lnet_peer_table_deathrow_wait_locked(struct lnet_peer_table *ptable,
+ int cpt_locked)
+{
+ int i;
+
+ for (i = 3; ptable->pt_zombies; i++) {
+ lnet_net_unlock(cpt_locked);
+
+ if (is_power_of_2(i)) {
+ CDEBUG(D_WARNING,
+ "Waiting for %d zombies on peer table\n",
+ ptable->pt_zombies);
+ }
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(cfs_time_seconds(1) >> 1);
+ lnet_net_lock(cpt_locked);
+ }
+}
+
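
lnet_peer_table_deathrow_wait_locked() polls until pt_zombies drains, dropping the net lock across each half-second sleep and gating the warning on is_power_of_2(i), so the message fires on passes 4, 8, 16, ... rather than every iteration. A sketch of that rate-limited wait, assuming hypothetical my_lock()/my_unlock() helpers and an atomic zombie count:

    int i;

    for (i = 3; atomic_read(&nr_zombies); i++) {
            my_unlock();                    /* never sleep holding the lock */
            if (is_power_of_2(i))           /* warn on passes 4, 8, 16, ... */
                    pr_warn("still waiting for %d zombies\n",
                            atomic_read(&nr_zombies));
            set_current_state(TASK_UNINTERRUPTIBLE);
            schedule_timeout(HZ / 2);       /* half a second per pass */
            my_lock();
    }
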
+static void
+lnet_peer_table_del_rtrs_locked(lnet_ni_t *ni, struct lnet_peer_table *ptable,
+ int cpt_locked)
+{
+ lnet_peer_t *lp;
+ lnet_peer_t *tmp;
+ lnet_nid_t lp_nid;
+ int i;
+
+ for (i = 0; i < LNET_PEER_HASH_SIZE; i++) {
+ list_for_each_entry_safe(lp, tmp, &ptable->pt_hash[i],
+ lp_hashlist) {
+ if (ni != lp->lp_ni)
+ continue;
+
+ if (!lp->lp_rtr_refcount)
+ continue;
+
+ lp_nid = lp->lp_nid;
+
+ lnet_net_unlock(cpt_locked);
+ lnet_del_route(LNET_NIDNET(LNET_NID_ANY), lp_nid);
+ lnet_net_lock(cpt_locked);
+ }
+ }
+}
+
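
lnet_peer_table_del_rtrs_locked() cannot call lnet_del_route() with the per-CPT net lock held, so it copies lp_nid into a local, drops the lock around the call, and retakes it before continuing the walk; list_for_each_entry_safe() is what lets the walk survive the current entry disappearing. The idiom, with my_lock()/do_blocking_op() as hypothetical placeholders:

    list_for_each_entry_safe(lp, tmp, head, lp_hashlist) {
            lnet_nid_t nid = lp->lp_nid;    /* copy what the callee needs */

            my_unlock();                    /* callee may block or relock */
            do_blocking_op(nid);            /* hypothetical blocking call */
            my_lock();                      /* re-acquire before moving on */
    }
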
void
-lnet_peer_tables_cleanup(void)
+lnet_peer_tables_cleanup(lnet_ni_t *ni)
{
struct lnet_peer_table *ptable;
+ struct list_head deathrow;
+ lnet_peer_t *lp;
int i;
- int j;
- LASSERT(the_lnet.ln_shutdown); /* i.e. no new peers */
+ INIT_LIST_HEAD(&deathrow);
+ LASSERT(the_lnet.ln_shutdown || ni);
+ /*
+ * If just deleting the peers for a NI, get rid of any routes these
+ * peers are gateways for.
+ */
cfs_percpt_for_each(ptable, i, the_lnet.ln_peer_tables) {
lnet_net_lock(i);
-
- for (j = 0; j < LNET_PEER_HASH_SIZE; j++) {
- struct list_head *peers = &ptable->pt_hash[j];
-
- while (!list_empty(peers)) {
- lnet_peer_t *lp = list_entry(peers->next,
- lnet_peer_t,
- lp_hashlist);
- list_del_init(&lp->lp_hashlist);
- /* lose hash table's ref */
- lnet_peer_decref_locked(lp);
- }
- }
-
+ lnet_peer_table_del_rtrs_locked(ni, ptable, i);
lnet_net_unlock(i);
}
+ /*
+ * Start the process of moving the applicable peers to
+ * deathrow.
+ */
cfs_percpt_for_each(ptable, i, the_lnet.ln_peer_tables) {
- LIST_HEAD(deathrow);
- lnet_peer_t *lp;
-
lnet_net_lock(i);
-
- for (j = 3; ptable->pt_number; j++) {
- lnet_net_unlock(i);
-
- if (!(j & (j - 1))) {
- CDEBUG(D_WARNING,
- "Waiting for %d peers on peer table\n",
- ptable->pt_number);
- }
- set_current_state(TASK_UNINTERRUPTIBLE);
- schedule_timeout(cfs_time_seconds(1) / 2);
- lnet_net_lock(i);
- }
- list_splice_init(&ptable->pt_deathrow, &deathrow);
-
+ lnet_peer_table_cleanup_locked(ni, ptable);
lnet_net_unlock(i);
+ }
- while (!list_empty(&deathrow)) {
- lp = list_entry(deathrow.next,
- lnet_peer_t, lp_hashlist);
- list_del(&lp->lp_hashlist);
- LIBCFS_FREE(lp, sizeof(*lp));
- }
+ /* Cleanup all entries on deathrow. */
+ cfs_percpt_for_each(ptable, i, the_lnet.ln_peer_tables) {
+ lnet_net_lock(i);
+ lnet_peer_table_deathrow_wait_locked(ptable, i);
+ list_splice_init(&ptable->pt_deathrow, &deathrow);
+ lnet_net_unlock(i);
+ }
+
+ while (!list_empty(&deathrow)) {
+ lp = list_entry(deathrow.next, lnet_peer_t, lp_hashlist);
+ list_del(&lp->lp_hashlist);
+ LIBCFS_FREE(lp, sizeof(*lp));
}
}
@@ -181,6 +236,8 @@
lp->lp_ni = NULL;
list_add(&lp->lp_hashlist, &ptable->pt_deathrow);
+ LASSERT(ptable->pt_zombies > 0);
+ ptable->pt_zombies--;
}
lnet_peer_t *
@@ -336,3 +393,65 @@
lnet_net_unlock(cpt);
}
+
+int
+lnet_get_peer_info(__u32 peer_index, __u64 *nid,
+ char aliveness[LNET_MAX_STR_LEN],
+ __u32 *cpt_iter, __u32 *refcount,
+ __u32 *ni_peer_tx_credits, __u32 *peer_tx_credits,
+ __u32 *peer_rtr_credits, __u32 *peer_min_rtr_credits,
+ __u32 *peer_tx_qnob)
+{
+ struct lnet_peer_table *peer_table;
+ lnet_peer_t *lp;
+ bool found = false;
+ int lncpt, j;
+
+ /* get the number of CPTs */
+ lncpt = cfs_percpt_number(the_lnet.ln_peer_tables);
+
+ /*
+ * if the cpt number to be examined is >= the number of cpts in
+ * the system then indicate that there are no more cpts to examine
+ */
+ if (*cpt_iter >= lncpt)
+ return -ENOENT;
+
+ /* get the current table */
+ peer_table = the_lnet.ln_peer_tables[*cpt_iter];
+ /* if the ptable is NULL then there are no more cpts to examine */
+ if (!peer_table)
+ return -ENOENT;
+
+ lnet_net_lock(*cpt_iter);
+
+ for (j = 0; j < LNET_PEER_HASH_SIZE && !found; j++) {
+ struct list_head *peers = &peer_table->pt_hash[j];
+
+ list_for_each_entry(lp, peers, lp_hashlist) {
+ if (peer_index-- > 0)
+ continue;
+
+ snprintf(aliveness, LNET_MAX_STR_LEN, "NA");
+ if (lnet_isrouter(lp) ||
+ lnet_peer_aliveness_enabled(lp))
+ snprintf(aliveness, LNET_MAX_STR_LEN,
+ lp->lp_alive ? "up" : "down");
+
+ *nid = lp->lp_nid;
+ *refcount = lp->lp_refcount;
+ *ni_peer_tx_credits = lp->lp_ni->ni_peertxcredits;
+ *peer_tx_credits = lp->lp_txcredits;
+ *peer_rtr_credits = lp->lp_rtrcredits;
+ *peer_min_rtr_credits = lp->lp_mintxcredits;
+ *peer_tx_qnob = lp->lp_txqnob;
+
+ found = true;
+ }
+ }
+ lnet_net_unlock(*cpt_iter);
+
+ *cpt_iter = lncpt;
+
+ return found ? 0 : -ENOENT;
+}
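
lnet_get_peer_info() implements a stateless cursor: peer_index is decremented once per entry until it reaches the requested slot, so user space can enumerate the whole table with repeated ioctls. A reduced sketch of the same walk over a chained hash table, with struct item and its link member purely illustrative:

    /* Return the n-th entry of a chained hash table, -ENOENT past the end. */
    static int get_nth(struct list_head *bucket, int nbuckets, int n,
                       struct item **out)
    {
            struct item *it;
            int i;

            for (i = 0; i < nbuckets; i++) {
                    list_for_each_entry(it, &bucket[i], link) {
                            if (n-- > 0)
                                    continue;       /* not at the cursor yet */
                            *out = it;
                            return 0;
                    }
            }
            return -ENOENT;
    }
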
diff --git a/drivers/staging/lustre/lnet/lnet/router.c b/drivers/staging/lustre/lnet/lnet/router.c
index 735a8f2..5e8b0ba 100644
--- a/drivers/staging/lustre/lnet/lnet/router.c
+++ b/drivers/staging/lustre/lnet/lnet/router.c
@@ -15,10 +15,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with Portals; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
*/
#define DEBUG_SUBSYSTEM S_LNET
@@ -28,8 +24,11 @@
#define LNET_NRB_TINY (LNET_NRB_TINY_MIN * 4)
#define LNET_NRB_SMALL_MIN 4096 /* min value for each CPT */
#define LNET_NRB_SMALL (LNET_NRB_SMALL_MIN * 4)
+#define LNET_NRB_SMALL_PAGES 1
#define LNET_NRB_LARGE_MIN 256 /* min value for each CPT */
#define LNET_NRB_LARGE (LNET_NRB_LARGE_MIN * 4)
+#define LNET_NRB_LARGE_PAGES ((LNET_MTU + PAGE_CACHE_SIZE - 1) >> \
+ PAGE_CACHE_SHIFT)
static char *forwarding = "";
module_param(forwarding, charp, 0444);
@@ -318,7 +317,7 @@
return -EINVAL;
if (lnet_islocalnet(net)) /* it's a local network */
- return 0; /* ignore the route entry */
+ return -EEXIST;
/* Assume net, route, all new */
LIBCFS_ALLOC(route, sizeof(*route));
@@ -349,7 +348,7 @@
LIBCFS_FREE(rnet, sizeof(*rnet));
if (rc == -EHOSTUNREACH) /* gateway is not on a local net */
- return 0; /* ignore the route entry */
+ return rc; /* propagate the error instead of ignoring the entry */
CERROR("Error %d creating route %s %d %s\n", rc,
libcfs_net2str(net), hops,
libcfs_nid2str(gateway));
@@ -396,14 +395,20 @@
/* -1 for notify or !add_route */
lnet_peer_decref_locked(route->lr_gateway);
lnet_net_unlock(LNET_LOCK_EX);
+ rc = 0;
- if (!add_route)
+ if (!add_route) {
+ rc = -EEXIST;
LIBCFS_FREE(route, sizeof(*route));
+ }
if (rnet != rnet2)
LIBCFS_FREE(rnet, sizeof(*rnet));
- return 0;
+ /* kick the router checker so it starts up if it's configured */
+ wake_up(&the_lnet.ln_rc_waitq);
+
+ return rc;
}
int
@@ -543,6 +548,38 @@
lnet_del_route(LNET_NIDNET(LNET_NID_ANY), LNET_NID_ANY);
}
+int lnet_get_rtr_pool_cfg(int idx, struct lnet_ioctl_pool_cfg *pool_cfg)
+{
+ int i, rc = -ENOENT, j;
+
+ if (!the_lnet.ln_rtrpools)
+ return rc;
+
+ for (i = 0; i < LNET_NRBPOOLS; i++) {
+ lnet_rtrbufpool_t *rbp;
+
+ lnet_net_lock(LNET_LOCK_EX);
+ cfs_percpt_for_each(rbp, j, the_lnet.ln_rtrpools) {
+ if (i++ != idx)
+ continue;
+
+ pool_cfg->pl_pools[i].pl_npages = rbp[i].rbp_npages;
+ pool_cfg->pl_pools[i].pl_nbuffers = rbp[i].rbp_nbuffers;
+ pool_cfg->pl_pools[i].pl_credits = rbp[i].rbp_credits;
+ pool_cfg->pl_pools[i].pl_mincredits = rbp[i].rbp_mincredits;
+ rc = 0;
+ break;
+ }
+ lnet_net_unlock(LNET_LOCK_EX);
+ }
+
+ lnet_net_lock(LNET_LOCK_EX);
+ pool_cfg->pl_routing = the_lnet.ln_routing;
+ lnet_net_unlock(LNET_LOCK_EX);
+
+ return rc;
+}
+
int
lnet_get_route(int idx, __u32 *net, __u32 *hops,
lnet_nid_t *gateway, __u32 *alive, __u32 *priority)
@@ -570,7 +607,7 @@
*hops = route->lr_hops;
*priority = route->lr_priority;
*gateway = route->lr_gateway->lp_nid;
- *alive = route->lr_gateway->lp_alive;
+ *alive = lnet_is_route_alive(route);
lnet_net_unlock(cpt);
return 0;
}
@@ -608,7 +645,7 @@
{
lnet_ping_info_t *info = rcd->rcd_pinginfo;
struct lnet_peer *gw = rcd->rcd_gateway;
- lnet_route_t *rtr;
+ lnet_route_t *rte;
if (!gw->lp_alive)
return;
@@ -634,12 +671,16 @@
if (!(gw->lp_ping_feats & LNET_PING_FEAT_NI_STATUS))
return; /* can't carry NI status info */
- list_for_each_entry(rtr, &gw->lp_routes, lr_gwlist) {
- int ptl_status = LNET_NI_STATUS_INVALID;
+ list_for_each_entry(rte, &gw->lp_routes, lr_gwlist) {
int down = 0;
int up = 0;
int i;
+ if (gw->lp_ping_feats & LNET_PING_FEAT_RTE_DISABLED) {
+ rte->lr_downis = 1;
+ continue;
+ }
+
for (i = 0; i < info->pi_nnis && i < LNET_MAX_RTR_NIS; i++) {
lnet_ni_status_t *stat = &info->pi_ni[i];
lnet_nid_t nid = stat->ns_nid;
@@ -655,24 +696,15 @@
continue;
if (stat->ns_status == LNET_NI_STATUS_DOWN) {
- if (LNET_NETTYP(LNET_NIDNET(nid)) != PTLLND)
- down++;
- else if (ptl_status != LNET_NI_STATUS_UP)
- ptl_status = LNET_NI_STATUS_DOWN;
+ down++;
continue;
}
if (stat->ns_status == LNET_NI_STATUS_UP) {
- if (LNET_NIDNET(nid) == rtr->lr_net) {
+ if (LNET_NIDNET(nid) == rte->lr_net) {
up = 1;
break;
}
- /*
- * ptl NIs are considered down only when
- * they're all down
- */
- if (LNET_NETTYP(LNET_NIDNET(nid)) == PTLLND)
- ptl_status = LNET_NI_STATUS_UP;
continue;
}
@@ -683,10 +715,10 @@
}
if (up) { /* ignore downed NIs if NI for dest network is up */
- rtr->lr_downis = 0;
+ rte->lr_downis = 0;
continue;
}
- rtr->lr_downis = down + (ptl_status == LNET_NI_STATUS_DOWN);
+ rte->lr_downis = down;
}
}
@@ -985,7 +1017,7 @@
lnet_handle_md_t mdh;
id.nid = rtr->lp_nid;
- id.pid = LUSTRE_SRV_LNET_PID;
+ id.pid = LNET_PID_LUSTRE;
CDEBUG(D_NET, "Check: %s\n", libcfs_id2str(id));
rtr->lp_ping_notsent = 1;
@@ -1014,8 +1046,9 @@
int
lnet_router_checker_start(void)
{
+ struct task_struct *task;
int rc;
- int eqsz;
+ int eqsz = 0;
LASSERT(the_lnet.ln_rc_state == LNET_RC_STATE_SHUTDOWN);
@@ -1025,28 +1058,18 @@
return -EINVAL;
}
- if (!the_lnet.ln_routing &&
- live_router_check_interval <= 0 &&
- dead_router_check_interval <= 0)
- return 0;
-
sema_init(&the_lnet.ln_rc_signal, 0);
- /*
- * EQ size doesn't matter; the callback is guaranteed to get every
- * event
- */
- eqsz = 0;
- rc = LNetEQAlloc(eqsz, lnet_router_checker_event,
- &the_lnet.ln_rc_eqh);
+
+ rc = LNetEQAlloc(0, lnet_router_checker_event, &the_lnet.ln_rc_eqh);
if (rc) {
CERROR("Can't allocate EQ(%d): %d\n", eqsz, rc);
return -ENOMEM;
}
the_lnet.ln_rc_state = LNET_RC_STATE_RUNNING;
- rc = PTR_ERR(kthread_run(lnet_router_checker,
- NULL, "router_checker"));
- if (IS_ERR_VALUE(rc)) {
+ task = kthread_run(lnet_router_checker, NULL, "router_checker");
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
CERROR("Can't start router checker thread: %d\n", rc);
/* block until event callback signals exit */
down(&the_lnet.ln_rc_signal);
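
The hunk above also fixes how the thread start is checked: kthread_run() returns a task pointer, so failure must be tested with IS_ERR() on that pointer, decoding the errno with PTR_ERR() only afterwards; the old PTR_ERR()-first form threw the task pointer away. The usual shape, with my_thread_fn an illustrative name:

    struct task_struct *task;

    task = kthread_run(my_thread_fn, NULL, "my_thread");
    if (IS_ERR(task)) {
            int rc = PTR_ERR(task);     /* decode errno only on failure */

            pr_err("cannot start thread: %d\n", rc);
            return rc;
    }
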
@@ -1078,6 +1101,8 @@
LASSERT(the_lnet.ln_rc_state == LNET_RC_STATE_RUNNING);
the_lnet.ln_rc_state = LNET_RC_STATE_STOPPING;
+ /* wakeup the RC thread if it's sleeping */
+ wake_up(&the_lnet.ln_rc_waitq);
/* block until event callback signals exit */
down(&the_lnet.ln_rc_signal);
@@ -1168,6 +1193,33 @@
lnet_net_unlock(LNET_LOCK_EX);
}
+/*
+ * This function is called to check if the RC should block indefinitely.
+ * It's called from lnet_router_checker() as well as being passed to
+ * wait_event_interruptible() to avoid the lost wake_up problem.
+ *
+ * When it's called from wait_event_interruptible() it must also
+ * avoid sleeping if the RC state is not running, to prevent a
+ * deadlock while the system is shutting down.
+ */
+static inline bool
+lnet_router_checker_active(void)
+{
+ if (the_lnet.ln_rc_state != LNET_RC_STATE_RUNNING)
+ return true;
+
+ /*
+ * Router Checker thread needs to run when routing is enabled in
+ * order to call lnet_update_ni_status_locked()
+ */
+ if (the_lnet.ln_routing)
+ return true;
+
+ return !list_empty(&the_lnet.ln_routers) &&
+ (live_router_check_interval > 0 ||
+ dead_router_check_interval > 0);
+}
+
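
Using the same predicate both inside the thread and as the wait_event_interruptible() condition is what closes the lost-wakeup window: the condition is re-evaluated after the sleeper is queued on the waitqueue, so a wake_up() issued after the state change cannot slip through. A sketch of the pairing, with my_waitq and should_run() as stand-ins:

    /* waker: publish the new state first, then wake */
    state = STATE_STOPPING;
    wake_up(&my_waitq);

    /*
     * sleeper: should_run() is rechecked after queueing on my_waitq,
     * so the wake_up() above can never be missed
     */
    wait_event_interruptible(my_waitq, should_run());
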
static int
lnet_router_checker(void *arg)
{
@@ -1176,8 +1228,6 @@
cfs_block_allsigs();
- LASSERT(the_lnet.ln_rc_state == LNET_RC_STATE_RUNNING);
-
while (the_lnet.ln_rc_state == LNET_RC_STATE_RUNNING) {
__u64 version;
int cpt;
@@ -1221,12 +1271,20 @@
* because kernel counts # active tasks as nr_running
* + nr_uninterruptible.
*/
- set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout(cfs_time_seconds(1));
+ /*
+ * If there are any routes then wake up every second. If
+ * there are no routes then sleep indefinitely until woken
+ * up by a user adding a route.
+ */
+ if (!lnet_router_checker_active())
+ wait_event_interruptible(the_lnet.ln_rc_waitq,
+ lnet_router_checker_active());
+ else
+ wait_event_interruptible_timeout(the_lnet.ln_rc_waitq,
+ false,
+ cfs_time_seconds(1));
}
- LASSERT(the_lnet.ln_rc_state == LNET_RC_STATE_STOPPING);
-
lnet_prune_rc_data(1); /* wait for UNLINK */
the_lnet.ln_rc_state = LNET_RC_STATE_SHUTDOWN;
@@ -1235,7 +1293,7 @@
return 0;
}
-static void
+void
lnet_destroy_rtrbuf(lnet_rtrbuf_t *rb, int npages)
{
int sz = offsetof(lnet_rtrbuf_t, rb_kiov[npages]);
@@ -1282,67 +1340,119 @@
}
static void
-lnet_rtrpool_free_bufs(lnet_rtrbufpool_t *rbp)
+lnet_rtrpool_free_bufs(lnet_rtrbufpool_t *rbp, int cpt)
{
int npages = rbp->rbp_npages;
- int nbuffers = 0;
+ struct list_head tmp;
lnet_rtrbuf_t *rb;
if (!rbp->rbp_nbuffers) /* not initialized or already freed */
return;
- LASSERT(list_empty(&rbp->rbp_msgs));
- LASSERT(rbp->rbp_credits == rbp->rbp_nbuffers);
+ INIT_LIST_HEAD(&tmp);
- while (!list_empty(&rbp->rbp_bufs)) {
- LASSERT(rbp->rbp_credits > 0);
-
- rb = list_entry(rbp->rbp_bufs.next,
- lnet_rtrbuf_t, rb_list);
- list_del(&rb->rb_list);
- lnet_destroy_rtrbuf(rb, npages);
- nbuffers++;
- }
-
- LASSERT(rbp->rbp_nbuffers == nbuffers);
- LASSERT(rbp->rbp_credits == nbuffers);
-
+ lnet_net_lock(cpt);
+ lnet_drop_routed_msgs_locked(&rbp->rbp_msgs, cpt);
+ list_splice_init(&rbp->rbp_bufs, &tmp);
+ rbp->rbp_req_nbuffers = 0;
rbp->rbp_nbuffers = 0;
rbp->rbp_credits = 0;
+ rbp->rbp_mincredits = 0;
+ lnet_net_unlock(cpt);
+
+ /* Free buffers on the free list. */
+ while (!list_empty(&tmp)) {
+ rb = list_entry(tmp.next, lnet_rtrbuf_t, rb_list);
+ list_del(&rb->rb_list);
+ lnet_destroy_rtrbuf(rb, npages);
+ }
}
static int
-lnet_rtrpool_alloc_bufs(lnet_rtrbufpool_t *rbp, int nbufs, int cpt)
+lnet_rtrpool_adjust_bufs(lnet_rtrbufpool_t *rbp, int nbufs, int cpt)
{
+ struct list_head rb_list;
lnet_rtrbuf_t *rb;
- int i;
+ int num_rb;
+ int num_buffers = 0;
+ int old_req_nbufs;
+ int npages = rbp->rbp_npages;
- if (rbp->rbp_nbuffers) {
- LASSERT(rbp->rbp_nbuffers == nbufs);
+ lnet_net_lock(cpt);
+ /*
+ * If we are called for fewer buffers than are already in the pool,
+ * we just lower the req_nbuffers number and excess buffers will be
+ * thrown away as they are returned to the free list. Credits
+ * then get adjusted as well.
+ * If we already have enough buffers allocated to serve the
+ * increase requested, then we can treat that the same way as we
+ * do the decrease.
+ */
+ num_rb = nbufs - rbp->rbp_nbuffers;
+ if (nbufs <= rbp->rbp_req_nbuffers || num_rb <= 0) {
+ rbp->rbp_req_nbuffers = nbufs;
+ lnet_net_unlock(cpt);
return 0;
}
+ /*
+ * Store the old value of rbp_req_nbuffers and then set it to
+ * the new request to prevent lnet_return_rx_credits_locked() from
+ * freeing buffers that we need to keep around.
+ */
+ old_req_nbufs = rbp->rbp_req_nbuffers;
+ rbp->rbp_req_nbuffers = nbufs;
+ lnet_net_unlock(cpt);
- for (i = 0; i < nbufs; i++) {
+ INIT_LIST_HEAD(&rb_list);
+
+ /*
+ * allocate the buffers on a local list first. If all buffers are
+ * allocated successfully then join this list to the rbp buffer
+ * list. If not then free all allocated buffers.
+ */
+ while (num_rb-- > 0) {
rb = lnet_new_rtrbuf(rbp, cpt);
-
if (!rb) {
- CERROR("Failed to allocate %d router bufs of %d pages\n",
- nbufs, rbp->rbp_npages);
- return -ENOMEM;
+ CERROR("Failed to allocate %d route bufs of %d pages\n",
+ nbufs, npages);
+
+ lnet_net_lock(cpt);
+ rbp->rbp_req_nbuffers = old_req_nbufs;
+ lnet_net_unlock(cpt);
+
+ goto failed;
}
- rbp->rbp_nbuffers++;
- rbp->rbp_credits++;
- rbp->rbp_mincredits++;
- list_add(&rb->rb_list, &rbp->rbp_bufs);
-
- /* No allocation "under fire" */
- /* Otherwise we'd need code to schedule blocked msgs etc */
- LASSERT(!the_lnet.ln_routing);
+ list_add(&rb->rb_list, &rb_list);
+ num_buffers++;
}
- LASSERT(rbp->rbp_credits == nbufs);
+ lnet_net_lock(cpt);
+
+ list_splice_tail(&rb_list, &rbp->rbp_bufs);
+ rbp->rbp_nbuffers += num_buffers;
+ rbp->rbp_credits += num_buffers;
+ rbp->rbp_mincredits = rbp->rbp_credits;
+ /*
+ * We need to schedule the blocked msgs using the newly
+ * added buffers.
+ */
+ while (!list_empty(&rbp->rbp_bufs) &&
+ !list_empty(&rbp->rbp_msgs))
+ lnet_schedule_blocked_locked(rbp);
+
+ lnet_net_unlock(cpt);
+
return 0;
+
+failed:
+ while (!list_empty(&rb_list)) {
+ rb = list_entry(rb_list.next, lnet_rtrbuf_t, rb_list);
+ list_del(&rb->rb_list);
+ lnet_destroy_rtrbuf(rb, npages);
+ }
+
+ return -ENOMEM;
}
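
lnet_rtrpool_adjust_bufs() grows the pool all-or-nothing: new buffers are first collected on a private rb_list and only spliced into rbp_bufs under the lock once every allocation has succeeded, so a mid-way -ENOMEM never publishes a half-grown pool. The pattern in miniature, with alloc_one()/free_one() hypothetical:

    LIST_HEAD(staging);
    int n;

    for (n = 0; n < want; n++) {
            struct buf *b = alloc_one();    /* hypothetical allocator */

            if (!b)
                    goto undo;              /* nothing published yet */
            list_add(&b->link, &staging);
    }

    my_lock();
    list_splice_tail(&staging, &pool);      /* publish atomically */
    my_unlock();
    return 0;

    undo:
    while (!list_empty(&staging)) {
            struct buf *b = list_first_entry(&staging, struct buf, link);

            list_del(&b->link);
            free_one(b);
    }
    return -ENOMEM;
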
static void
@@ -1357,7 +1467,7 @@
}
void
-lnet_rtrpools_free(void)
+lnet_rtrpools_free(int keep_pools)
{
lnet_rtrbufpool_t *rtrp;
int i;
@@ -1366,17 +1476,19 @@
return;
cfs_percpt_for_each(rtrp, i, the_lnet.ln_rtrpools) {
- lnet_rtrpool_free_bufs(&rtrp[0]);
- lnet_rtrpool_free_bufs(&rtrp[1]);
- lnet_rtrpool_free_bufs(&rtrp[2]);
+ lnet_rtrpool_free_bufs(&rtrp[LNET_TINY_BUF_IDX], i);
+ lnet_rtrpool_free_bufs(&rtrp[LNET_SMALL_BUF_IDX], i);
+ lnet_rtrpool_free_bufs(&rtrp[LNET_LARGE_BUF_IDX], i);
}
- cfs_percpt_free(the_lnet.ln_rtrpools);
- the_lnet.ln_rtrpools = NULL;
+ if (!keep_pools) {
+ cfs_percpt_free(the_lnet.ln_rtrpools);
+ the_lnet.ln_rtrpools = NULL;
+ }
}
static int
-lnet_nrb_tiny_calculate(int npages)
+lnet_nrb_tiny_calculate(void)
{
int nrbs = LNET_NRB_TINY;
@@ -1395,7 +1507,7 @@
}
static int
-lnet_nrb_small_calculate(int npages)
+lnet_nrb_small_calculate(void)
{
int nrbs = LNET_NRB_SMALL;
@@ -1414,7 +1526,7 @@
}
static int
-lnet_nrb_large_calculate(int npages)
+lnet_nrb_large_calculate(void)
{
int nrbs = LNET_NRB_LARGE;
@@ -1436,16 +1548,12 @@
lnet_rtrpools_alloc(int im_a_router)
{
lnet_rtrbufpool_t *rtrp;
- int large_pages;
- int small_pages = 1;
int nrb_tiny;
int nrb_small;
int nrb_large;
int rc;
int i;
- large_pages = (LNET_MTU + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
-
if (!strcmp(forwarding, "")) {
/* not set either way */
if (!im_a_router)
@@ -1460,15 +1568,15 @@
return -EINVAL;
}
- nrb_tiny = lnet_nrb_tiny_calculate(0);
+ nrb_tiny = lnet_nrb_tiny_calculate();
if (nrb_tiny < 0)
return -EINVAL;
- nrb_small = lnet_nrb_small_calculate(small_pages);
+ nrb_small = lnet_nrb_small_calculate();
if (nrb_small < 0)
return -EINVAL;
- nrb_large = lnet_nrb_large_calculate(large_pages);
+ nrb_large = lnet_nrb_large_calculate();
if (nrb_large < 0)
return -EINVAL;
@@ -1482,18 +1590,23 @@
}
cfs_percpt_for_each(rtrp, i, the_lnet.ln_rtrpools) {
- lnet_rtrpool_init(&rtrp[0], 0);
- rc = lnet_rtrpool_alloc_bufs(&rtrp[0], nrb_tiny, i);
+ lnet_rtrpool_init(&rtrp[LNET_TINY_BUF_IDX], 0);
+ rc = lnet_rtrpool_adjust_bufs(&rtrp[LNET_TINY_BUF_IDX],
+ nrb_tiny, i);
if (rc)
goto failed;
- lnet_rtrpool_init(&rtrp[1], small_pages);
- rc = lnet_rtrpool_alloc_bufs(&rtrp[1], nrb_small, i);
+ lnet_rtrpool_init(&rtrp[LNET_SMALL_BUF_IDX],
+ LNET_NRB_SMALL_PAGES);
+ rc = lnet_rtrpool_adjust_bufs(&rtrp[LNET_SMALL_BUF_IDX],
+ nrb_small, i);
if (rc)
goto failed;
- lnet_rtrpool_init(&rtrp[2], large_pages);
- rc = lnet_rtrpool_alloc_bufs(&rtrp[2], nrb_large, i);
+ lnet_rtrpool_init(&rtrp[LNET_LARGE_BUF_IDX],
+ LNET_NRB_LARGE_PAGES);
+ rc = lnet_rtrpool_adjust_bufs(&rtrp[LNET_LARGE_BUF_IDX],
+ nrb_large, i);
if (rc)
goto failed;
}
@@ -1505,10 +1618,118 @@
return 0;
failed:
- lnet_rtrpools_free();
+ lnet_rtrpools_free(0);
return rc;
}
+static int
+lnet_rtrpools_adjust_helper(int tiny, int small, int large)
+{
+ int nrb = 0;
+ int rc = 0;
+ int i;
+ lnet_rtrbufpool_t *rtrp;
+
+ /*
+ * If the provided values for each buffer pool are different from the
+ * configured values, we need to take action.
+ */
+ if (tiny >= 0) {
+ tiny_router_buffers = tiny;
+ nrb = lnet_nrb_tiny_calculate();
+ cfs_percpt_for_each(rtrp, i, the_lnet.ln_rtrpools) {
+ rc = lnet_rtrpool_adjust_bufs(&rtrp[LNET_TINY_BUF_IDX],
+ nrb, i);
+ if (rc)
+ return rc;
+ }
+ }
+ if (small >= 0) {
+ small_router_buffers = small;
+ nrb = lnet_nrb_small_calculate();
+ cfs_percpt_for_each(rtrp, i, the_lnet.ln_rtrpools) {
+ rc = lnet_rtrpool_adjust_bufs(&rtrp[LNET_SMALL_BUF_IDX],
+ nrb, i);
+ if (rc)
+ return rc;
+ }
+ }
+ if (large >= 0) {
+ large_router_buffers = large;
+ nrb = lnet_nrb_large_calculate();
+ cfs_percpt_for_each(rtrp, i, the_lnet.ln_rtrpools) {
+ rc = lnet_rtrpool_adjust_bufs(&rtrp[LNET_LARGE_BUF_IDX],
+ nrb, i);
+ if (rc)
+ return rc;
+ }
+ }
+
+ return 0;
+}
+
+int
+lnet_rtrpools_adjust(int tiny, int small, int large)
+{
+ /*
+ * this function doesn't revert the changes if adding new buffers
+ * failed. It's up to the user space caller to revert the
+ * changes.
+ */
+ if (!the_lnet.ln_routing)
+ return 0;
+
+ return lnet_rtrpools_adjust_helper(tiny, small, large);
+}
+
+int
+lnet_rtrpools_enable(void)
+{
+ int rc;
+
+ if (the_lnet.ln_routing)
+ return 0;
+
+ if (!the_lnet.ln_rtrpools)
+ /*
+ * If routing is turned off, and we have never
+ * initialized the pools before, just call the
+ * standard buffer pool allocation routine as
+ * if we are just configuring this for the first
+ * time.
+ */
+ return lnet_rtrpools_alloc(1);
+
+ rc = lnet_rtrpools_adjust_helper(0, 0, 0);
+ if (rc)
+ return rc;
+
+ lnet_net_lock(LNET_LOCK_EX);
+ the_lnet.ln_routing = 1;
+
+ the_lnet.ln_ping_info->pi_features &= ~LNET_PING_FEAT_RTE_DISABLED;
+ lnet_net_unlock(LNET_LOCK_EX);
+
+ return 0;
+}
+
+void
+lnet_rtrpools_disable(void)
+{
+ if (!the_lnet.ln_routing)
+ return;
+
+ lnet_net_lock(LNET_LOCK_EX);
+ the_lnet.ln_routing = 0;
+ the_lnet.ln_ping_info->pi_features |= LNET_PING_FEAT_RTE_DISABLED;
+
+ tiny_router_buffers = 0;
+ small_router_buffers = 0;
+ large_router_buffers = 0;
+ lnet_net_unlock(LNET_LOCK_EX);
+ lnet_rtrpools_free(1);
+}
+
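
Enable and disable are deliberately idempotent (early return when ln_routing already holds the requested value), and both publish the change in two places under the exclusive lock: the local ln_routing flag and the LNET_PING_FEAT_RTE_DISABLED bit that peers see in the ping info. Condensed to a single sketch of the toggle, using the names from the hunk:

    lnet_net_lock(LNET_LOCK_EX);
    the_lnet.ln_routing = enable;
    if (enable)
            the_lnet.ln_ping_info->pi_features &= ~LNET_PING_FEAT_RTE_DISABLED;
    else
            the_lnet.ln_ping_info->pi_features |= LNET_PING_FEAT_RTE_DISABLED;
    lnet_net_unlock(LNET_LOCK_EX);
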
int
lnet_notify(lnet_ni_t *ni, lnet_nid_t nid, int alive, unsigned long when)
{
diff --git a/drivers/staging/lustre/lnet/lnet/router_proc.c b/drivers/staging/lustre/lnet/lnet/router_proc.c
index a7aaf0c..fc643df 100644
--- a/drivers/staging/lustre/lnet/lnet/router_proc.c
+++ b/drivers/staging/lustre/lnet/lnet/router_proc.c
@@ -15,10 +15,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with Portals; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
*/
#define DEBUG_SUBSYSTEM S_LNET
@@ -242,7 +238,7 @@
unsigned int hops = route->lr_hops;
unsigned int priority = route->lr_priority;
lnet_nid_t nid = route->lr_gateway->lp_nid;
- int alive = route->lr_gateway->lp_alive;
+ int alive = lnet_is_route_alive(route);
s += snprintf(s, tmpstr + tmpsiz - s,
"%-8s %4u %8u %7s %s\n",
@@ -804,8 +800,6 @@
},
};
-extern int portal_rotor;
-
static int __proc_lnet_portal_rotor(void *data, int write,
loff_t pos, void __user *buffer, int nob)
{
diff --git a/drivers/staging/lustre/lnet/selftest/conctl.c b/drivers/staging/lustre/lnet/selftest/conctl.c
index 210e24e..90b7771 100644
--- a/drivers/staging/lustre/lnet/selftest/conctl.c
+++ b/drivers/staging/lustre/lnet/selftest/conctl.c
@@ -801,15 +801,20 @@
}
int
-lstcon_ioctl_entry(unsigned int cmd, struct libcfs_ioctl_data *data)
+lstcon_ioctl_entry(unsigned int cmd, struct libcfs_ioctl_hdr *hdr)
{
char *buf;
- int opc = data->ioc_u32[0];
+ struct libcfs_ioctl_data *data;
+ int opc;
int rc;
if (cmd != IOC_LIBCFS_LNETST)
return -EINVAL;
+ data = container_of(hdr, struct libcfs_ioctl_data, ioc_hdr);
+
+ opc = data->ioc_u32[0];
+
if (data->ioc_plen1 > PAGE_CACHE_SIZE)
return -EINVAL;
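
The rework above recovers the specific ioctl payload from the generic header with container_of(), which computes the address of the enclosing structure from a pointer to one of its members instead of relying on a blind cast. A minimal illustration with hypothetical outer/hdr types:

    struct outer {                  /* hypothetical example types */
            int cookie;
            struct hdr h;           /* embedded header member */
    };

    static void handle(struct hdr *hp)
    {
            /* step back from the member to the structure embedding it */
            struct outer *o = container_of(hp, struct outer, h);

            pr_debug("cookie=%d\n", o->cookie);
    }
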
diff --git a/drivers/staging/lustre/lnet/selftest/console.c b/drivers/staging/lustre/lnet/selftest/console.c
index 54fb1ab..e8ca1bf 100644
--- a/drivers/staging/lustre/lnet/selftest/console.c
+++ b/drivers/staging/lustre/lnet/selftest/console.c
@@ -58,7 +58,7 @@
(p)->nle_nnode++; \
} while (0)
-lstcon_session_t console_session;
+struct lstcon_session console_session;
static void
lstcon_node_get(lstcon_node_t *nd)
@@ -1693,8 +1693,6 @@
sid->ses_stamp = cfs_time_current();
}
-extern srpc_service_t lstcon_acceptor_service;
-
int
lstcon_session_new(char *name, int key, unsigned feats,
int timeout, int force, lst_sid_t __user *sid_up)
@@ -1973,7 +1971,7 @@
return rc;
}
-srpc_service_t lstcon_acceptor_service;
+static srpc_service_t lstcon_acceptor_service;
static void lstcon_init_acceptor_service(void)
{
/* initialize selftest console acceptor service table */
@@ -1983,7 +1981,7 @@
lstcon_acceptor_service.sv_wi_total = SFW_FRWK_WI_MAX;
}
-extern int lstcon_ioctl_entry(unsigned int cmd, struct libcfs_ioctl_data *data);
+extern int lstcon_ioctl_entry(unsigned int cmd, struct libcfs_ioctl_hdr *hdr);
static DECLARE_IOCTL_HANDLER(lstcon_ioctl_handler, lstcon_ioctl_entry);
@@ -1994,7 +1992,7 @@
int i;
int rc;
- memset(&console_session, 0, sizeof(lstcon_session_t));
+ memset(&console_session, 0, sizeof(struct lstcon_session));
console_session.ses_id = LST_INVALID_SID;
console_session.ses_state = LST_SESSION_NONE;
diff --git a/drivers/staging/lustre/lnet/selftest/console.h b/drivers/staging/lustre/lnet/selftest/console.h
index 5651b08..c9d1081 100644
--- a/drivers/staging/lustre/lnet/selftest/console.h
+++ b/drivers/staging/lustre/lnet/selftest/console.h
@@ -135,7 +135,7 @@
#define LST_CONSOLE_TIMEOUT 300 /* default console timeout */
-typedef struct {
+struct lstcon_session {
struct mutex ses_mutex; /* only 1 thread in session */
lst_sid_t ses_id; /* global session id */
int ses_key; /* local session key */
@@ -165,9 +165,9 @@
spinlock_t ses_rpc_lock; /* serialize */
atomic_t ses_rpc_counter; /* # of initialized RPCs */
struct list_head ses_rpc_freelist; /* idle console rpc */
-} lstcon_session_t; /* session descriptor */
+}; /* session descriptor */
-extern lstcon_session_t console_session;
+extern struct lstcon_session console_session;
static inline lstcon_trans_stat_t *
lstcon_trans_stat(void)
@@ -184,7 +184,6 @@
}
int lstcon_console_init(void);
-int lstcon_ioctl_entry(unsigned int cmd, struct libcfs_ioctl_data *data);
int lstcon_console_fini(void);
int lstcon_session_match(lst_sid_t sid);
int lstcon_session_new(char *name, int key, unsigned version,
diff --git a/drivers/staging/lustre/lnet/selftest/framework.c b/drivers/staging/lustre/lnet/selftest/framework.c
index 7eca046..3bbc720 100644
--- a/drivers/staging/lustre/lnet/selftest/framework.c
+++ b/drivers/staging/lustre/lnet/selftest/framework.c
@@ -1629,16 +1629,6 @@
}
};
-extern sfw_test_client_ops_t ping_test_client;
-extern srpc_service_t ping_test_service;
-extern void ping_init_test_client(void);
-extern void ping_init_test_service(void);
-
-extern sfw_test_client_ops_t brw_test_client;
-extern srpc_service_t brw_test_service;
-extern void brw_init_test_client(void);
-extern void brw_init_test_service(void);
-
int
sfw_startup(void)
{
diff --git a/drivers/staging/lustre/lnet/selftest/module.c b/drivers/staging/lustre/lnet/selftest/module.c
index c4bf442..cbb7884 100644
--- a/drivers/staging/lustre/lnet/selftest/module.c
+++ b/drivers/staging/lustre/lnet/selftest/module.c
@@ -37,6 +37,7 @@
#define DEBUG_SUBSYSTEM S_LNET
#include "selftest.h"
+#include "console.h"
enum {
LST_INIT_NONE = 0,
@@ -47,9 +48,6 @@
LST_INIT_CONSOLE
};
-extern int lstcon_console_init(void);
-extern int lstcon_console_fini(void);
-
static int lst_init_step = LST_INIT_NONE;
struct cfs_wi_sched *lst_sched_serial;
diff --git a/drivers/staging/lustre/lnet/selftest/rpc.c b/drivers/staging/lustre/lnet/selftest/rpc.c
index f95fd9b..1b76933 100644
--- a/drivers/staging/lustre/lnet/selftest/rpc.c
+++ b/drivers/staging/lustre/lnet/selftest/rpc.c
@@ -1097,7 +1097,7 @@
spin_unlock(&srpc_data.rpc_glock);
}
-inline void
+static void
srpc_add_client_rpc_timer(srpc_client_rpc_t *rpc)
{
stt_timer_t *timer = &rpc->crpc_timer;
@@ -1612,7 +1612,7 @@
srpc_data.rpc_state = SRPC_STATE_NONE;
- rc = LNetNIInit(LUSTRE_SRV_LNET_PID);
+ rc = LNetNIInit(LNET_PID_LUSTRE);
if (rc < 0) {
CERROR("LNetNIInit() has failed: %d\n", rc);
return rc;
diff --git a/drivers/staging/lustre/lustre/fid/fid_request.c b/drivers/staging/lustre/lustre/fid/fid_request.c
index ff8f38d..70400aa 100644
--- a/drivers/staging/lustre/lustre/fid/fid_request.c
+++ b/drivers/staging/lustre/lustre/fid/fid_request.c
@@ -68,7 +68,7 @@
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp), &RQF_SEQ_QUERY,
LUSTRE_MDS_VERSION, SEQ_QUERY);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
/* Init operation code */
@@ -95,7 +95,8 @@
* precreating objects on this OST), and it will send the
* request to MDT0 here, so we can not keep resending the
* request here, otherwise if MDT0 is failed(umounted),
- * it can not release the export of MDT0 */
+ * it can not release the export of MDT0
+ */
if (seq->lcs_type == LUSTRE_SEQ_DATA)
req->rq_no_delay = req->rq_no_resend = 1;
debug_mask = D_CONSOLE;
@@ -152,7 +153,8 @@
/* If meta server return -EINPROGRESS or EAGAIN,
* it means meta server might not be ready to
* allocate super sequence from sequence controller
- * (MDT0)yet */
+ * (MDT0)yet
+ */
rc = seq_client_rpc(seq, &seq->lcs_space,
SEQ_ALLOC_META, "meta");
} while (rc == -EINPROGRESS || rc == -EAGAIN);
@@ -226,8 +228,8 @@
wait_queue_t link;
int rc;
- LASSERT(seq != NULL);
- LASSERT(fid != NULL);
+ LASSERT(seq);
+ LASSERT(fid);
init_waitqueue_entry(&link, current);
mutex_lock(&seq->lcs_mutex);
@@ -292,7 +294,7 @@
{
wait_queue_t link;
- LASSERT(seq != NULL);
+ LASSERT(seq);
init_waitqueue_entry(&link, current);
mutex_lock(&seq->lcs_mutex);
@@ -375,8 +377,8 @@
{
int rc;
- LASSERT(seq != NULL);
- LASSERT(prefix != NULL);
+ LASSERT(seq);
+ LASSERT(prefix);
seq->lcs_type = type;
@@ -438,7 +440,7 @@
{
struct client_obd *cli = &obd->u.cli;
- if (cli->cl_seq != NULL) {
+ if (cli->cl_seq) {
seq_client_fini(cli->cl_seq);
kfree(cli->cl_seq);
cli->cl_seq = NULL;
diff --git a/drivers/staging/lustre/lustre/fid/lproc_fid.c b/drivers/staging/lustre/lustre/fid/lproc_fid.c
index 0320b6e..1f0e786 100644
--- a/drivers/staging/lustre/lustre/fid/lproc_fid.c
+++ b/drivers/staging/lustre/lustre/fid/lproc_fid.c
@@ -66,7 +66,7 @@
int rc;
char kernbuf[MAX_FID_RANGE_STRLEN];
- LASSERT(range != NULL);
+ LASSERT(range);
if (count >= sizeof(kernbuf))
return -EINVAL;
@@ -104,7 +104,6 @@
int rc;
seq = ((struct seq_file *)file->private_data)->private;
- LASSERT(seq != NULL);
mutex_lock(&seq->lcs_mutex);
rc = ldebugfs_fid_write_common(buffer, count, &seq->lcs_space);
@@ -124,8 +123,6 @@
{
struct lu_client_seq *seq = (struct lu_client_seq *)m->private;
- LASSERT(seq != NULL);
-
mutex_lock(&seq->lcs_mutex);
seq_printf(m, "[%#llx - %#llx]:%x:%s\n", PRANGE(&seq->lcs_space));
mutex_unlock(&seq->lcs_mutex);
@@ -143,7 +140,6 @@
int rc, val;
seq = ((struct seq_file *)file->private_data)->private;
- LASSERT(seq != NULL);
rc = lprocfs_write_helper(buffer, count, &val);
if (rc)
@@ -172,8 +168,6 @@
{
struct lu_client_seq *seq = (struct lu_client_seq *)m->private;
- LASSERT(seq != NULL);
-
mutex_lock(&seq->lcs_mutex);
seq_printf(m, "%llu\n", seq->lcs_width);
mutex_unlock(&seq->lcs_mutex);
@@ -186,8 +180,6 @@
{
struct lu_client_seq *seq = (struct lu_client_seq *)m->private;
- LASSERT(seq != NULL);
-
mutex_lock(&seq->lcs_mutex);
seq_printf(m, DFID "\n", PFID(&seq->lcs_fid));
mutex_unlock(&seq->lcs_mutex);
@@ -201,9 +193,7 @@
struct lu_client_seq *seq = (struct lu_client_seq *)m->private;
struct client_obd *cli;
- LASSERT(seq != NULL);
-
- if (seq->lcs_exp != NULL) {
+ if (seq->lcs_exp) {
cli = &seq->lcs_exp->exp_obd->u.cli;
seq_printf(m, "%s\n", cli->cl_target_uuid.uuid);
}
diff --git a/drivers/staging/lustre/lustre/fld/fld_cache.c b/drivers/staging/lustre/lustre/fld/fld_cache.c
index d9459e5..2b09b76 100644
--- a/drivers/staging/lustre/lustre/fld/fld_cache.c
+++ b/drivers/staging/lustre/lustre/fld/fld_cache.c
@@ -65,7 +65,7 @@
{
struct fld_cache *cache;
- LASSERT(name != NULL);
+ LASSERT(name);
LASSERT(cache_threshold < cache_size);
cache = kzalloc(sizeof(*cache), GFP_NOFS);
@@ -100,7 +100,7 @@
{
__u64 pct;
- LASSERT(cache != NULL);
+ LASSERT(cache);
fld_cache_flush(cache);
if (cache->fci_stat.fst_count > 0) {
@@ -183,7 +183,8 @@
}
/* we could have overlap over next
- * range too. better restart. */
+ * range too. better restart.
+ */
goto restart_fixup;
}
@@ -218,8 +219,6 @@
struct list_head *curr;
int num = 0;
- LASSERT(cache != NULL);
-
if (cache->fci_cache_count < cache->fci_cache_size)
return 0;
@@ -304,7 +303,8 @@
const u32 mdt = range->lsr_index;
/* this is overlap case, these case are checking overlapping with
- * prev range only. fixup will handle overlapping with next range. */
+ * prev range only. fixup will handle overlapping with next range.
+ */
if (f_curr->fce_range.lsr_index == mdt) {
f_curr->fce_range.lsr_start = min(f_curr->fce_range.lsr_start,
@@ -319,7 +319,8 @@
} else if (new_start <= f_curr->fce_range.lsr_start &&
f_curr->fce_range.lsr_end <= new_end) {
/* case 1: new range completely overshadowed existing range.
- * e.g. whole range migrated. update fld cache entry */
+ * e.g. whole range migrated. update fld cache entry
+ */
f_curr->fce_range = *range;
kfree(f_new);
@@ -414,7 +415,7 @@
}
}
- if (prev == NULL)
+ if (!prev)
prev = head;
CDEBUG(D_INFO, "insert range "DRANGE"\n", PRANGE(&f_new->fce_range));
@@ -499,7 +500,7 @@
cache->fci_stat.fst_count++;
list_for_each_entry(flde, head, fce_list) {
if (flde->fce_range.lsr_start > seq) {
- if (prev != NULL)
+ if (prev)
*range = prev->fce_range;
break;
}
diff --git a/drivers/staging/lustre/lustre/fld/fld_internal.h b/drivers/staging/lustre/lustre/fld/fld_internal.h
index 12eb164..e8a3caf 100644
--- a/drivers/staging/lustre/lustre/fld/fld_internal.h
+++ b/drivers/staging/lustre/lustre/fld/fld_internal.h
@@ -58,22 +58,16 @@
__u64 fst_inflight;
};
-typedef int (*fld_hash_func_t) (struct lu_client_fld *, __u64);
-
-typedef struct lu_fld_target *
-(*fld_scan_func_t) (struct lu_client_fld *, __u64);
-
struct lu_fld_hash {
const char *fh_name;
- fld_hash_func_t fh_hash_func;
- fld_scan_func_t fh_scan_func;
+ int (*fh_hash_func)(struct lu_client_fld *, __u64);
+ struct lu_fld_target *(*fh_scan_func)(struct lu_client_fld *, __u64);
};
struct fld_cache_entry {
struct list_head fce_lru;
struct list_head fce_list;
- /**
- * fld cache entries are sorted on range->lsr_start field. */
+ /** fld cache entries are sorted on range->lsr_start field. */
struct lu_seq_range fce_range;
};
@@ -84,32 +78,25 @@
*/
rwlock_t fci_lock;
- /**
- * Cache shrink threshold */
+ /** Cache shrink threshold */
int fci_threshold;
- /**
- * Preferred number of cached entries */
+ /** Preferred number of cached entries */
int fci_cache_size;
- /**
- * Current number of cached entries. Protected by \a fci_lock */
+ /** Current number of cached entries. Protected by \a fci_lock */
int fci_cache_count;
- /**
- * LRU list fld entries. */
+ /** LRU list fld entries. */
struct list_head fci_lru;
- /**
- * sorted fld entries. */
+ /** sorted fld entries. */
struct list_head fci_entries_head;
- /**
- * Cache statistics. */
+ /** Cache statistics. */
struct fld_stats fci_stat;
- /**
- * Cache name used for debug and messages. */
+ /** Cache name used for debug and messages. */
char fci_name[LUSTRE_MDT_MAXNAMELEN];
unsigned int fci_no_shrink:1;
};
@@ -169,7 +156,7 @@
static inline const char *
fld_target_name(struct lu_fld_target *tar)
{
- if (tar->ft_srv != NULL)
+ if (tar->ft_srv)
return tar->ft_srv->lsf_name;
return (const char *)tar->ft_exp->exp_obd->obd_name;
diff --git a/drivers/staging/lustre/lustre/fld/fld_request.c b/drivers/staging/lustre/lustre/fld/fld_request.c
index d92c01b..b50a57b 100644
--- a/drivers/staging/lustre/lustre/fld/fld_request.c
+++ b/drivers/staging/lustre/lustre/fld/fld_request.c
@@ -58,7 +58,8 @@
#include "fld_internal.h"
/* TODO: these 3 functions are copies of flow-control code from mdc_lib.c
- * It should be common thing. The same about mdc RPC lock */
+ * It should be common thing. The same about mdc RPC lock
+ */
static int fld_req_avail(struct client_obd *cli, struct mdc_cache_waiter *mcw)
{
int rc;
@@ -124,7 +125,8 @@
* it should go to index 0 directly, instead of calculating
* hash again, and also if other MDTs is not being connected,
* the fld lookup requests(for seq on MDT0) should not be
- * blocked because of other MDTs */
+ * blocked because of other MDTs
+ */
if (fid_seq_is_norm(seq))
hash = fld_rrb_hash(fld, seq);
else
@@ -139,7 +141,8 @@
if (hash != 0) {
/* It is possible the remote target(MDT) are not connected to
* with client yet, so we will refer this to MDT0, which should
- * be connected during mount */
+ * be connected during mount
+ */
hash = 0;
goto again;
}
@@ -148,9 +151,9 @@
fld->lcf_name, hash, seq, fld->lcf_count);
list_for_each_entry(target, &fld->lcf_targets, ft_chain) {
- const char *srv_name = target->ft_srv != NULL ?
+ const char *srv_name = target->ft_srv ?
target->ft_srv->lsf_name : "<null>";
- const char *exp_name = target->ft_exp != NULL ?
+ const char *exp_name = target->ft_exp ?
(char *)target->ft_exp->exp_obd->obd_uuid.uuid :
"<null>";
@@ -183,13 +186,13 @@
{
struct lu_fld_target *target;
- LASSERT(fld->lcf_hash != NULL);
+ LASSERT(fld->lcf_hash);
spin_lock(&fld->lcf_lock);
target = fld->lcf_hash->fh_scan_func(fld, seq);
spin_unlock(&fld->lcf_lock);
- if (target != NULL) {
+ if (target) {
CDEBUG(D_INFO, "%s: Found target (idx %llu) by seq %#llx\n",
fld->lcf_name, target->ft_idx, seq);
}
@@ -207,10 +210,10 @@
const char *name;
struct lu_fld_target *target, *tmp;
- LASSERT(tar != NULL);
+ LASSERT(tar);
name = fld_target_name(tar);
- LASSERT(name != NULL);
- LASSERT(tar->ft_srv != NULL || tar->ft_exp != NULL);
+ LASSERT(name);
+ LASSERT(tar->ft_srv || tar->ft_exp);
if (fld->lcf_flags != LUSTRE_FLD_INIT) {
CERROR("%s: Attempt to add target %s (idx %llu) on fly - skip it\n",
@@ -236,7 +239,7 @@
}
target->ft_exp = tar->ft_exp;
- if (target->ft_exp != NULL)
+ if (target->ft_exp)
class_export_get(target->ft_exp);
target->ft_srv = tar->ft_srv;
target->ft_idx = tar->ft_idx;
@@ -264,7 +267,7 @@
list_del(&target->ft_chain);
spin_unlock(&fld->lcf_lock);
- if (target->ft_exp != NULL)
+ if (target->ft_exp)
class_export_put(target->ft_exp);
kfree(target);
@@ -326,8 +329,6 @@
int cache_size, cache_threshold;
int rc;
- LASSERT(fld != NULL);
-
snprintf(fld->lcf_name, sizeof(fld->lcf_name),
"cli-%s", prefix);
@@ -379,13 +380,13 @@
&fld->lcf_targets, ft_chain) {
fld->lcf_count--;
list_del(&target->ft_chain);
- if (target->ft_exp != NULL)
+ if (target->ft_exp)
class_export_put(target->ft_exp);
kfree(target);
}
spin_unlock(&fld->lcf_lock);
- if (fld->lcf_cache != NULL) {
+ if (fld->lcf_cache) {
if (!IS_ERR(fld->lcf_cache))
fld_cache_fini(fld->lcf_cache);
fld->lcf_cache = NULL;
@@ -402,12 +403,12 @@
int rc;
struct obd_import *imp;
- LASSERT(exp != NULL);
+ LASSERT(exp);
imp = class_exp2cliimp(exp);
req = ptlrpc_request_alloc_pack(imp, &RQF_FLD_QUERY, LUSTRE_MDS_VERSION,
FLD_QUERY);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
op = req_capsule_client_get(&req->rq_pill, &RMF_FLD_OPC);
@@ -436,7 +437,7 @@
goto out_req;
prange = req_capsule_server_get(&req->rq_pill, &RMF_FLD_MDFLD);
- if (prange == NULL) {
+ if (!prange) {
rc = -EFAULT;
goto out_req;
}
@@ -463,7 +464,7 @@
/* Can not find it in the cache */
target = fld_client_get_target(fld, seq);
- LASSERT(target != NULL);
+ LASSERT(target);
CDEBUG(D_INFO, "%s: Lookup fld entry (seq: %#llx) on target %s (idx %llu)\n",
fld->lcf_name, seq, fld_target_name(target), target->ft_idx);
diff --git a/drivers/staging/lustre/lustre/fld/lproc_fld.c b/drivers/staging/lustre/lustre/fld/lproc_fld.c
index 41ceaa8..da31281 100644
--- a/drivers/staging/lustre/lustre/fld/lproc_fld.c
+++ b/drivers/staging/lustre/lustre/fld/lproc_fld.c
@@ -60,8 +60,6 @@
struct lu_client_fld *fld = (struct lu_client_fld *)m->private;
struct lu_fld_target *target;
- LASSERT(fld != NULL);
-
spin_lock(&fld->lcf_lock);
list_for_each_entry(target,
&fld->lcf_targets, ft_chain)
@@ -76,8 +74,6 @@
{
struct lu_client_fld *fld = (struct lu_client_fld *)m->private;
- LASSERT(fld != NULL);
-
spin_lock(&fld->lcf_lock);
seq_printf(m, "%s\n", fld->lcf_hash->fh_name);
spin_unlock(&fld->lcf_lock);
@@ -102,9 +98,8 @@
return -EFAULT;
fld = ((struct seq_file *)file->private_data)->private;
- LASSERT(fld != NULL);
- for (i = 0; fld_hash[i].fh_name != NULL; i++) {
+ for (i = 0; fld_hash[i].fh_name; i++) {
if (count != strlen(fld_hash[i].fh_name))
continue;
@@ -114,7 +109,7 @@
}
}
- if (hash != NULL) {
+ if (hash) {
spin_lock(&fld->lcf_lock);
fld->lcf_hash = hash;
spin_unlock(&fld->lcf_lock);
@@ -132,8 +127,6 @@
{
struct lu_client_fld *fld = file->private_data;
- LASSERT(fld != NULL);
-
fld_cache_flush(fld->lcf_cache);
CDEBUG(D_INFO, "%s: Lookup cache is flushed\n", fld->lcf_name);
diff --git a/drivers/staging/lustre/lustre/include/cl_object.h b/drivers/staging/lustre/lustre/include/cl_object.h
index bd7acc2..2a77ea2f8 100644
--- a/drivers/staging/lustre/lustre/include/cl_object.h
+++ b/drivers/staging/lustre/lustre/include/cl_object.h
@@ -157,7 +157,8 @@
};
/** \addtogroup cl_object cl_object
- * @{ */
+ * @{
+ */
/**
* "Data attributes" of cl_object. Data attributes can be updated
* independently for a sub-object, and top-object's attributes are calculated
@@ -288,13 +289,14 @@
enum {
/** configure layout, set up a new stripe, must be called while
- * holding layout lock. */
+ * holding layout lock.
+ */
OBJECT_CONF_SET = 0,
/** invalidate the current stripe configuration due to losing
- * layout lock. */
+ * layout lock.
+ */
OBJECT_CONF_INVALIDATE = 1,
- /** wait for old layout to go away so that new layout can be
- * set up. */
+ /** wait for old layout to go away so that new layout can be set up. */
OBJECT_CONF_WAIT = 2
};
@@ -393,7 +395,8 @@
*/
struct cl_object_header {
/** Standard lu_object_header. cl_object::co_lu::lo_header points
- * here. */
+ * here.
+ */
struct lu_object_header coh_lu;
/** \name locks
* \todo XXX move locks below to the separate cache-lines, they are
@@ -464,7 +467,8 @@
#define CL_PAGE_EOF ((pgoff_t)~0ull)
/** \addtogroup cl_page cl_page
- * @{ */
+ * @{
+ */
/** \struct cl_page
* Layered client page.
@@ -687,12 +691,14 @@
enum cl_page_type {
/** Host page, the page is from the host inode which the cl_page
- * belongs to. */
+ * belongs to.
+ */
CPT_CACHEABLE = 1,
/** Transient page, the transient cl_page is used to bind a cl_page
* to vmpage which is not belonging to the same object of cl_page.
- * it is used in DirectIO, lockless IO and liblustre. */
+ * it is used in DirectIO, lockless IO and liblustre.
+ */
CPT_TRANSIENT,
};
@@ -728,7 +734,8 @@
/** Parent page, NULL for top-level page. Immutable after creation. */
struct cl_page *cp_parent;
/** Lower-layer page. NULL for bottommost page. Immutable after
- * creation. */
+ * creation.
+ */
struct cl_page *cp_child;
/**
* Page state. This field is const to avoid accidental update, it is
@@ -1126,7 +1133,8 @@
/** @} cl_page */
/** \addtogroup cl_lock cl_lock
- * @{ */
+ * @{
+ */
/** \struct cl_lock
*
* Extent locking on the client.
@@ -1641,7 +1649,8 @@
struct cl_lock_slice {
struct cl_lock *cls_lock;
/** Object slice corresponding to this lock slice. Immutable after
- * creation. */
+ * creation.
+ */
struct cl_object *cls_obj;
const struct cl_lock_operations *cls_ops;
/** Linkage into cl_lock::cll_layers. Immutable after creation. */
@@ -1885,7 +1894,8 @@
/** @} cl_page_list */
/** \addtogroup cl_io cl_io
- * @{ */
+ * @{
+ */
/** \struct cl_io
* I/O
*
@@ -2284,7 +2294,8 @@
/** discard all of dirty pages in a specific file range */
CL_FSYNC_DISCARD = 2,
/** start writeback and make sure they have reached storage before
- * return. OST_SYNC RPC must be issued and finished */
+ * return. OST_SYNC RPC must be issued and finished
+ */
CL_FSYNC_ALL = 3
};
@@ -2403,7 +2414,8 @@
/** @} cl_io */
/** \addtogroup cl_req cl_req
- * @{ */
+ * @{
+ */
/** \struct cl_req
* Transfer.
*
@@ -2582,7 +2594,8 @@
/** how many entities are in the cache right now */
CS_total,
/** how many entities in the cache are actively used (and cannot be
- * evicted) right now */
+ * evicted) right now
+ */
CS_busy,
/** how many entities were created at all */
CS_create,
@@ -2613,7 +2626,7 @@
* Statistical counters. Atomics do not scale, something better like
* per-cpu counters is needed.
*
- * These are exported as /proc/fs/lustre/llite/.../site
+ * These are exported as /sys/kernel/debug/lustre/llite/.../site
*
* When interpreting keep in mind that both sub-locks (and sub-pages)
* and top-locks (and top-pages) are accounted here.
@@ -2653,7 +2666,7 @@
static inline struct cl_device *lu2cl_dev(const struct lu_device *d)
{
- LASSERT(d == NULL || IS_ERR(d) || lu_device_is_cl(d));
+ LASSERT(!d || IS_ERR(d) || lu_device_is_cl(d));
return container_of0(d, struct cl_device, cd_lu_dev);
}
@@ -2664,7 +2677,7 @@
static inline struct cl_object *lu2cl(const struct lu_object *o)
{
- LASSERT(o == NULL || IS_ERR(o) || lu_device_is_cl(o->lo_dev));
+ LASSERT(!o || IS_ERR(o) || lu_device_is_cl(o->lo_dev));
return container_of0(o, struct cl_object, co_lu);
}
@@ -2681,7 +2694,7 @@
static inline struct cl_device *cl_object_device(const struct cl_object *o)
{
- LASSERT(o == NULL || IS_ERR(o) || lu_device_is_cl(o->co_lu.lo_dev));
+ LASSERT(!o || IS_ERR(o) || lu_device_is_cl(o->co_lu.lo_dev));
return container_of0(o->co_lu.lo_dev, struct cl_device, cd_lu_dev);
}
@@ -2725,7 +2738,8 @@
/** @} helpers */
/** \defgroup cl_object cl_object
- * @{ */
+ * @{
+ */
struct cl_object *cl_object_top (struct cl_object *o);
struct cl_object *cl_object_find(const struct lu_env *env, struct cl_device *cd,
const struct lu_fid *fid,
@@ -2770,7 +2784,8 @@
/** @} cl_object */
/** \defgroup cl_page cl_page
- * @{ */
+ * @{
+ */
enum {
CLP_GANG_OKAY = 0,
CLP_GANG_RESCHED,
@@ -2888,7 +2903,8 @@
/** @} cl_page */
/** \defgroup cl_lock cl_lock
- * @{ */
+ * @{
+ */
struct cl_lock *cl_lock_hold(const struct lu_env *env, const struct cl_io *io,
const struct cl_lock_descr *need,
@@ -2966,7 +2982,8 @@
*
* cl_use_try() NONE cl_lock_operations::clo_use() CLS_HELD
*
- * @{ */
+ * @{
+ */
int cl_wait (const struct lu_env *env, struct cl_lock *lock);
void cl_unuse (const struct lu_env *env, struct cl_lock *lock);
@@ -3019,7 +3036,8 @@
/** @} cl_lock */
/** \defgroup cl_io cl_io
- * @{ */
+ * @{
+ */
int cl_io_init (const struct lu_env *env, struct cl_io *io,
enum cl_io_type iot, struct cl_object *obj);
@@ -3094,7 +3112,8 @@
/** @} cl_io */
/** \defgroup cl_page_list cl_page_list
- * @{ */
+ * @{
+ */
/**
* Last page in the page list.
@@ -3137,7 +3156,8 @@
/** @} cl_page_list */
/** \defgroup cl_req cl_req
- * @{ */
+ * @{
+ */
struct cl_req *cl_req_alloc(const struct lu_env *env, struct cl_page *page,
enum cl_req_type crt, int nr_objects);
@@ -3214,7 +3234,8 @@
* - cl_env_reexit(cl_env_reenter had to be called priorly)
*
* \see lu_env, lu_context, lu_context_key
- * @{ */
+ * @{
+ */
struct cl_env_nest {
int cen_refcheck;
diff --git a/drivers/staging/lustre/lustre/include/lclient.h b/drivers/staging/lustre/lustre/include/lclient.h
index 36e7a67..5d839a9 100644
--- a/drivers/staging/lustre/lustre/include/lclient.h
+++ b/drivers/staging/lustre/lustre/include/lclient.h
@@ -127,7 +127,7 @@
struct ccc_thread_info *info;
info = lu_context_key_get(&env->le_ctx, &ccc_key);
- LASSERT(info != NULL);
+ LASSERT(info);
return info;
}
@@ -156,7 +156,7 @@
struct ccc_session *ses;
ses = lu_context_key_get(env->le_ses, &ccc_session_key);
- LASSERT(ses != NULL);
+ LASSERT(ses);
return ses;
}
@@ -383,7 +383,8 @@
*
* NB: If you find you have to use these interfaces for your new code, please
* think about it again. These interfaces may be removed in the future for
- * better layering. */
+ * better layering.
+ */
struct lov_stripe_md *lov_lsm_get(struct cl_object *clobj);
void lov_lsm_put(struct cl_object *clobj, struct lov_stripe_md *lsm);
int lov_read_and_clear_async_rc(struct cl_object *clob);
diff --git a/drivers/staging/lustre/lustre/include/linux/obd.h b/drivers/staging/lustre/lustre/include/linux/obd.h
index 468bc28..3907bf4 100644
--- a/drivers/staging/lustre/lustre/include/linux/obd.h
+++ b/drivers/staging/lustre/lustre/include/linux/obd.h
@@ -57,23 +57,23 @@
#define CLIENT_OBD_LIST_LOCK_DEBUG 1
-typedef struct {
+struct client_obd_lock {
spinlock_t lock;
unsigned long time;
struct task_struct *task;
const char *func;
int line;
-} client_obd_lock_t;
+};
-static inline void __client_obd_list_lock(client_obd_lock_t *lock,
+static inline void __client_obd_list_lock(struct client_obd_lock *lock,
const char *func, int line)
{
unsigned long cur = jiffies;
while (1) {
if (spin_trylock(&lock->lock)) {
- LASSERT(lock->task == NULL);
+ LASSERT(!lock->task);
lock->task = current;
lock->func = func;
lock->line = line;
@@ -85,7 +85,7 @@
time_before(lock->time + 5 * HZ, jiffies)) {
struct task_struct *task = lock->task;
- if (task == NULL)
+ if (!task)
continue;
LCONSOLE_WARN("%s:%d: lock %p was acquired by <%s:%d:%s:%d> for %lu seconds.\n",
@@ -106,20 +106,20 @@
#define client_obd_list_lock(lock) \
__client_obd_list_lock(lock, __func__, __LINE__)
-static inline void client_obd_list_unlock(client_obd_lock_t *lock)
+static inline void client_obd_list_unlock(struct client_obd_lock *lock)
{
- LASSERT(lock->task != NULL);
+ LASSERT(lock->task);
lock->task = NULL;
lock->time = jiffies;
spin_unlock(&lock->lock);
}
-static inline void client_obd_list_lock_init(client_obd_lock_t *lock)
+static inline void client_obd_list_lock_init(struct client_obd_lock *lock)
{
spin_lock_init(&lock->lock);
}
-static inline void client_obd_list_lock_done(client_obd_lock_t *lock)
+static inline void client_obd_list_lock_done(struct client_obd_lock *lock)
{}
#endif /* __LINUX_OBD_H */
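
__client_obd_list_lock() above is a debugging spinlock: it spins on spin_trylock(), records owner, function and line on acquisition, and once contention exceeds five seconds it reports who has been holding the lock and dumps that task's stack. A stripped-down sketch of the watchdog loop, with the reporting simplified to pr_warn():

    while (1) {
            if (spin_trylock(&l->lock)) {
                    l->task = current;      /* record the new owner */
                    l->time = jiffies;
                    break;
            }
            if (l->task && time_before(l->time + 5 * HZ, jiffies)) {
                    pr_warn("lock %p held by %s for %lus\n", l,
                            l->task->comm, (jiffies - l->time) / HZ);
                    l->time = jiffies;      /* crude re-arm of the report */
            }
    }
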
diff --git a/drivers/staging/lustre/lustre/include/lprocfs_status.h b/drivers/staging/lustre/lustre/include/lprocfs_status.h
index fb13094..0d98111 100644
--- a/drivers/staging/lustre/lustre/include/lprocfs_status.h
+++ b/drivers/staging/lustre/lustre/include/lprocfs_status.h
@@ -54,7 +54,7 @@
struct file_operations *fops;
void *data;
/**
- * /proc file mode.
+ * sysfs file mode.
*/
umode_t proc_mode;
};
@@ -175,7 +175,8 @@
enum lprocfs_stats_flags {
LPROCFS_STATS_FLAG_NONE = 0x0000, /* per cpu counter */
LPROCFS_STATS_FLAG_NOPERCPU = 0x0001, /* stats have no percpu
- * area and need locking */
+ * area and need locking
+ */
LPROCFS_STATS_FLAG_IRQ_SAFE = 0x0002, /* alloc need irq safe */
};
@@ -196,7 +197,8 @@
unsigned short ls_biggest_alloc_num;
enum lprocfs_stats_flags ls_flags;
/* Lock used when there are no percpu stats areas; For percpu stats,
- * it is used to protect ls_biggest_alloc_num change */
+ * it is used to protect ls_biggest_alloc_num change
+ */
spinlock_t ls_lock;
/* has ls_num of counter headers */
@@ -274,20 +276,7 @@
OPC_RANGE(OST));
} else if (opc < FLD_LAST_OPC) {
/* FLD opcode */
- return (opc - FLD_FIRST_OPC +
- OPC_RANGE(SEC) +
- OPC_RANGE(SEQ) +
- OPC_RANGE(QUOTA) +
- OPC_RANGE(LLOG) +
- OPC_RANGE(OBD) +
- OPC_RANGE(MGS) +
- OPC_RANGE(LDLM) +
- OPC_RANGE(MDS) +
- OPC_RANGE(OST));
- } else if (opc < UPDATE_LAST_OPC) {
- /* update opcode */
- return (opc - UPDATE_FIRST_OPC +
- OPC_RANGE(FLD) +
+ return (opc - FLD_FIRST_OPC +
OPC_RANGE(SEC) +
OPC_RANGE(SEQ) +
OPC_RANGE(QUOTA) +
@@ -312,8 +301,7 @@
OPC_RANGE(SEC) + \
OPC_RANGE(SEQ) + \
OPC_RANGE(SEC) + \
- OPC_RANGE(FLD) + \
- OPC_RANGE(UPDATE))
+ OPC_RANGE(FLD))
#define EXTRA_MAX_OPCODES ((PTLRPC_LAST_CNTR - PTLRPC_FIRST_CNTR) + \
OPC_RANGE(EXTRA))
@@ -407,7 +395,7 @@
} else {
unsigned int cpuid = get_cpu();
- if (unlikely(stats->ls_percpu[cpuid] == NULL)) {
+ if (unlikely(!stats->ls_percpu[cpuid])) {
rc = lprocfs_stats_alloc_one(stats, cpuid);
if (rc < 0) {
put_cpu();
@@ -521,11 +509,11 @@
unsigned long flags = 0;
__u64 ret = 0;
- LASSERT(stats != NULL);
+ LASSERT(stats);
num_cpu = lprocfs_stats_lock(stats, LPROCFS_GET_NUM_CPU, &flags);
for (i = 0; i < num_cpu; i++) {
- if (stats->ls_percpu[i] == NULL)
+ if (!stats->ls_percpu[i])
continue;
ret += lprocfs_read_helper(
lprocfs_stats_counter_get(stats, i, idx),
@@ -625,9 +613,10 @@
int lprocfs_seq_release(struct inode *, struct file *);
/* write the name##_seq_show function, call LPROC_SEQ_FOPS_RO for read-only
- proc entries; otherwise, you will define name##_seq_write function also for
- a read-write proc entry, and then call LPROC_SEQ_SEQ instead. Finally,
- call ldebugfs_obd_seq_create(obd, filename, 0444, &name#_fops, data); */
+ * proc entries; otherwise, you will define name##_seq_write function also for
+ * a read-write proc entry, and then call LPROC_SEQ_SEQ instead. Finally,
+ * call ldebugfs_obd_seq_create(obd, filename, 0444, &name#_fops, data);
+ */
#define __LPROC_SEQ_FOPS(name, custom_seq_write) \
static int name##_single_open(struct inode *inode, struct file *file) \
{ \
diff --git a/drivers/staging/lustre/lustre/include/lu_object.h b/drivers/staging/lustre/lustre/include/lu_object.h
index 0b22e5e..b5088b1 100644
--- a/drivers/staging/lustre/lustre/include/lu_object.h
+++ b/drivers/staging/lustre/lustre/include/lu_object.h
@@ -164,11 +164,12 @@
/**
* For lu_object_conf flags
*/
-typedef enum {
+enum loc_flags {
/* This is a new object to be allocated, or the file
- * corresponding to the object does not exists. */
+ * corresponding to the object does not exists.
+ */
LOC_F_NEW = 0x00000001,
-} loc_flags_t;
+};
/**
* Object configuration, describing particulars of object being created. On
@@ -179,7 +180,7 @@
/**
* Some hints for obj find and alloc.
*/
- loc_flags_t loc_flags;
+ enum loc_flags loc_flags;
};
/**
@@ -392,7 +393,7 @@
static inline int lu_device_is_md(const struct lu_device *d)
{
- return ergo(d != NULL, d->ld_type->ldt_tags & LU_DEVICE_MD);
+ return ergo(d, d->ld_type->ldt_tags & LU_DEVICE_MD);
}
/**
@@ -895,7 +896,8 @@
/** @} helpers */
/** \name lu_context
- * @{ */
+ * @{
+ */
/** For lu_context health-checks */
enum lu_context_state {
@@ -1119,7 +1121,7 @@
CLASSERT(PAGE_CACHE_SIZE >= sizeof (*value)); \
\
value = kzalloc(sizeof(*value), GFP_NOFS); \
- if (value == NULL) \
+ if (!value) \
value = ERR_PTR(-ENOMEM); \
\
return value; \
@@ -1174,7 +1176,7 @@
do { \
LU_CONTEXT_KEY_INIT(key); \
key = va_arg(args, struct lu_context_key *); \
- } while (key != NULL); \
+ } while (key); \
va_end(args); \
}
diff --git a/drivers/staging/lustre/lustre/include/lu_ref.h b/drivers/staging/lustre/lustre/include/lu_ref.h
index 97cd157..f7dfd83 100644
--- a/drivers/staging/lustre/lustre/include/lu_ref.h
+++ b/drivers/staging/lustre/lustre/include/lu_ref.h
@@ -17,10 +17,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with Lustre; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
- *
*/
#ifndef __LUSTRE_LU_REF_H
diff --git a/drivers/staging/lustre/lustre/include/lustre/ll_fiemap.h b/drivers/staging/lustre/lustre/include/lustre/ll_fiemap.h
index 09088f4..07d45de 100644
--- a/drivers/staging/lustre/lustre/include/lustre/ll_fiemap.h
+++ b/drivers/staging/lustre/lustre/include/lustre/ll_fiemap.h
@@ -47,9 +47,11 @@
struct ll_fiemap_extent {
__u64 fe_logical; /* logical offset in bytes for the start of
- * the extent from the beginning of the file */
+ * the extent from the beginning of the file
+ */
__u64 fe_physical; /* physical offset in bytes for the start
- * of the extent from the beginning of the disk */
+ * of the extent from the beginning of the disk
+ */
__u64 fe_length; /* length in bytes for this extent */
__u64 fe_reserved64[2];
__u32 fe_flags; /* FIEMAP_EXTENT_* flags for this extent */
@@ -59,9 +61,11 @@
struct ll_user_fiemap {
__u64 fm_start; /* logical offset (inclusive) at
- * which to start mapping (in) */
+ * which to start mapping (in)
+ */
__u64 fm_length; /* logical length of mapping which
- * userspace wants (in) */
+ * userspace wants (in)
+ */
__u32 fm_flags; /* FIEMAP_FLAG_* flags for request (in/out) */
__u32 fm_mapped_extents;/* number of extents that were mapped (out) */
__u32 fm_extent_count; /* size of fm_extents array (in) */
@@ -71,28 +75,38 @@
#define FIEMAP_MAX_OFFSET (~0ULL)
-#define FIEMAP_FLAG_SYNC 0x00000001 /* sync file data before map */
-#define FIEMAP_FLAG_XATTR 0x00000002 /* map extended attribute tree */
-
-#define FIEMAP_EXTENT_LAST 0x00000001 /* Last extent in file. */
-#define FIEMAP_EXTENT_UNKNOWN 0x00000002 /* Data location unknown. */
-#define FIEMAP_EXTENT_DELALLOC 0x00000004 /* Location still pending.
- * Sets EXTENT_UNKNOWN. */
-#define FIEMAP_EXTENT_ENCODED 0x00000008 /* Data can not be read
- * while fs is unmounted */
-#define FIEMAP_EXTENT_DATA_ENCRYPTED 0x00000080 /* Data is encrypted by fs.
- * Sets EXTENT_NO_DIRECT. */
+#define FIEMAP_FLAG_SYNC 0x00000001 /* sync file data before
+ * map
+ */
+#define FIEMAP_FLAG_XATTR 0x00000002 /* map extended attribute
+ * tree
+ */
+#define FIEMAP_EXTENT_LAST 0x00000001 /* Last extent in file. */
+#define FIEMAP_EXTENT_UNKNOWN 0x00000002 /* Data location unknown. */
+#define FIEMAP_EXTENT_DELALLOC 0x00000004 /* Location still pending.
+ * Sets EXTENT_UNKNOWN.
+ */
+#define FIEMAP_EXTENT_ENCODED 0x00000008 /* Data can not be read
+ * while fs is unmounted
+ */
+#define FIEMAP_EXTENT_DATA_ENCRYPTED 0x00000080 /* Data is encrypted by fs.
+ * Sets EXTENT_NO_DIRECT.
+ */
#define FIEMAP_EXTENT_NOT_ALIGNED 0x00000100 /* Extent offsets may not be
- * block aligned. */
+ * block aligned.
+ */
#define FIEMAP_EXTENT_DATA_INLINE 0x00000200 /* Data mixed with metadata.
* Sets EXTENT_NOT_ALIGNED.*/
-#define FIEMAP_EXTENT_DATA_TAIL 0x00000400 /* Multiple files in block.
- * Sets EXTENT_NOT_ALIGNED.*/
-#define FIEMAP_EXTENT_UNWRITTEN 0x00000800 /* Space allocated, but
- * no data (i.e. zero). */
-#define FIEMAP_EXTENT_MERGED 0x00001000 /* File does not natively
+#define FIEMAP_EXTENT_DATA_TAIL 0x00000400 /* Multiple files in block.
+ * Sets EXTENT_NOT_ALIGNED.
+ */
+#define FIEMAP_EXTENT_UNWRITTEN 0x00000800 /* Space allocated, but
+ * no data (i.e. zero).
+ */
+#define FIEMAP_EXTENT_MERGED 0x00001000 /* File does not natively
* support extents. Result
- * merged for efficiency. */
+ * merged for efficiency.
+ */
static inline size_t fiemap_count_to_size(size_t extent_count)
{
@@ -114,7 +128,8 @@
/* Lustre specific flags - use a high bit, don't conflict with upstream flag */
#define FIEMAP_EXTENT_NO_DIRECT 0x40000000 /* Data mapping undefined */
-#define FIEMAP_EXTENT_NET 0x80000000 /* Data stored remotely.
- * Sets NO_DIRECT flag */
+#define FIEMAP_EXTENT_NET 0x80000000 /* Data stored remotely.
+ * Sets NO_DIRECT flag
+ */
#endif /* _LUSTRE_FIEMAP_H */
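The rewrapped comments above are cosmetic; the on-wire layout is unchanged, and a FIEMAP buffer is still sized as the fixed header plus fm_extent_count extent records, which is all fiemap_count_to_size() computes. A userspace sketch of the same arithmetic against the equivalent upstream structures (illustrative, not part of this patch):

#include <stdio.h>
#include <stdlib.h>
#include <linux/fiemap.h>	/* struct fiemap, struct fiemap_extent */

/* same shape as fiemap_count_to_size(): header plus n extents */
static size_t fiemap_size_for(size_t extent_count)
{
	return sizeof(struct fiemap) +
	       extent_count * sizeof(struct fiemap_extent);
}

int main(void)
{
	struct fiemap *fm = calloc(1, fiemap_size_for(32));

	if (!fm)
		return 1;
	fm->fm_start = 0;
	fm->fm_length = ~0ULL;		/* FIEMAP_MAX_OFFSET: map the whole file */
	fm->fm_extent_count = 32;
	printf("ioctl buffer: %zu bytes\n", fiemap_size_for(32));
	free(fm);
	return 0;
}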
diff --git a/drivers/staging/lustre/lustre/include/lustre/lustre_build_version.h b/drivers/staging/lustre/lustre/include/lustre/lustre_build_version.h
deleted file mode 100644
index 93a3d7d..0000000
--- a/drivers/staging/lustre/lustre/include/lustre/lustre_build_version.h
+++ /dev/null
@@ -1,2 +0,0 @@
-#define BUILD_VERSION "v2_3_64_0-g6e62c21-CHANGED-3.9.0"
-#define LUSTRE_RELEASE 3.9.0_g6e62c21
diff --git a/drivers/staging/lustre/lustre/include/lustre/lustre_idl.h b/drivers/staging/lustre/lustre/include/lustre/lustre_idl.h
index b064b58..284affb 100644
--- a/drivers/staging/lustre/lustre/include/lustre/lustre_idl.h
+++ b/drivers/staging/lustre/lustre/include/lustre/lustre_idl.h
@@ -113,25 +113,25 @@
#define CONNMGR_REQUEST_PORTAL 1
#define CONNMGR_REPLY_PORTAL 2
-//#define OSC_REQUEST_PORTAL 3
+/*#define OSC_REQUEST_PORTAL 3 */
#define OSC_REPLY_PORTAL 4
-//#define OSC_BULK_PORTAL 5
+/*#define OSC_BULK_PORTAL 5 */
#define OST_IO_PORTAL 6
#define OST_CREATE_PORTAL 7
#define OST_BULK_PORTAL 8
-//#define MDC_REQUEST_PORTAL 9
+/*#define MDC_REQUEST_PORTAL 9 */
#define MDC_REPLY_PORTAL 10
-//#define MDC_BULK_PORTAL 11
+/*#define MDC_BULK_PORTAL 11 */
#define MDS_REQUEST_PORTAL 12
-//#define MDS_REPLY_PORTAL 13
+/*#define MDS_REPLY_PORTAL 13 */
#define MDS_BULK_PORTAL 14
#define LDLM_CB_REQUEST_PORTAL 15
#define LDLM_CB_REPLY_PORTAL 16
#define LDLM_CANCEL_REQUEST_PORTAL 17
#define LDLM_CANCEL_REPLY_PORTAL 18
-//#define PTLBD_REQUEST_PORTAL 19
-//#define PTLBD_REPLY_PORTAL 20
-//#define PTLBD_BULK_PORTAL 21
+/*#define PTLBD_REQUEST_PORTAL 19 */
+/*#define PTLBD_REPLY_PORTAL 20 */
+/*#define PTLBD_BULK_PORTAL 21 */
#define MDS_SETATTR_PORTAL 22
#define MDS_READPAGE_PORTAL 23
#define OUT_PORTAL 24
@@ -146,7 +146,9 @@
#define SEQ_CONTROLLER_PORTAL 32
#define MGS_BULK_PORTAL 33
-/* Portal 63 is reserved for the Cray Inc DVS - nic@cray.com, roe@cray.com, n8851@cray.com */
+/* Portal 63 is reserved for the Cray Inc DVS - nic@cray.com, roe@cray.com,
+ * n8851@cray.com
+ */
/* packet types */
#define PTL_RPC_MSG_REQUEST 4711
@@ -295,7 +297,8 @@
fld_range_is_mdt(range) ? "mdt" : "ost"
/** \defgroup lu_fid lu_fid
- * @{ */
+ * @{
+ */
/**
* Flags for lustre_mdt_attrs::lma_compat and lustre_mdt_attrs::lma_incompat.
@@ -307,7 +310,8 @@
LMAC_SOM = 0x00000002,
LMAC_NOT_IN_OI = 0x00000004, /* the object does NOT need OI mapping */
LMAC_FID_ON_OST = 0x00000008, /* For OST-object, its OI mapping is
- * under /O/<seq>/d<x>. */
+ * under /O/<seq>/d<x>.
+ */
};
/**
@@ -319,7 +323,8 @@
LMAI_RELEASED = 0x00000001, /* file is released */
LMAI_AGENT = 0x00000002, /* agent inode */
LMAI_REMOTE_PARENT = 0x00000004, /* the parent of the object
- is on the remote MDT */
+ * is on the remote MDT
+ */
};
#define LMA_INCOMPAT_SUPP (LMAI_AGENT | LMAI_REMOTE_PARENT)
@@ -395,12 +400,14 @@
FID_SEQ_LOCAL_FILE = 0x200000001ULL,
FID_SEQ_DOT_LUSTRE = 0x200000002ULL,
/* sequence is used for local named objects FIDs generated
- * by local_object_storage library */
+ * by local_object_storage library
+ */
FID_SEQ_LOCAL_NAME = 0x200000003ULL,
/* Because current FLD will only cache the fid sequence, instead
* of oid on the client side, if the FID needs to be exposed to
* clients sides, it needs to make sure all of fids under one
- * sequence will be located in one MDT. */
+ * sequence will be located in one MDT.
+ */
FID_SEQ_SPECIAL = 0x200000004ULL,
FID_SEQ_QUOTA = 0x200000005ULL,
FID_SEQ_QUOTA_GLB = 0x200000006ULL,
@@ -601,7 +608,8 @@
oi->oi_fid.f_seq = seq;
/* Note: if f_oid + f_ver is zero, we need init it
* to be 1, otherwise, ostid_seq will treat this
- * as old ostid (oi_seq == 0) */
+ * as old ostid (oi_seq == 0)
+ */
if (oi->oi_fid.f_oid == 0 && oi->oi_fid.f_ver == 0)
oi->oi_fid.f_oid = LUSTRE_FID_INIT_OID;
}
@@ -689,11 +697,12 @@
* that we map into the IDIF namespace. It allows up to 2^48
* objects per OST, as this is the object namespace that has
* been in production for years. This can handle create rates
- * of 1M objects/s/OST for 9 years, or combinations thereof. */
+ * of 1M objects/s/OST for 9 years, or combinations thereof.
+ */
if (ostid_id(ostid) >= IDIF_MAX_OID) {
- CERROR("bad MDT0 id, "DOSTID" ost_idx:%u\n",
- POSTID(ostid), ost_idx);
- return -EBADF;
+ CERROR("bad MDT0 id, " DOSTID " ost_idx:%u\n",
+ POSTID(ostid), ost_idx);
+ return -EBADF;
}
fid->f_seq = fid_idif_seq(ostid_id(ostid), ost_idx);
/* truncate to 32 bits by assignment */
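The nine-year figure in the rewrapped comment is easy to verify: IDIF carries a 48-bit object id, and 2^48 ids consumed at 1M creates per second per OST last just under nine years. A quick standalone check:

#include <stdio.h>

int main(void)
{
	unsigned long long max_oid = 1ULL << 48;	/* IDIF object id space */
	double rate = 1e6;				/* creates/s/OST */
	double secs_per_year = 365.25 * 24 * 3600;

	/* prints roughly 8.9 years */
	printf("%.1f years\n", (double)max_oid / rate / secs_per_year);
	return 0;
}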
@@ -704,7 +713,8 @@
/* This is either an IDIF object, which identifies objects across
* all OSTs, or a regular FID. The IDIF namespace maps legacy
* OST objects into the FID namespace. In both cases, we just
- * pass the FID through, no conversion needed. */
+ * pass the FID through, no conversion needed.
+ */
if (ostid->oi_fid.f_ver != 0) {
CERROR("bad MDT0 id, "DOSTID" ost_idx:%u\n",
POSTID(ostid), ost_idx);
@@ -807,7 +817,7 @@
static inline int fid_is_sane(const struct lu_fid *fid)
{
- return fid != NULL &&
+ return fid &&
((fid_seq(fid) >= FID_SEQ_START && fid_ver(fid) == 0) ||
fid_is_igif(fid) || fid_is_idif(fid) ||
fid_seq_is_rsvd(fid_seq(fid)));
@@ -868,7 +878,8 @@
/** @} lu_fid */
/** \defgroup lu_dir lu_dir
- * @{ */
+ * @{
+ */
/**
* Enumeration of possible directory entry attributes.
@@ -880,24 +891,8 @@
LUDA_FID = 0x0001,
LUDA_TYPE = 0x0002,
LUDA_64BITHASH = 0x0004,
-
- /* The following attrs are used for MDT internal only,
- * not visible to client */
-
- /* Verify the dirent consistency */
- LUDA_VERIFY = 0x8000,
- /* Only check but not repair the dirent inconsistency */
- LUDA_VERIFY_DRYRUN = 0x4000,
- /* The dirent has been repaired, or to be repaired (dryrun). */
- LUDA_REPAIR = 0x2000,
- /* The system is upgraded, has beed or to be repaired (dryrun). */
- LUDA_UPGRADE = 0x1000,
- /* Ignore this record, go to next directly. */
- LUDA_IGNORE = 0x0800,
};
-#define LU_DIRENT_ATTRS_MASK 0xf800
-
/**
* Layout of readdir pages, as transmitted on wire.
*/
@@ -1128,7 +1123,8 @@
__u32 pb_conn_cnt;
__u32 pb_timeout; /* for req, the deadline, for rep, the service est */
__u32 pb_service_time; /* for rep, actual service time, also used for
- net_latency of req */
+ * net_latency of req
+ */
__u32 pb_limit;
__u64 pb_slv;
/* VBR: pre-versions */
@@ -1174,7 +1170,8 @@
/* #define MSG_AT_SUPPORT 0x0008
* This was used in early prototypes of adaptive timeouts, and while there
* shouldn't be any users of that code there also isn't a need for using this
- * bits. Defer usage until at least 1.10 to avoid potential conflict. */
+ * bits. Defer usage until at least 1.10 to avoid potential conflict.
+ */
#define MSG_DELAY_REPLAY 0x0010
#define MSG_VERSION_REPLAY 0x0020
#define MSG_REQ_REPLAY_DONE 0x0040
@@ -1187,7 +1184,7 @@
#define MSG_CONNECT_RECOVERING 0x00000001
#define MSG_CONNECT_RECONNECT 0x00000002
#define MSG_CONNECT_REPLAYABLE 0x00000004
-//#define MSG_CONNECT_PEER 0x8
+/*#define MSG_CONNECT_PEER 0x8 */
#define MSG_CONNECT_LIBCLIENT 0x00000010
#define MSG_CONNECT_INITIAL 0x00000020
#define MSG_CONNECT_ASYNC 0x00000040
@@ -1195,60 +1192,65 @@
#define MSG_CONNECT_TRANSNO 0x00000100 /* report transno */
/* Connect flags */
-#define OBD_CONNECT_RDONLY 0x1ULL /*client has read-only access*/
-#define OBD_CONNECT_INDEX 0x2ULL /*connect specific LOV idx */
-#define OBD_CONNECT_MDS 0x4ULL /*connect from MDT to OST */
-#define OBD_CONNECT_GRANT 0x8ULL /*OSC gets grant at connect */
-#define OBD_CONNECT_SRVLOCK 0x10ULL /*server takes locks for cli */
-#define OBD_CONNECT_VERSION 0x20ULL /*Lustre versions in ocd */
-#define OBD_CONNECT_REQPORTAL 0x40ULL /*Separate non-IO req portal */
-#define OBD_CONNECT_ACL 0x80ULL /*access control lists */
-#define OBD_CONNECT_XATTR 0x100ULL /*client use extended attr */
+#define OBD_CONNECT_RDONLY 0x1ULL /*client has read-only access*/
+#define OBD_CONNECT_INDEX 0x2ULL /*connect specific LOV idx */
+#define OBD_CONNECT_MDS 0x4ULL /*connect from MDT to OST */
+#define OBD_CONNECT_GRANT 0x8ULL /*OSC gets grant at connect */
+#define OBD_CONNECT_SRVLOCK 0x10ULL /*server takes locks for cli */
+#define OBD_CONNECT_VERSION 0x20ULL /*Lustre versions in ocd */
+#define OBD_CONNECT_REQPORTAL 0x40ULL /*Separate non-IO req portal */
+#define OBD_CONNECT_ACL 0x80ULL /*access control lists */
+#define OBD_CONNECT_XATTR 0x100ULL /*client use extended attr */
#define OBD_CONNECT_CROW 0x200ULL /*MDS+OST create obj on write*/
-#define OBD_CONNECT_TRUNCLOCK 0x400ULL /*locks on server for punch */
-#define OBD_CONNECT_TRANSNO 0x800ULL /*replay sends init transno */
-#define OBD_CONNECT_IBITS 0x1000ULL /*support for inodebits locks*/
+#define OBD_CONNECT_TRUNCLOCK 0x400ULL /*locks on server for punch */
+#define OBD_CONNECT_TRANSNO 0x800ULL /*replay sends init transno */
+#define OBD_CONNECT_IBITS 0x1000ULL /*support for inodebits locks*/
#define OBD_CONNECT_JOIN 0x2000ULL /*files can be concatenated.
*We do not support JOIN FILE
*anymore, reserve this flags
*just for preventing such bit
- *to be reused.*/
-#define OBD_CONNECT_ATTRFID 0x4000ULL /*Server can GetAttr By Fid*/
-#define OBD_CONNECT_NODEVOH 0x8000ULL /*No open hndl on specl nodes*/
-#define OBD_CONNECT_RMT_CLIENT 0x10000ULL /*Remote client */
+ *to be reused.
+ */
+#define OBD_CONNECT_ATTRFID 0x4000ULL /*Server can GetAttr By Fid*/
+#define OBD_CONNECT_NODEVOH 0x8000ULL /*No open hndl on specl nodes*/
+#define OBD_CONNECT_RMT_CLIENT 0x10000ULL /*Remote client */
#define OBD_CONNECT_RMT_CLIENT_FORCE 0x20000ULL /*Remote client by force */
-#define OBD_CONNECT_BRW_SIZE 0x40000ULL /*Max bytes per rpc */
-#define OBD_CONNECT_QUOTA64 0x80000ULL /*Not used since 2.4 */
-#define OBD_CONNECT_MDS_CAPA 0x100000ULL /*MDS capability */
-#define OBD_CONNECT_OSS_CAPA 0x200000ULL /*OSS capability */
-#define OBD_CONNECT_CANCELSET 0x400000ULL /*Early batched cancels. */
-#define OBD_CONNECT_SOM 0x800000ULL /*Size on MDS */
-#define OBD_CONNECT_AT 0x1000000ULL /*client uses AT */
+#define OBD_CONNECT_BRW_SIZE 0x40000ULL /*Max bytes per rpc */
+#define OBD_CONNECT_QUOTA64 0x80000ULL /*Not used since 2.4 */
+#define OBD_CONNECT_MDS_CAPA 0x100000ULL /*MDS capability */
+#define OBD_CONNECT_OSS_CAPA 0x200000ULL /*OSS capability */
+#define OBD_CONNECT_CANCELSET 0x400000ULL /*Early batched cancels. */
+#define OBD_CONNECT_SOM 0x800000ULL /*Size on MDS */
+#define OBD_CONNECT_AT 0x1000000ULL /*client uses AT */
#define OBD_CONNECT_LRU_RESIZE 0x2000000ULL /*LRU resize feature. */
-#define OBD_CONNECT_MDS_MDS 0x4000000ULL /*MDS-MDS connection */
+#define OBD_CONNECT_MDS_MDS 0x4000000ULL /*MDS-MDS connection */
#define OBD_CONNECT_REAL 0x8000000ULL /*real connection */
#define OBD_CONNECT_CHANGE_QS 0x10000000ULL /*Not used since 2.4 */
-#define OBD_CONNECT_CKSUM 0x20000000ULL /*support several cksum algos*/
-#define OBD_CONNECT_FID 0x40000000ULL /*FID is supported by server */
-#define OBD_CONNECT_VBR 0x80000000ULL /*version based recovery */
-#define OBD_CONNECT_LOV_V3 0x100000000ULL /*client supports LOV v3 EA */
+#define OBD_CONNECT_CKSUM 0x20000000ULL /*support several cksum algos*/
+#define OBD_CONNECT_FID 0x40000000ULL /*FID is supported by server */
+#define OBD_CONNECT_VBR 0x80000000ULL /*version based recovery */
+#define OBD_CONNECT_LOV_V3 0x100000000ULL /*client supports LOV v3 EA */
#define OBD_CONNECT_GRANT_SHRINK 0x200000000ULL /* support grant shrink */
#define OBD_CONNECT_SKIP_ORPHAN 0x400000000ULL /* don't reuse orphan objids */
#define OBD_CONNECT_MAX_EASIZE 0x800000000ULL /* preserved for large EA */
#define OBD_CONNECT_FULL20 0x1000000000ULL /* it is 2.0 client */
#define OBD_CONNECT_LAYOUTLOCK 0x2000000000ULL /* client uses layout lock */
#define OBD_CONNECT_64BITHASH 0x4000000000ULL /* client supports 64-bits
- * directory hash */
+ * directory hash
+ */
#define OBD_CONNECT_MAXBYTES 0x8000000000ULL /* max stripe size */
#define OBD_CONNECT_IMP_RECOV 0x10000000000ULL /* imp recovery support */
#define OBD_CONNECT_JOBSTATS 0x20000000000ULL /* jobid in ptlrpc_body */
#define OBD_CONNECT_UMASK 0x40000000000ULL /* create uses client umask */
#define OBD_CONNECT_EINPROGRESS 0x80000000000ULL /* client handles -EINPROGRESS
- * RPC error properly */
+ * RPC error properly
+ */
#define OBD_CONNECT_GRANT_PARAM 0x100000000000ULL/* extra grant params used for
- * finer space reservation */
+ * finer space reservation
+ */
#define OBD_CONNECT_FLOCK_OWNER 0x200000000000ULL /* for the fixed 1.8
- * policy and 2.x server */
+ * policy and 2.x server
+ */
#define OBD_CONNECT_LVB_TYPE 0x400000000000ULL /* variable type of LVB */
#define OBD_CONNECT_NANOSEC_TIME 0x800000000000ULL /* nanosecond timestamps */
#define OBD_CONNECT_LIGHTWEIGHT 0x1000000000000ULL/* lightweight connection */
@@ -1264,61 +1266,19 @@
* submit a small patch against EVERY branch that ONLY adds the new flag,
* updates obd_connect_names[] for lprocfs_rd_connect_flags(), adds the
* flag to check_obd_connect_data(), and updates wiretests accordingly, so it
- * can be approved and landed easily to reserve the flag for future use. */
+ * can be approved and landed easily to reserve the flag for future use.
+ */
/* The MNE_SWAB flag is overloading the MDS_MDS bit only for the MGS
* connection. It is a temporary bug fix for Imperative Recovery interop
* between 2.2 and 2.3 x86/ppc nodes, and can be removed when interop for
- * 2.2 clients/servers is no longer needed. LU-1252/LU-1644. */
+ * 2.2 clients/servers is no longer needed. LU-1252/LU-1644.
+ */
#define OBD_CONNECT_MNE_SWAB OBD_CONNECT_MDS_MDS
#define OCD_HAS_FLAG(ocd, flg) \
(!!((ocd)->ocd_connect_flags & OBD_CONNECT_##flg))
-#define LRU_RESIZE_CONNECT_FLAG OBD_CONNECT_LRU_RESIZE
-
-#define MDT_CONNECT_SUPPORTED (OBD_CONNECT_RDONLY | OBD_CONNECT_VERSION | \
- OBD_CONNECT_ACL | OBD_CONNECT_XATTR | \
- OBD_CONNECT_IBITS | \
- OBD_CONNECT_NODEVOH | OBD_CONNECT_ATTRFID | \
- OBD_CONNECT_CANCELSET | OBD_CONNECT_AT | \
- OBD_CONNECT_RMT_CLIENT | \
- OBD_CONNECT_RMT_CLIENT_FORCE | \
- OBD_CONNECT_BRW_SIZE | OBD_CONNECT_MDS_CAPA | \
- OBD_CONNECT_OSS_CAPA | OBD_CONNECT_MDS_MDS | \
- OBD_CONNECT_FID | LRU_RESIZE_CONNECT_FLAG | \
- OBD_CONNECT_VBR | OBD_CONNECT_LOV_V3 | \
- OBD_CONNECT_SOM | OBD_CONNECT_FULL20 | \
- OBD_CONNECT_64BITHASH | OBD_CONNECT_JOBSTATS | \
- OBD_CONNECT_EINPROGRESS | \
- OBD_CONNECT_LIGHTWEIGHT | OBD_CONNECT_UMASK | \
- OBD_CONNECT_LVB_TYPE | OBD_CONNECT_LAYOUTLOCK |\
- OBD_CONNECT_PINGLESS | OBD_CONNECT_MAX_EASIZE |\
- OBD_CONNECT_FLOCK_DEAD | \
- OBD_CONNECT_DISP_STRIPE)
-
-#define OST_CONNECT_SUPPORTED (OBD_CONNECT_SRVLOCK | OBD_CONNECT_GRANT | \
- OBD_CONNECT_REQPORTAL | OBD_CONNECT_VERSION | \
- OBD_CONNECT_TRUNCLOCK | OBD_CONNECT_INDEX | \
- OBD_CONNECT_BRW_SIZE | OBD_CONNECT_OSS_CAPA | \
- OBD_CONNECT_CANCELSET | OBD_CONNECT_AT | \
- LRU_RESIZE_CONNECT_FLAG | OBD_CONNECT_CKSUM | \
- OBD_CONNECT_RMT_CLIENT | \
- OBD_CONNECT_RMT_CLIENT_FORCE | OBD_CONNECT_VBR | \
- OBD_CONNECT_MDS | OBD_CONNECT_SKIP_ORPHAN | \
- OBD_CONNECT_GRANT_SHRINK | OBD_CONNECT_FULL20 | \
- OBD_CONNECT_64BITHASH | OBD_CONNECT_MAXBYTES | \
- OBD_CONNECT_MAX_EASIZE | \
- OBD_CONNECT_EINPROGRESS | \
- OBD_CONNECT_JOBSTATS | \
- OBD_CONNECT_LIGHTWEIGHT | OBD_CONNECT_LVB_TYPE|\
- OBD_CONNECT_LAYOUTLOCK | OBD_CONNECT_FID | \
- OBD_CONNECT_PINGLESS)
-#define ECHO_CONNECT_SUPPORTED (0)
-#define MGS_CONNECT_SUPPORTED (OBD_CONNECT_VERSION | OBD_CONNECT_AT | \
- OBD_CONNECT_FULL20 | OBD_CONNECT_IMP_RECOV | \
- OBD_CONNECT_MNE_SWAB | OBD_CONNECT_PINGLESS)
-
/* Features required for this version of the client to work with server */
#define CLIENT_CONNECT_MDT_REQD (OBD_CONNECT_IBITS | OBD_CONNECT_FID | \
OBD_CONNECT_FULL20)
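With the MDT/OST/ECHO/MGS_CONNECT_SUPPORTED masks removed above, individual feature tests go through the surviving OCD_HAS_FLAG() macro. A standalone mirror of that pattern (the DEMO_* names are made up; the real macro pastes OBD_CONNECT_ onto the flag name the same way):

#include <assert.h>

struct demo_ocd { unsigned long long ocd_connect_flags; };

#define DEMO_CONNECT_JOBSTATS	0x20000000000ULL
#define DEMO_HAS_FLAG(ocd, flg) \
	(!!((ocd)->ocd_connect_flags & DEMO_CONNECT_##flg))

int main(void)
{
	struct demo_ocd ocd = { .ocd_connect_flags = DEMO_CONNECT_JOBSTATS };

	assert(DEMO_HAS_FLAG(&ocd, JOBSTATS));
	return 0;
}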
@@ -1334,7 +1294,8 @@
/* This structure is used for both request and reply.
*
* If we eventually have separate connect data for different types, which we
- * almost certainly will, then perhaps we stick a union in here. */
+ * almost certainly will, then perhaps we stick a union in here.
+ */
struct obd_connect_data_v1 {
__u64 ocd_connect_flags; /* OBD_CONNECT_* per above */
__u32 ocd_version; /* lustre release version number */
@@ -1364,7 +1325,7 @@
__u8 ocd_blocksize; /* log2 of the backend filesystem blocksize */
__u8 ocd_inodespace; /* log2 of the per-inode space consumption */
__u16 ocd_grant_extent; /* per-extent grant overhead, in 1K blocks */
- __u32 ocd_unused; /* also fix lustre_swab_connect */
+ __u32 ocd_unused; /* also fix lustre_swab_connect */
__u64 ocd_transno; /* first transno from client to be replayed */
__u32 ocd_group; /* MDS group on OST */
__u32 ocd_cksum_types; /* supported checksum algorithms */
@@ -1374,7 +1335,8 @@
/* Fields after ocd_maxbytes are only accessible by the receiver
* if the corresponding flag in ocd_connect_flags is set. Accessing
* any field after ocd_maxbytes on the receiver without a valid flag
- * may result in out-of-bound memory access and kernel oops. */
+ * may result in out-of-bound memory access and kernel oops.
+ */
__u64 padding1; /* added 2.1.0. also fix lustre_swab_connect */
__u64 padding2; /* added 2.1.0. also fix lustre_swab_connect */
__u64 padding3; /* added 2.1.0. also fix lustre_swab_connect */
@@ -1398,7 +1360,8 @@
* with senior engineers before starting to use a new field. Then, submit
* a small patch against EVERY branch that ONLY adds the new field along with
* the matching OBD_CONNECT flag, so that can be approved and landed easily to
- * reserve the flag for future use. */
+ * reserve the flag for future use.
+ */
void lustre_swab_connect(struct obd_connect_data *ocd);
@@ -1408,18 +1371,18 @@
* Please update DECLARE_CKSUM_NAME/OBD_CKSUM_ALL in obd.h when adding a new
* algorithm and also the OBD_FL_CKSUM* flags.
*/
-typedef enum {
+enum cksum_type {
OBD_CKSUM_CRC32 = 0x00000001,
OBD_CKSUM_ADLER = 0x00000002,
OBD_CKSUM_CRC32C = 0x00000004,
-} cksum_type_t;
+};
/*
* OST requests: OBDO & OBD request records
*/
/* opcodes */
-typedef enum {
+enum ost_cmd {
OST_REPLY = 0, /* reply ? */
OST_GETATTR = 1,
OST_SETATTR = 2,
@@ -1440,14 +1403,14 @@
OST_QUOTACTL = 19,
OST_QUOTA_ADJUST_QUNIT = 20, /* not used since 2.4 */
OST_LAST_OPC
-} ost_cmd_t;
+};
#define OST_FIRST_OPC OST_REPLY
enum obdo_flags {
OBD_FL_INLINEDATA = 0x00000001,
OBD_FL_OBDMDEXISTS = 0x00000002,
OBD_FL_DELORPHAN = 0x00000004, /* if set in o_flags delete orphans */
- OBD_FL_NORPC = 0x00000008, /* set in o_flags do in OSC not OST */
+ OBD_FL_NORPC = 0x00000008, /* set in o_flags do in OSC not OST */
OBD_FL_IDONLY = 0x00000010, /* set in o_flags only adjust obj id*/
OBD_FL_RECREATE_OBJS = 0x00000020, /* recreate missing obj */
OBD_FL_DEBUG_CHECK = 0x00000040, /* echo client/server debug check */
@@ -1461,14 +1424,16 @@
OBD_FL_CKSUM_RSVD2 = 0x00008000, /* for future cksum types */
OBD_FL_CKSUM_RSVD3 = 0x00010000, /* for future cksum types */
OBD_FL_SHRINK_GRANT = 0x00020000, /* object shrink the grant */
- OBD_FL_MMAP = 0x00040000, /* object is mmapped on the client.
+ OBD_FL_MMAP = 0x00040000, /* object is mmapped on the client.
* XXX: obsoleted - reserved for old
- * clients prior than 2.2 */
+ * clients prior to 2.2
+ */
OBD_FL_RECOV_RESEND = 0x00080000, /* recoverable resent */
OBD_FL_NOSPC_BLK = 0x00100000, /* no more block space on OST */
/* Note that while these checksum values are currently separate bits,
- * in 2.x we can actually allow all values from 1-31 if we wanted. */
+ * in 2.x we can actually allow all values from 1-31 if we wanted.
+ */
OBD_FL_CKSUM_ALL = OBD_FL_CKSUM_CRC32 | OBD_FL_CKSUM_ADLER |
OBD_FL_CKSUM_CRC32C,
@@ -1657,7 +1622,7 @@
}
}
-#define OBD_MD_FLID (0x00000001ULL) /* object ID */
+#define OBD_MD_FLID (0x00000001ULL) /* object ID */
#define OBD_MD_FLATIME (0x00000002ULL) /* access time */
#define OBD_MD_FLMTIME (0x00000004ULL) /* data modification time */
#define OBD_MD_FLCTIME (0x00000008ULL) /* change time */
@@ -1683,22 +1648,23 @@
#define OBD_MD_FLGROUP (0x01000000ULL) /* group */
#define OBD_MD_FLFID (0x02000000ULL) /* ->ost write inline fid */
#define OBD_MD_FLEPOCH (0x04000000ULL) /* ->ost write with ioepoch */
- /* ->mds if epoch opens or closes */
+ /* ->mds if epoch opens or closes
+ */
#define OBD_MD_FLGRANT (0x08000000ULL) /* ost preallocation space grant */
#define OBD_MD_FLDIREA (0x10000000ULL) /* dir's extended attribute data */
#define OBD_MD_FLUSRQUOTA (0x20000000ULL) /* over quota flags sent from ost */
#define OBD_MD_FLGRPQUOTA (0x40000000ULL) /* over quota flags sent from ost */
#define OBD_MD_FLMODEASIZE (0x80000000ULL) /* EA size will be changed */
-#define OBD_MD_MDS (0x0000000100000000ULL) /* where an inode lives on */
+#define OBD_MD_MDS (0x0000000100000000ULL) /* where an inode lives on */
#define OBD_MD_REINT (0x0000000200000000ULL) /* reintegrate oa */
-#define OBD_MD_MEA (0x0000000400000000ULL) /* CMD split EA */
+#define OBD_MD_MEA (0x0000000400000000ULL) /* CMD split EA */
#define OBD_MD_TSTATE (0x0000000800000000ULL) /* transient state field */
#define OBD_MD_FLXATTR (0x0000001000000000ULL) /* xattr */
#define OBD_MD_FLXATTRLS (0x0000002000000000ULL) /* xattr list */
#define OBD_MD_FLXATTRRM (0x0000004000000000ULL) /* xattr remove */
-#define OBD_MD_FLACL (0x0000008000000000ULL) /* ACL */
+#define OBD_MD_FLACL (0x0000008000000000ULL) /* ACL */
#define OBD_MD_FLRMTPERM (0x0000010000000000ULL) /* remote permission */
#define OBD_MD_FLMDSCAPA (0x0000020000000000ULL) /* MDS capability */
#define OBD_MD_FLOSSCAPA (0x0000040000000000ULL) /* OSS capability */
@@ -1707,7 +1673,8 @@
#define OBD_MD_FLGETATTRLOCK (0x0000200000000000ULL) /* Get IOEpoch attributes
* under lock; for xattr
* requests means the
- * client holds the lock */
+ * client holds the lock
+ */
#define OBD_MD_FLOBJCOUNT (0x0000400000000000ULL) /* for multiple destroy */
#define OBD_MD_FLRMTLSETFACL (0x0001000000000000ULL) /* lfs lsetfacl case */
@@ -1727,7 +1694,8 @@
#define OBD_MD_FLXATTRALL (OBD_MD_FLXATTR | OBD_MD_FLXATTRLS)
/* don't forget obdo_fid which is way down at the bottom so it can
- * come after the definition of llog_cookie */
+ * come after the definition of llog_cookie
+ */
enum hss_valid {
HSS_SETMASK = 0x01,
@@ -1749,19 +1717,20 @@
/* ost_body.data values for OST_BRW */
-#define OBD_BRW_READ 0x01
-#define OBD_BRW_WRITE 0x02
-#define OBD_BRW_RWMASK (OBD_BRW_READ | OBD_BRW_WRITE)
-#define OBD_BRW_SYNC 0x08 /* this page is a part of synchronous
+#define OBD_BRW_READ 0x01
+#define OBD_BRW_WRITE 0x02
+#define OBD_BRW_RWMASK (OBD_BRW_READ | OBD_BRW_WRITE)
+#define OBD_BRW_SYNC 0x08 /* this page is a part of synchronous
* transfer and is not accounted in
- * the grant. */
-#define OBD_BRW_CHECK 0x10
+ * the grant.
+ */
+#define OBD_BRW_CHECK 0x10
#define OBD_BRW_FROM_GRANT 0x20 /* the osc manages this under llite */
-#define OBD_BRW_GRANTED 0x40 /* the ost manages this */
-#define OBD_BRW_NOCACHE 0x80 /* this page is a part of non-cached IO */
-#define OBD_BRW_NOQUOTA 0x100
-#define OBD_BRW_SRVLOCK 0x200 /* Client holds no lock over this page */
-#define OBD_BRW_ASYNC 0x400 /* Server may delay commit to disk */
+#define OBD_BRW_GRANTED 0x40 /* the ost manages this */
+#define OBD_BRW_NOCACHE 0x80 /* this page is a part of non-cached IO */
+#define OBD_BRW_NOQUOTA 0x100
+#define OBD_BRW_SRVLOCK 0x200 /* Client holds no lock over this page */
+#define OBD_BRW_ASYNC 0x400 /* Server may delay commit to disk */
#define OBD_BRW_MEMALLOC 0x800 /* Client runs in the "kswapd" context */
#define OBD_BRW_OVER_USRQUOTA 0x1000 /* Running out of user quota */
#define OBD_BRW_OVER_GRPQUOTA 0x2000 /* Running out of group quota */
@@ -1775,7 +1744,8 @@
struct ost_id ioo_oid; /* object ID, if multi-obj BRW */
__u32 ioo_max_brw; /* low 16 bits were o_mode before 2.4,
* now (PTLRPC_BULK_OPS_COUNT - 1) in
- * high 16 bits in 2.4 and later */
+ * high 16 bits in 2.4 and later
+ */
__u32 ioo_bufcnt; /* number of niobufs for this object */
};
@@ -1799,7 +1769,8 @@
/* lock value block communicated between the filter and llite */
/* OST_LVB_ERR_INIT is needed because the return code in rc is
- * negative, i.e. because ((MASK + rc) & MASK) != MASK. */
+ * negative, i.e. because ((MASK + rc) & MASK) != MASK.
+ */
#define OST_LVB_ERR_INIT 0xffbadbad80000000ULL
#define OST_LVB_ERR_MASK 0xffbadbad00000000ULL
#define OST_LVB_IS_ERR(blocks) \
@@ -1836,23 +1807,12 @@
* lquota data structures
*/
-#ifndef QUOTABLOCK_BITS
-#define QUOTABLOCK_BITS 10
-#endif
-
-#ifndef QUOTABLOCK_SIZE
-#define QUOTABLOCK_SIZE (1 << QUOTABLOCK_BITS)
-#endif
-
-#ifndef toqb
-#define toqb(x) (((x) + QUOTABLOCK_SIZE - 1) >> QUOTABLOCK_BITS)
-#endif
-
/* The lquota_id structure is an union of all the possible identifier types that
* can be used with quota, this includes:
* - 64-bit user ID
* - 64-bit group ID
- * - a FID which can be used for per-directory quota in the future */
+ * - a FID which can be used for per-directory quota in the future
+ */
union lquota_id {
struct lu_fid qid_fid; /* FID for per-directory quota */
__u64 qid_uid; /* user identifier */
@@ -1889,89 +1849,6 @@
Q_COPY(out, in, qc_dqblk); \
} while (0)
-/* Body of quota request used for quota acquire/release RPCs between quota
- * master (aka QMT) and slaves (ak QSD). */
-struct quota_body {
- struct lu_fid qb_fid; /* FID of global index packing the pool ID
- * and type (data or metadata) as well as
- * the quota type (user or group). */
- union lquota_id qb_id; /* uid or gid or directory FID */
- __u32 qb_flags; /* see below */
- __u32 qb_padding;
- __u64 qb_count; /* acquire/release count (kbytes/inodes) */
- __u64 qb_usage; /* current slave usage (kbytes/inodes) */
- __u64 qb_slv_ver; /* slave index file version */
- struct lustre_handle qb_lockh; /* per-ID lock handle */
- struct lustre_handle qb_glb_lockh; /* global lock handle */
- __u64 qb_padding1[4];
-};
-
-/* When the quota_body is used in the reply of quota global intent
- * lock (IT_QUOTA_CONN) reply, qb_fid contains slave index file FID. */
-#define qb_slv_fid qb_fid
-/* qb_usage is the current qunit (in kbytes/inodes) when quota_body is used in
- * quota reply */
-#define qb_qunit qb_usage
-
-#define QUOTA_DQACQ_FL_ACQ 0x1 /* acquire quota */
-#define QUOTA_DQACQ_FL_PREACQ 0x2 /* pre-acquire */
-#define QUOTA_DQACQ_FL_REL 0x4 /* release quota */
-#define QUOTA_DQACQ_FL_REPORT 0x8 /* report usage */
-
-void lustre_swab_quota_body(struct quota_body *b);
-
-/* Quota types currently supported */
-enum {
- LQUOTA_TYPE_USR = 0x00, /* maps to USRQUOTA */
- LQUOTA_TYPE_GRP = 0x01, /* maps to GRPQUOTA */
- LQUOTA_TYPE_MAX
-};
-
-/* There are 2 different resource types on which a quota limit can be enforced:
- * - inodes on the MDTs
- * - blocks on the OSTs */
-enum {
- LQUOTA_RES_MD = 0x01, /* skip 0 to avoid null oid in FID */
- LQUOTA_RES_DT = 0x02,
- LQUOTA_LAST_RES,
- LQUOTA_FIRST_RES = LQUOTA_RES_MD
-};
-
-#define LQUOTA_NR_RES (LQUOTA_LAST_RES - LQUOTA_FIRST_RES + 1)
-
-/*
- * Space accounting support
- * Format of an accounting record, providing disk usage information for a given
- * user or group
- */
-struct lquota_acct_rec { /* 16 bytes */
- __u64 bspace; /* current space in use */
- __u64 ispace; /* current # inodes in use */
-};
-
-/*
- * Global quota index support
- * Format of a global record, providing global quota settings for a given quota
- * identifier
- */
-struct lquota_glb_rec { /* 32 bytes */
- __u64 qbr_hardlimit; /* quota hard limit, in #inodes or kbytes */
- __u64 qbr_softlimit; /* quota soft limit, in #inodes or kbytes */
- __u64 qbr_time; /* grace time, in seconds */
- __u64 qbr_granted; /* how much is granted to slaves, in #inodes or
- * kbytes */
-};
-
-/*
- * Slave index support
- * Format of a slave record, recording how much space is granted to a given
- * slave
- */
-struct lquota_slv_rec { /* 8 bytes */
- __u64 qsr_granted; /* space granted to the slave for the key=ID,
- * in #inodes or kbytes */
-};
-
/* Data structures associated with the quota locks */
/* Glimpse descriptor used for the index & per-ID quota locks */
@@ -1985,9 +1862,6 @@
__u64 gl_pad2;
};
-#define gl_qunit gl_hardlimit /* current qunit value used when
- * glimpsing per-ID quota locks */
-
/* quota glimpse flags */
#define LQUOTA_FL_EDQUOT 0x1 /* user/group out of quota space on QMT */
@@ -2002,15 +1876,12 @@
void lustre_swab_lquota_lvb(struct lquota_lvb *lvb);
-/* LVB used with global quota lock */
-#define lvb_glb_ver lvb_id_may_rel /* current version of the global index */
-
/* op codes */
-typedef enum {
+enum quota_cmd {
QUOTA_DQACQ = 601,
QUOTA_DQREL = 602,
QUOTA_LAST_OPC
-} quota_cmd_t;
+};
#define QUOTA_FIRST_OPC QUOTA_DQACQ
/*
@@ -2018,7 +1889,7 @@
*/
/* opcodes */
-typedef enum {
+enum mds_cmd {
MDS_GETATTR = 33,
MDS_GETATTR_NAME = 34,
MDS_CLOSE = 35,
@@ -2049,23 +1920,15 @@
MDS_HSM_CT_UNREGISTER = 60,
MDS_SWAP_LAYOUTS = 61,
MDS_LAST_OPC
-} mds_cmd_t;
+};
#define MDS_FIRST_OPC MDS_GETATTR
-/* opcodes for object update */
-typedef enum {
- UPDATE_OBJ = 1000,
- UPDATE_LAST_OPC
-} update_cmd_t;
-
-#define UPDATE_FIRST_OPC UPDATE_OBJ
-
/*
* Do not exceed 63
*/
-typedef enum {
+enum mdt_reint_cmd {
REINT_SETATTR = 1,
REINT_CREATE = 2,
REINT_LINK = 3,
@@ -2074,9 +1937,9 @@
REINT_OPEN = 6,
REINT_SETXATTR = 7,
REINT_RMENTRY = 8,
-// REINT_WRITE = 9,
+/* REINT_WRITE = 9, */
REINT_MAX
-} mds_reint_t, mdt_reint_t;
+};
void lustre_swab_generic_32s(__u32 *val);
@@ -2097,7 +1960,8 @@
/* INODE LOCK PARTS */
#define MDS_INODELOCK_LOOKUP 0x000001 /* For namespace, dentry etc, and also
* was used to protect permission (mode,
- * owner, group etc) before 2.4. */
+ * owner, group etc) before 2.4.
+ */
#define MDS_INODELOCK_UPDATE 0x000002 /* size, links, timestamps */
#define MDS_INODELOCK_OPEN 0x000004 /* For opened files */
#define MDS_INODELOCK_LAYOUT 0x000008 /* for layout */
@@ -2110,7 +1974,8 @@
* For local directory, MDT will always grant UPDATE_LOCK|PERM_LOCK together.
* For Remote directory, the master MDT, where the remote directory is, will
* grant UPDATE_LOCK|PERM_LOCK, and the remote MDT, where the name entry is,
- * will grant LOOKUP_LOCK. */
+ * will grant LOOKUP_LOCK.
+ */
#define MDS_INODELOCK_PERM 0x000010
#define MDS_INODELOCK_XATTR 0x000020 /* extended attributes */
@@ -2120,7 +1985,8 @@
/* NOTE: until Lustre 1.8.7/2.1.1 the fid_ver() was packed into name[2],
* but was moved into name[1] along with the OID to avoid consuming the
- * name[2,3] fields that need to be used for the quota id (also a FID). */
+ * name[2,3] fields that need to be used for the quota id (also a FID).
+ */
enum {
LUSTRE_RES_ID_SEQ_OFF = 0,
LUSTRE_RES_ID_VER_OID_OFF = 1,
@@ -2156,7 +2022,8 @@
#define LUSTRE_BFLAG_UNCOMMITTED_WRITES 0x1
/* these should be identical to their EXT4_*_FL counterparts, they are
- * redefined here only to avoid dragging in fs/ext4/ext4.h */
+ * redefined here only to avoid dragging in fs/ext4/ext4.h
+ */
#define LUSTRE_SYNC_FL 0x00000008 /* Synchronous updates */
#define LUSTRE_IMMUTABLE_FL 0x00000010 /* Immutable file */
#define LUSTRE_APPEND_FL 0x00000020 /* writes to file may only append */
@@ -2168,15 +2035,14 @@
* protocol equivalents of LDISKFS_*_FL values stored on disk, while
* the S_* flags are kernel-internal values that change between kernel
* versions. These flags are set/cleared via FSFILT_IOC_{GET,SET}_FLAGS.
- * See b=16526 for a full history. */
+ * See b=16526 for a full history.
+ */
static inline int ll_ext_to_inode_flags(int flags)
{
return (((flags & LUSTRE_SYNC_FL) ? S_SYNC : 0) |
((flags & LUSTRE_NOATIME_FL) ? S_NOATIME : 0) |
((flags & LUSTRE_APPEND_FL) ? S_APPEND : 0) |
-#if defined(S_DIRSYNC)
((flags & LUSTRE_DIRSYNC_FL) ? S_DIRSYNC : 0) |
-#endif
((flags & LUSTRE_IMMUTABLE_FL) ? S_IMMUTABLE : 0));
}
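Dropping the #if defined(S_DIRSYNC) guards is safe because the kernel has defined S_DIRSYNC unconditionally in <linux/fs.h> for many years. The two translators are exact inverses over the supported bits; a standalone round-trip sketch, using this header's LUSTRE_*_FL values and placeholder S_* bits:

#include <assert.h>

#define LUSTRE_SYNC_FL		0x00000008
#define LUSTRE_APPEND_FL	0x00000020
#define DEMO_S_SYNC		0x1	/* stand-ins for the kernel S_* bits */
#define DEMO_S_APPEND		0x2

static int demo_ext_to_inode(int flags)
{
	return ((flags & LUSTRE_SYNC_FL) ? DEMO_S_SYNC : 0) |
	       ((flags & LUSTRE_APPEND_FL) ? DEMO_S_APPEND : 0);
}

static int demo_inode_to_ext(int iflags)
{
	return ((iflags & DEMO_S_SYNC) ? LUSTRE_SYNC_FL : 0) |
	       ((iflags & DEMO_S_APPEND) ? LUSTRE_APPEND_FL : 0);
}

int main(void)
{
	int ext = LUSTRE_SYNC_FL | LUSTRE_APPEND_FL;

	assert(demo_inode_to_ext(demo_ext_to_inode(ext)) == ext);
	return 0;
}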
@@ -2185,9 +2051,7 @@
return (((iflags & S_SYNC) ? LUSTRE_SYNC_FL : 0) |
((iflags & S_NOATIME) ? LUSTRE_NOATIME_FL : 0) |
((iflags & S_APPEND) ? LUSTRE_APPEND_FL : 0) |
-#if defined(S_DIRSYNC)
((iflags & S_DIRSYNC) ? LUSTRE_DIRSYNC_FL : 0) |
-#endif
((iflags & S_IMMUTABLE) ? LUSTRE_IMMUTABLE_FL : 0));
}
@@ -2207,9 +2071,10 @@
__s64 ctime;
__u64 blocks; /* XID, in the case of MDS_READPAGE */
__u64 ioepoch;
- __u64 t_state; /* transient file state defined in
- * enum md_transient_state
- * was "ino" until 2.4.0 */
+ __u64 t_state; /* transient file state defined in
+ * enum md_transient_state
+ * was "ino" until 2.4.0
+ */
__u32 fsuid;
__u32 fsgid;
__u32 capability;
@@ -2219,7 +2084,7 @@
__u32 flags; /* from vfs for pin/unpin, LUSTRE_BFLAG close */
__u32 rdev;
__u32 nlink; /* #bytes to read in the case of MDS_READPAGE */
- __u32 unused2; /* was "generation" until 2.4.0 */
+ __u32 unused2; /* was "generation" until 2.4.0 */
__u32 suppgid;
__u32 eadatasize;
__u32 aclsize;
@@ -2256,7 +2121,8 @@
};
/* inode access permission for remote user, the inode info are omitted,
- * for client knows them. */
+ * because the client already knows them.
+ */
struct mdt_remote_perm {
__u32 rp_uid;
__u32 rp_gid;
@@ -2306,13 +2172,13 @@
* since the client and MDS may run different kernels (see bug 13828)
* Therefore, we should only use MDS_ATTR_* attributes for sa_valid.
*/
-#define MDS_ATTR_MODE 0x1ULL /* = 1 */
-#define MDS_ATTR_UID 0x2ULL /* = 2 */
-#define MDS_ATTR_GID 0x4ULL /* = 4 */
-#define MDS_ATTR_SIZE 0x8ULL /* = 8 */
-#define MDS_ATTR_ATIME 0x10ULL /* = 16 */
-#define MDS_ATTR_MTIME 0x20ULL /* = 32 */
-#define MDS_ATTR_CTIME 0x40ULL /* = 64 */
+#define MDS_ATTR_MODE 0x1ULL /* = 1 */
+#define MDS_ATTR_UID 0x2ULL /* = 2 */
+#define MDS_ATTR_GID 0x4ULL /* = 4 */
+#define MDS_ATTR_SIZE 0x8ULL /* = 8 */
+#define MDS_ATTR_ATIME 0x10ULL /* = 16 */
+#define MDS_ATTR_MTIME 0x20ULL /* = 32 */
+#define MDS_ATTR_CTIME 0x40ULL /* = 64 */
#define MDS_ATTR_ATIME_SET 0x80ULL /* = 128 */
#define MDS_ATTR_MTIME_SET 0x100ULL /* = 256 */
#define MDS_ATTR_FORCE 0x200ULL /* = 512, Not a change, but a change it */
@@ -2320,14 +2186,11 @@
#define MDS_ATTR_KILL_SUID 0x800ULL /* = 2048 */
#define MDS_ATTR_KILL_SGID 0x1000ULL /* = 4096 */
#define MDS_ATTR_CTIME_SET 0x2000ULL /* = 8192 */
-#define MDS_ATTR_FROM_OPEN 0x4000ULL /* = 16384, called from open path, ie O_TRUNC */
+#define MDS_ATTR_FROM_OPEN 0x4000ULL /* = 16384, called from open path,
+ * ie O_TRUNC
+ */
#define MDS_ATTR_BLOCKS 0x8000ULL /* = 32768 */
-#ifndef FMODE_READ
-#define FMODE_READ 00000001
-#define FMODE_WRITE 00000002
-#endif
-
#define MDS_FMODE_CLOSED 00000000
#define MDS_FMODE_EXEC 00000004
/* IO Epoch is opened on a closed file. */
@@ -2354,9 +2217,10 @@
* We do not support JOIN FILE
* anymore, reserve this flags
* just for preventing such bit
- * to be reused. */
+ * to be reused.
+ */
-#define MDS_OPEN_LOCK 04000000000 /* This open requires open lock */
+#define MDS_OPEN_LOCK 04000000000 /* This open requires open lock */
#define MDS_OPEN_HAS_EA 010000000000 /* specify object create pattern */
#define MDS_OPEN_HAS_OBJS 020000000000 /* Just set the EA the obj exist */
#define MDS_OPEN_NORESTORE 0100000000000ULL /* Do not restore file at open */
@@ -2409,7 +2273,8 @@
__u32 cr_bias;
/* use of helpers set/get_mrc_cr_flags() is needed to access
* 64 bits cr_flags [cr_flags_l, cr_flags_h], this is done to
- * extend cr_flags size without breaking 1.8 compat */
+ * extend cr_flags size without breaking 1.8 compat
+ */
__u32 cr_flags_l; /* for use with open, low 32 bits */
__u32 cr_flags_h; /* for use with open, high 32 bits */
__u32 cr_umask; /* umask for create */
@@ -2630,7 +2495,8 @@
#define LOV_MAX_UUID_BUFFER_SIZE 8192
/* The size of the buffer the lov/mdc reserves for the
* array of UUIDs returned by the MDS. With the current
- * protocol, this will limit the max number of OSTs per LOV */
+ * protocol, this will limit the max number of OSTs per LOV
+ */
#define LOV_DESC_MAGIC 0xB0CCDE5C
#define LOV_DESC_QOS_MAXAGE_DEFAULT 5 /* Seconds */
@@ -2639,13 +2505,13 @@
/* LOV settings descriptor (should only contain static info) */
struct lov_desc {
__u32 ld_tgt_count; /* how many OBD's */
- __u32 ld_active_tgt_count; /* how many active */
- __u32 ld_default_stripe_count; /* how many objects are used */
- __u32 ld_pattern; /* default PATTERN_RAID0 */
- __u64 ld_default_stripe_size; /* in bytes */
- __u64 ld_default_stripe_offset; /* in bytes */
+ __u32 ld_active_tgt_count; /* how many active */
+ __u32 ld_default_stripe_count; /* how many objects are used */
+ __u32 ld_pattern; /* default PATTERN_RAID0 */
+ __u64 ld_default_stripe_size; /* in bytes */
+ __u64 ld_default_stripe_offset; /* in bytes */
__u32 ld_padding_0; /* unused */
- __u32 ld_qos_maxage; /* in second */
+ __u32 ld_qos_maxage; /* in second */
__u32 ld_padding_1; /* also fix lustre_swab_lov_desc */
__u32 ld_padding_2; /* also fix lustre_swab_lov_desc */
struct obd_uuid ld_uuid;
@@ -2659,7 +2525,7 @@
* LDLM requests:
*/
/* opcodes -- MUST be distinct from OST/MDS opcodes */
-typedef enum {
+enum ldlm_cmd {
LDLM_ENQUEUE = 101,
LDLM_CONVERT = 102,
LDLM_CANCEL = 103,
@@ -2668,7 +2534,7 @@
LDLM_GL_CALLBACK = 106,
LDLM_SET_INFO = 107,
LDLM_LAST_OPC
-} ldlm_cmd_t;
+};
#define LDLM_FIRST_OPC LDLM_ENQUEUE
#define RES_NAME_SIZE 4
@@ -2687,7 +2553,7 @@
}
/* lock types */
-typedef enum {
+enum ldlm_mode {
LCK_MINMODE = 0,
LCK_EX = 1,
LCK_PW = 2,
@@ -2698,17 +2564,17 @@
LCK_GROUP = 64,
LCK_COS = 128,
LCK_MAXMODE
-} ldlm_mode_t;
+};
#define LCK_MODE_NUM 8
-typedef enum {
+enum ldlm_type {
LDLM_PLAIN = 10,
LDLM_EXTENT = 11,
LDLM_FLOCK = 12,
LDLM_IBITS = 13,
LDLM_MAX_TYPE
-} ldlm_type_t;
+};
#define LDLM_MIN_TYPE LDLM_PLAIN
@@ -2747,7 +2613,8 @@
* the first fields of the ldlm_flock structure because there is only
* one ldlm_swab routine to process the ldlm_policy_data_t union. if
* this ever changes we will need to swab the union differently based
- * on the resource type. */
+ * on the resource type.
+ */
typedef union {
struct ldlm_extent l_extent;
@@ -2768,15 +2635,15 @@
void lustre_swab_ldlm_intent(struct ldlm_intent *i);
struct ldlm_resource_desc {
- ldlm_type_t lr_type;
+ enum ldlm_type lr_type;
__u32 lr_padding; /* also fix lustre_swab_ldlm_resource_desc */
struct ldlm_res_id lr_name;
};
struct ldlm_lock_desc {
struct ldlm_resource_desc l_resource;
- ldlm_mode_t l_req_mode;
- ldlm_mode_t l_granted_mode;
+ enum ldlm_mode l_req_mode;
+ enum ldlm_mode l_granted_mode;
ldlm_wire_policy_data_t l_policy_data;
};
@@ -2793,7 +2660,8 @@
void lustre_swab_ldlm_request(struct ldlm_request *rq);
/* If LDLM_ENQUEUE, 1 slot is already occupied, 1 is available.
- * Otherwise, 2 are available. */
+ * Otherwise, 2 are available.
+ */
#define ldlm_request_bufsize(count, type) \
({ \
int _avail = LDLM_LOCKREQ_HANDLES; \
@@ -2820,7 +2688,7 @@
/*
* Opcodes for mountconf (mgs and mgc)
*/
-typedef enum {
+enum mgs_cmd {
MGS_CONNECT = 250,
MGS_DISCONNECT,
MGS_EXCEPTION, /* node died, etc. */
@@ -2829,7 +2697,7 @@
MGS_SET_INFO,
MGS_CONFIG_READ,
MGS_LAST_OPC
-} mgs_cmd_t;
+};
#define MGS_FIRST_OPC MGS_CONNECT
#define MGS_PARAM_MAXLEN 1024
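This is the same typedef-to-plain-enum conversion applied to every opcode list in the file; kernel style avoids typedef'd enums so that call sites name the type explicitly. The shape of the change in isolation (hypothetical names):

#include <stdio.h>

/* was: typedef enum { ... } demo_cmd_t; */
enum demo_cmd {
	DEMO_CONNECT = 250,
	DEMO_DISCONNECT = 251,
};

/* call sites now spell out "enum demo_cmd" instead of demo_cmd_t */
static const char *demo_cmd_name(enum demo_cmd cmd)
{
	return cmd == DEMO_CONNECT ? "connect" : "disconnect";
}

int main(void)
{
	printf("%s\n", demo_cmd_name(DEMO_CONNECT));
	return 0;
}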
@@ -2918,13 +2786,13 @@
* Opcodes for multiple servers.
*/
-typedef enum {
+enum obd_cmd {
OBD_PING = 400,
OBD_LOG_CANCEL,
OBD_QC_CALLBACK,
OBD_IDX_READ,
OBD_LAST_OPC
-} obd_cmd_t;
+};
#define OBD_FIRST_OPC OBD_PING
/* catalog of log objects */
@@ -2933,7 +2801,7 @@
struct llog_logid {
struct ost_id lgl_oi;
__u32 lgl_ogen;
-} __attribute__((packed));
+} __packed;
/** Records written to the CATALOGS list */
#define CATLIST "CATALOGS"
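The __attribute__((packed)) conversions that follow are purely cosmetic: __packed is the kernel's shorthand for the same attribute (defined in its compiler headers), so struct layout on the wire is untouched. A standalone illustration of what the attribute itself does:

#include <assert.h>
#include <stdint.h>

#define __packed __attribute__((packed))	/* as in the kernel headers */

struct demo_rec {
	uint32_t magic;
	uint8_t  type;
} __packed;

int main(void)
{
	/* no tail padding: 4 + 1 bytes, not 8 */
	assert(sizeof(struct demo_rec) == 5);
	return 0;
}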
@@ -2942,7 +2810,7 @@
__u32 lci_padding1;
__u32 lci_padding2;
__u32 lci_padding3;
-} __attribute__((packed));
+} __packed;
/* Log data record types - there is no specific reason that these need to
* be related to the RPC opcodes, but no reason not to (may be handy later?)
@@ -2950,7 +2818,7 @@
#define LLOG_OP_MAGIC 0x10600000
#define LLOG_OP_MASK 0xfff00000
-typedef enum {
+enum llog_op_type {
LLOG_PAD_MAGIC = LLOG_OP_MAGIC | 0x00000,
OST_SZ_REC = LLOG_OP_MAGIC | 0x00f00,
/* OST_RAID1_REC = LLOG_OP_MAGIC | 0x01000, never used */
@@ -2970,7 +2838,7 @@
HSM_AGENT_REC = LLOG_OP_MAGIC | 0x80000,
LLOG_HDR_MAGIC = LLOG_OP_MAGIC | 0x45539,
LLOG_LOGID_MAGIC = LLOG_OP_MAGIC | 0x4553b,
-} llog_op_type;
+};
#define LLOG_REC_HDR_NEEDS_SWABBING(r) \
(((r)->lrh_type & __swab32(LLOG_OP_MASK)) == __swab32(LLOG_OP_MAGIC))
@@ -3006,7 +2874,7 @@
__u64 lid_padding2;
__u64 lid_padding3;
struct llog_rec_tail lid_tail;
-} __attribute__((packed));
+} __packed;
struct llog_unlink_rec {
struct llog_rec_hdr lur_hdr;
@@ -3014,7 +2882,7 @@
__u32 lur_oseq;
__u32 lur_count;
struct llog_rec_tail lur_tail;
-} __attribute__((packed));
+} __packed;
struct llog_unlink64_rec {
struct llog_rec_hdr lur_hdr;
@@ -3024,7 +2892,7 @@
__u64 lur_padding2;
__u64 lur_padding3;
struct llog_rec_tail lur_tail;
-} __attribute__((packed));
+} __packed;
struct llog_setattr64_rec {
struct llog_rec_hdr lsr_hdr;
@@ -3035,7 +2903,7 @@
__u32 lsr_gid_h;
__u64 lsr_padding;
struct llog_rec_tail lsr_tail;
-} __attribute__((packed));
+} __packed;
struct llog_size_change_rec {
struct llog_rec_hdr lsc_hdr;
@@ -3045,16 +2913,7 @@
__u64 lsc_padding2;
__u64 lsc_padding3;
struct llog_rec_tail lsc_tail;
-} __attribute__((packed));
-
-#define CHANGELOG_MAGIC 0xca103000
-
-/** \a changelog_rec_type's that can't be masked */
-#define CHANGELOG_MINMASK (1 << CL_MARK)
-/** bits covering all \a changelog_rec_type's */
-#define CHANGELOG_ALLMASK 0XFFFFFFFF
-/** default \a changelog_rec_type mask */
-#define CHANGELOG_DEFMASK CHANGELOG_ALLMASK & ~(1 << CL_ATIME | 1 << CL_CLOSE)
+} __packed;
/* changelog llog name, needed by client replicators */
#define CHANGELOG_CATALOG "changelog_catalog"
@@ -3062,22 +2921,20 @@
struct changelog_setinfo {
__u64 cs_recno;
__u32 cs_id;
-} __attribute__((packed));
+} __packed;
/** changelog record */
struct llog_changelog_rec {
struct llog_rec_hdr cr_hdr;
struct changelog_rec cr;
struct llog_rec_tail cr_tail; /**< for_sizezof_only */
-} __attribute__((packed));
+} __packed;
struct llog_changelog_ext_rec {
struct llog_rec_hdr cr_hdr;
struct changelog_ext_rec cr;
struct llog_rec_tail cr_tail; /**< for_sizezof_only */
-} __attribute__((packed));
-
-#define CHANGELOG_USER_PREFIX "cl"
+} __packed;
struct llog_changelog_user_rec {
struct llog_rec_hdr cur_hdr;
@@ -3085,7 +2942,7 @@
__u32 cur_padding;
__u64 cur_endrec;
struct llog_rec_tail cur_tail;
-} __attribute__((packed));
+} __packed;
enum agent_req_status {
ARS_WAITING,
@@ -3123,21 +2980,22 @@
struct llog_rec_hdr arr_hdr; /**< record header */
__u32 arr_status; /**< status of the request */
/* must match enum
- * agent_req_status */
+ * agent_req_status
+ */
__u32 arr_archive_id; /**< backend archive number */
__u64 arr_flags; /**< req flags */
- __u64 arr_compound_id; /**< compound cookie */
+ __u64 arr_compound_id;/**< compound cookie */
__u64 arr_req_create; /**< req. creation time */
__u64 arr_req_change; /**< req. status change time */
struct hsm_action_item arr_hai; /**< req. to the agent */
- struct llog_rec_tail arr_tail; /**< record tail for_sizezof_only */
-} __attribute__((packed));
+ struct llog_rec_tail arr_tail; /**< record tail for_sizezof_only */
+} __packed;
/* Old llog gen for compatibility */
struct llog_gen {
__u64 mnt_cnt;
__u64 conn_cnt;
-} __attribute__((packed));
+} __packed;
struct llog_gen_rec {
struct llog_rec_hdr lgr_hdr;
@@ -3175,19 +3033,21 @@
__u32 llh_reserved[LLOG_HEADER_SIZE/sizeof(__u32) - 23];
__u32 llh_bitmap[LLOG_BITMAP_BYTES/sizeof(__u32)];
struct llog_rec_tail llh_tail;
-} __attribute__((packed));
+} __packed;
#define LLOG_BITMAP_SIZE(llh) (__u32)((llh->llh_hdr.lrh_len - \
llh->llh_bitmap_offset - \
sizeof(llh->llh_tail)) * 8)
-/** log cookies are used to reference a specific log file and a record therein */
+/** log cookies are used to reference a specific log file and a record
+ * therein
+ */
struct llog_cookie {
struct llog_logid lgc_lgl;
__u32 lgc_subsys;
__u32 lgc_index;
__u32 lgc_padding;
-} __attribute__((packed));
+} __packed;
/** llog protocol */
enum llogd_rpc_ops {
@@ -3196,7 +3056,7 @@
LLOG_ORIGIN_HANDLE_READ_HEADER = 503,
LLOG_ORIGIN_HANDLE_WRITE_REC = 504,
LLOG_ORIGIN_HANDLE_CLOSE = 505,
- LLOG_ORIGIN_CONNECT = 506,
+ LLOG_ORIGIN_CONNECT = 506,
LLOG_CATINFO = 507, /* deprecated */
LLOG_ORIGIN_HANDLE_PREV_BLOCK = 508,
LLOG_ORIGIN_HANDLE_DESTROY = 509, /* for destroy llog object*/
@@ -3212,13 +3072,13 @@
__u32 lgd_saved_index;
__u32 lgd_len;
__u64 lgd_cur_offset;
-} __attribute__((packed));
+} __packed;
struct llogd_conn_body {
struct llog_gen lgdc_gen;
struct llog_logid lgdc_logid;
__u32 lgdc_ctxt_idx;
-} __attribute__((packed));
+} __packed;
/* Note: 64-bit types are 64-bit aligned in structure */
struct obdo {
@@ -3245,17 +3105,18 @@
__u64 o_ioepoch; /* epoch in ost writes */
__u32 o_stripe_idx; /* holds stripe idx */
__u32 o_parent_ver;
- struct lustre_handle o_handle; /* brw: lock handle to prolong
- * locks */
- struct llog_cookie o_lcookie; /* destroy: unlink cookie from
- * MDS */
+ struct lustre_handle o_handle; /* brw: lock handle to prolong locks
+ */
+ struct llog_cookie o_lcookie; /* destroy: unlink cookie from MDS
+ */
__u32 o_uid_h;
__u32 o_gid_h;
__u64 o_data_version; /* getattr: sum of iversion for
* each stripe.
* brw: grant space consumed on
- * the client for the write */
+ * the client for the write
+ */
__u64 o_padding_4;
__u64 o_padding_5;
__u64 o_padding_6;
@@ -3273,13 +3134,14 @@
{
*wobdo = *lobdo;
wobdo->o_flags &= ~OBD_FL_LOCAL_MASK;
- if (ocd == NULL)
+ if (!ocd)
return;
if (unlikely(!(ocd->ocd_connect_flags & OBD_CONNECT_FID)) &&
fid_seq_is_echo(ostid_seq(&lobdo->o_oi))) {
/* Currently OBD_FL_OSTID will only be used when 2.4 echo
- * client communicate with pre-2.4 server */
+ * client communicates with a pre-2.4 server
+ */
wobdo->o_oi.oi.oi_id = fid_oid(&lobdo->o_oi.oi_fid);
wobdo->o_oi.oi.oi_seq = fid_seq(&lobdo->o_oi.oi_fid);
}
@@ -3292,7 +3154,7 @@
__u32 local_flags = 0;
if (lobdo->o_valid & OBD_MD_FLFLAGS)
- local_flags = lobdo->o_flags & OBD_FL_LOCAL_MASK;
+ local_flags = lobdo->o_flags & OBD_FL_LOCAL_MASK;
*lobdo = *wobdo;
if (local_flags != 0) {
@@ -3300,7 +3162,7 @@
lobdo->o_flags &= ~OBD_FL_LOCAL_MASK;
lobdo->o_flags |= local_flags;
}
- if (ocd == NULL)
+ if (!ocd)
return;
if (unlikely(!(ocd->ocd_connect_flags & OBD_CONNECT_FID)) &&
@@ -3349,100 +3211,14 @@
void dump_ost_body(struct ost_body *ob);
void dump_rcs(__u32 *rc);
-#define IDX_INFO_MAGIC 0x3D37CC37
-
-/* Index file transfer through the network. The server serializes the index into
- * a byte stream which is sent to the client via a bulk transfer */
-struct idx_info {
- __u32 ii_magic;
-
- /* reply: see idx_info_flags below */
- __u32 ii_flags;
-
- /* request & reply: number of lu_idxpage (to be) transferred */
- __u16 ii_count;
- __u16 ii_pad0;
-
- /* request: requested attributes passed down to the iterator API */
- __u32 ii_attrs;
-
- /* request & reply: index file identifier (FID) */
- struct lu_fid ii_fid;
-
- /* reply: version of the index file before starting to walk the index.
- * Please note that the version can be modified at any time during the
- * transfer */
- __u64 ii_version;
-
- /* request: hash to start with:
- * reply: hash of the first entry of the first lu_idxpage and hash
- * of the entry to read next if any */
- __u64 ii_hash_start;
- __u64 ii_hash_end;
-
- /* reply: size of keys in lu_idxpages, minimal one if II_FL_VARKEY is
- * set */
- __u16 ii_keysize;
-
- /* reply: size of records in lu_idxpages, minimal one if II_FL_VARREC
- * is set */
- __u16 ii_recsize;
-
- __u32 ii_pad1;
- __u64 ii_pad2;
- __u64 ii_pad3;
-};
-
-void lustre_swab_idx_info(struct idx_info *ii);
-
-#define II_END_OFF MDS_DIR_END_OFF /* all entries have been read */
-
-/* List of flags used in idx_info::ii_flags */
-enum idx_info_flags {
- II_FL_NOHASH = 1 << 0, /* client doesn't care about hash value */
- II_FL_VARKEY = 1 << 1, /* keys can be of variable size */
- II_FL_VARREC = 1 << 2, /* records can be of variable size */
- II_FL_NONUNQ = 1 << 3, /* index supports non-unique keys */
-};
-
-#define LIP_MAGIC 0x8A6D6B6C
-
-/* 4KB (= LU_PAGE_SIZE) container gathering key/record pairs */
-struct lu_idxpage {
- /* 16-byte header */
- __u32 lip_magic;
- __u16 lip_flags;
- __u16 lip_nr; /* number of entries in the container */
- __u64 lip_pad0; /* additional padding for future use */
-
- /* key/record pairs are stored in the remaining 4080 bytes.
- * depending upon the flags in idx_info::ii_flags, each key/record
- * pair might be preceded by:
- * - a hash value
- * - the key size (II_FL_VARKEY is set)
- * - the record size (II_FL_VARREC is set)
- *
- * For the time being, we only support fixed-size key & record. */
- char lip_entries[0];
-};
-
-#define LIP_HDR_SIZE (offsetof(struct lu_idxpage, lip_entries))
-
-/* Gather all possible type associated with a 4KB container */
-union lu_page {
- struct lu_dirpage lp_dir; /* for MDS_READPAGE */
- struct lu_idxpage lp_idx; /* for OBD_IDX_READ */
- char lp_array[LU_PAGE_SIZE];
-};
-
/* security opcodes */
-typedef enum {
+enum sec_cmd {
SEC_CTX_INIT = 801,
SEC_CTX_INIT_CONT = 802,
SEC_CTX_FINI = 803,
SEC_LAST_OPC,
SEC_FIRST_OPC = SEC_CTX_INIT
-} sec_cmd_t;
+};
/*
* capa related definitions
@@ -3451,7 +3227,8 @@
#define CAPA_HMAC_KEY_MAX_LEN 56
/* NB take care when changing the sequence of elements this struct,
- * because the offset info is used in find_capa() */
+ * because the offset info is used in find_capa()
+ */
struct lustre_capa {
struct lu_fid lc_fid; /** fid */
__u64 lc_opc; /** operations allowed */
@@ -3463,7 +3240,7 @@
/* FIXME: y2038 time_t overflow: */
__u32 lc_expiry; /** expiry time (sec) */
__u8 lc_hmac[CAPA_HMAC_MAX_LEN]; /** HMAC */
-} __attribute__((packed));
+} __packed;
void lustre_swab_lustre_capa(struct lustre_capa *c);
@@ -3497,7 +3274,7 @@
__u32 lk_keyid; /**< key# */
__u32 lk_padding;
__u8 lk_key[CAPA_HMAC_KEY_MAX_LEN]; /**< key */
-} __attribute__((packed));
+} __packed;
/** The link ea holds 1 \a link_ea_entry for each hardlink */
#define LINK_EA_MAGIC 0x11EAF1DFUL
@@ -3518,7 +3295,7 @@
unsigned char lee_reclen[2];
unsigned char lee_parent_fid[sizeof(struct lu_fid)];
char lee_name[0];
-} __attribute__((packed));
+} __packed;
/** fid2path request/reply structure */
struct getinfo_fid2path {
@@ -3527,7 +3304,7 @@
__u32 gf_linkno;
__u32 gf_pathlen;
char gf_path[0];
-} __attribute__((packed));
+} __packed;
void lustre_swab_fid2path (struct getinfo_fid2path *gf);
@@ -3558,7 +3335,7 @@
*/
struct hsm_progress_kernel {
/* Field taken from struct hsm_progress */
- lustre_fid hpk_fid;
+ struct lu_fid hpk_fid;
__u64 hpk_cookie;
struct hsm_extent hpk_extent;
__u16 hpk_flags;
@@ -3567,7 +3344,7 @@
/* Additional fields */
__u64 hpk_data_version;
__u64 hpk_padding2;
-} __attribute__((packed));
+} __packed;
void lustre_swab_hsm_user_state(struct hsm_user_state *hus);
void lustre_swab_hsm_current_action(struct hsm_current_action *action);
@@ -3576,92 +3353,6 @@
void lustre_swab_hsm_user_item(struct hsm_user_item *hui);
void lustre_swab_hsm_request(struct hsm_request *hr);
-/**
- * These are object update opcode under UPDATE_OBJ, which is currently
- * being used by cross-ref operations between MDT.
- *
- * During the cross-ref operation, the Master MDT, which the client send the
- * request to, will disassembly the operation into object updates, then OSP
- * will send these updates to the remote MDT to be executed.
- *
- * Update request format
- * magic: UPDATE_BUFFER_MAGIC_V1
- * Count: How many updates in the req.
- * bufs[0] : following are packets of object.
- * update[0]:
- * type: object_update_op, the op code of update
- * fid: The object fid of the update.
- * lens/bufs: other parameters of the update.
- * update[1]:
- * type: object_update_op, the op code of update
- * fid: The object fid of the update.
- * lens/bufs: other parameters of the update.
- * ..........
- * update[7]: type: object_update_op, the op code of update
- * fid: The object fid of the update.
- * lens/bufs: other parameters of the update.
- * Current 8 maxim updates per object update request.
- *
- *******************************************************************
- * update reply format:
- *
- * ur_version: UPDATE_REPLY_V1
- * ur_count: The count of the reply, which is usually equal
- * to the number of updates in the request.
- * ur_lens: The reply lengths of each object update.
- *
- * replies: 1st update reply [4bytes_ret: other body]
- * 2nd update reply [4bytes_ret: other body]
- * .....
- * nth update reply [4bytes_ret: other body]
- *
- * For each reply of the update, the format would be
- * result(4 bytes):Other stuff
- */
-
-#define UPDATE_MAX_OPS 10
-#define UPDATE_BUFFER_MAGIC_V1 0xBDDE0001
-#define UPDATE_BUFFER_MAGIC UPDATE_BUFFER_MAGIC_V1
-#define UPDATE_BUF_COUNT 8
-enum object_update_op {
- OBJ_CREATE = 1,
- OBJ_DESTROY = 2,
- OBJ_REF_ADD = 3,
- OBJ_REF_DEL = 4,
- OBJ_ATTR_SET = 5,
- OBJ_ATTR_GET = 6,
- OBJ_XATTR_SET = 7,
- OBJ_XATTR_GET = 8,
- OBJ_INDEX_LOOKUP = 9,
- OBJ_INDEX_INSERT = 10,
- OBJ_INDEX_DELETE = 11,
- OBJ_LAST
-};
-
-struct update {
- __u32 u_type;
- __u32 u_batchid;
- struct lu_fid u_fid;
- __u32 u_lens[UPDATE_BUF_COUNT];
- __u32 u_bufs[0];
-};
-
-struct update_buf {
- __u32 ub_magic;
- __u32 ub_count;
- __u32 ub_bufs[0];
-};
-
-#define UPDATE_REPLY_V1 0x00BD0001
-struct update_reply {
- __u32 ur_version;
- __u32 ur_count;
- __u32 ur_lens[0];
-};
-
-void lustre_swab_update_buf(struct update_buf *ub);
-void lustre_swab_update_reply_buf(struct update_reply *ur);
-
/** layout swap request structure
* fid1 and fid2 are in mdt_body
*/
diff --git a/drivers/staging/lustre/lustre/include/lustre/lustre_user.h b/drivers/staging/lustre/lustre/include/lustre/lustre_user.h
index 2b4dd65..2e9f025 100644
--- a/drivers/staging/lustre/lustre/include/lustre/lustre_user.h
+++ b/drivers/staging/lustre/lustre/include/lustre/lustre_user.h
@@ -85,9 +85,8 @@
__u32 os_namelen;
__u64 os_maxbytes;
__u32 os_state; /**< obd_statfs_state OS_STATE_* flag */
- __u32 os_fprecreated; /* objs available now to the caller */
- /* used in QoS code to find preferred
- * OSTs */
+ __u32 os_fprecreated; /* objs available now to the caller */
+ /* used in QoS code to find preferred OSTs */
__u32 os_spare2;
__u32 os_spare3;
__u32 os_spare4;
@@ -135,8 +134,9 @@
/* Userspace should treat lu_fid as opaque, and only use the following methods
* to print or parse them. Other functions (e.g. compare, swab) could be moved
- * here from lustre_idl.h if needed. */
-typedef struct lu_fid lustre_fid;
+ * here from lustre_idl.h if needed.
+ */
+struct lu_fid;
/**
* Following struct for object attributes, that will be kept inode's EA.
@@ -266,7 +266,8 @@
/* Define O_LOV_DELAY_CREATE to be a mask that is not useful for regular
* files, but are unlikely to be used in practice and are not harmful if
* used incorrectly. O_NOCTTY and FASYNC are only meaningful for character
- * devices and are safe for use on new files (See LU-812, LU-4209). */
+ * devices and are safe for use on new files (See LU-812, LU-4209).
+ */
#define O_LOV_DELAY_CREATE (O_NOCTTY | FASYNC)
#define LL_FILE_IGNORE_LOCK 0x00000001
@@ -302,7 +303,8 @@
* The limit of 12 pages is somewhat arbitrary, but is a reasonably large
* allocation that is sufficient for the current generation of systems.
*
- * (max buffer size - lov+rpc header) / sizeof(struct lov_ost_data_v1) */
+ * (max buffer size - lov+rpc header) / sizeof(struct lov_ost_data_v1)
+ */
#define LOV_MAX_STRIPE_COUNT 2000 /* ((12 * 4096 - 256) / 24) */
#define LOV_ALL_STRIPES 0xffff /* only valid for directories */
#define LOV_V1_INSANE_STRIPE_COUNT 65532 /* maximum stripe count bz13933 */
@@ -323,9 +325,11 @@
__u16 lmm_stripe_count; /* num stripes in use for this object */
union {
__u16 lmm_stripe_offset; /* starting stripe offset in
- * lmm_objects, use when writing */
+ * lmm_objects, use when writing
+ */
__u16 lmm_layout_gen; /* layout generation number
- * used when reading */
+ * used when reading
+ */
};
struct lov_user_ost_data_v1 lmm_objects[0]; /* per-stripe data */
} __attribute__((packed, __may_alias__));
@@ -338,9 +342,11 @@
__u16 lmm_stripe_count; /* num stripes in use for this object */
union {
__u16 lmm_stripe_offset; /* starting stripe offset in
- * lmm_objects, use when writing */
+ * lmm_objects, use when writing
+ */
__u16 lmm_layout_gen; /* layout generation number
- * used when reading */
+ * used when reading
+ */
};
char lmm_pool_name[LOV_MAXPOOLNAME]; /* pool name */
struct lov_user_ost_data_v1 lmm_objects[0]; /* per-stripe data */
@@ -444,7 +450,8 @@
{
if (uuid->uuid[sizeof(*uuid) - 1] != '\0') {
/* Obviously not safe, but for printfs, no real harm done...
- we're always null-terminated, even in a race. */
+ * we're always null-terminated, even in a race.
+ */
static char temp[sizeof(*uuid)];
memcpy(temp, uuid->uuid, sizeof(*uuid) - 1);
@@ -455,8 +462,9 @@
}
/* Extract fsname from uuid (or target name) of a target
- e.g. (myfs-OST0007_UUID -> myfs)
- see also deuuidify. */
+ * e.g. (myfs-OST0007_UUID -> myfs)
+ * see also deuuidify.
+ */
static inline void obd_uuid2fsname(char *buf, char *uuid, int buflen)
{
char *p;
@@ -465,11 +473,12 @@
buf[buflen - 1] = '\0';
p = strrchr(buf, '-');
if (p)
- *p = '\0';
+ *p = '\0';
}
/* printf display format
- e.g. printf("file FID is "DFID"\n", PFID(fid)); */
+ * e.g. printf("file FID is "DFID"\n", PFID(fid));
+ */
#define FID_NOBRACE_LEN 40
#define FID_LEN (FID_NOBRACE_LEN + 2)
#define DFID_NOBRACE "%#llx:0x%x:0x%x"
@@ -480,7 +489,8 @@
(fid)->f_ver
/* scanf input parse format -- strip '[' first.
- e.g. sscanf(fidstr, SFID, RFID(&fid)); */
+ * e.g. sscanf(fidstr, SFID, RFID(&fid));
+ */
#define SFID "0x%llx:0x%x:0x%x"
#define RFID(fid) \
&((fid)->f_seq), \
@@ -566,9 +576,9 @@
/* hdr + MDT index */
#define LUSTRE_VOLATILE_IDX LUSTRE_VOLATILE_HDR":%.4X:"
-typedef enum lustre_quota_version {
+enum lustre_quota_version {
LUSTRE_QUOTA_V2 = 1
-} lustre_quota_version_t;
+};
/* XXX: same as if_dqinfo struct in kernel */
struct obd_dqinfo {
@@ -698,7 +708,8 @@
#define CLF_HSM_LAST 15
/* Remove bits higher than _h, then extract the value
- * between _h and _l by shifting lower weigth to bit 0. */
+ * between _h and _l by shifting lower weight to bit 0.
+ */
#define CLF_GET_BITS(_b, _h, _l) (((_b << (CLF_HSM_LAST - _h)) & 0xFFFF) \
>> (CLF_HSM_LAST - _h + _l))
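(A quick worked check of the bit-extraction macro above, with CLF_HSM_LAST
being 15; the input value is illustrative only.)

	/* bits 8..11 of 0x0F30: (0x0F30 << (15 - 11)) & 0xFFFF = 0xF300,
	 * then 0xF300 >> (15 - 11 + 8) = 0xF300 >> 12 = 0xF */
	BUILD_BUG_ON(CLF_GET_BITS(0x0F30, 11, 8) != 0xF);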
@@ -761,10 +772,10 @@
__u64 cr_prev; /**< last index for this target fid */
__u64 cr_time;
union {
- lustre_fid cr_tfid; /**< target fid */
+ struct lu_fid cr_tfid; /**< target fid */
__u32 cr_markerflags; /**< CL_MARK flags */
};
- lustre_fid cr_pfid; /**< parent fid */
+ struct lu_fid cr_pfid; /**< parent fid */
char cr_name[0]; /**< last element */
} __packed;
@@ -775,18 +786,19 @@
struct changelog_ext_rec {
__u16 cr_namelen;
__u16 cr_flags; /**< (flags & CLF_FLAGMASK) |
- CLF_EXT_VERSION */
+ * CLF_EXT_VERSION
+ */
__u32 cr_type; /**< \a changelog_rec_type */
__u64 cr_index; /**< changelog record number */
__u64 cr_prev; /**< last index for this target fid */
__u64 cr_time;
union {
- lustre_fid cr_tfid; /**< target fid */
+ struct lu_fid cr_tfid; /**< target fid */
__u32 cr_markerflags; /**< CL_MARK flags */
};
- lustre_fid cr_pfid; /**< target parent fid */
- lustre_fid cr_sfid; /**< source fid, or zero */
- lustre_fid cr_spfid; /**< source parent fid, or zero */
+ struct lu_fid cr_pfid; /**< target parent fid */
+ struct lu_fid cr_sfid; /**< source fid, or zero */
+ struct lu_fid cr_spfid; /**< source parent fid, or zero */
char cr_name[0]; /**< last element */
} __packed;
@@ -835,7 +847,8 @@
};
#define LL_DV_NOFLUSH 0x01 /* Do not take READ EXTENT LOCK before sampling
- version. Dirty caches are left unchanged. */
+ * version. Dirty caches are left unchanged.
+ */
#ifndef offsetof
# define offsetof(typ, memb) ((unsigned long)((char *)&(((typ *)0)->memb)))
@@ -976,8 +989,8 @@
};
struct hsm_user_item {
- lustre_fid hui_fid;
- struct hsm_extent hui_extent;
+ struct lu_fid hui_fid;
+ struct hsm_extent hui_extent;
} __packed;
struct hsm_user_request {
@@ -1046,8 +1059,8 @@
struct hsm_action_item {
__u32 hai_len; /* valid size of this struct */
__u32 hai_action; /* hsm_copytool_action, but use known size */
- lustre_fid hai_fid; /* Lustre FID to operated on */
- lustre_fid hai_dfid; /* fid used for data access */
+	struct lu_fid hai_fid;     /* Lustre FID to operate on */
+ struct lu_fid hai_dfid; /* fid used for data access */
struct hsm_extent hai_extent; /* byte range to operate on */
__u64 hai_cookie; /* action cookie from coordinator */
__u64 hai_gid; /* grouplock id */
@@ -1095,7 +1108,8 @@
__u32 padding1;
char hal_fsname[0]; /* null-terminated */
/* struct hsm_action_item[hal_count] follows, aligned on 8-byte
- boundaries. See hai_zero */
+ * boundaries. See hai_zero
+ */
} __packed;
#ifndef HAVE_CFS_SIZE_ROUND
@@ -1157,7 +1171,7 @@
#define HP_FLAG_RETRY 0x02
struct hsm_progress {
- lustre_fid hp_fid;
+ struct lu_fid hp_fid;
__u64 hp_cookie;
struct hsm_extent hp_extent;
__u16 hp_flags;
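(The DFID/SFID comments above translate into usage like the following
userspace sketch; it assumes these macros are available to the caller and
that DFID is the bracketed form of DFID_NOBRACE, and the fid value is made
up for illustration.)

	struct lu_fid fid = { .f_seq = 0x200000401ULL, .f_oid = 0x7, .f_ver = 0 };
	char buf[FID_LEN + 1];

	snprintf(buf, sizeof(buf), DFID, PFID(&fid)); /* "[0x200000401:0x7:0x0]" */
	sscanf(buf + 1, SFID, RFID(&fid));	/* strip the '[' first, as noted */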
diff --git a/drivers/staging/lustre/lustre/include/lustre_cfg.h b/drivers/staging/lustre/lustre/include/lustre_cfg.h
index d30d8b0..23ae832 100644
--- a/drivers/staging/lustre/lustre/include/lustre_cfg.h
+++ b/drivers/staging/lustre/lustre/include/lustre_cfg.h
@@ -55,7 +55,8 @@
/** If the LCFG_REQUIRED bit is set in a configuration command,
* then the client is required to understand this parameter
* in order to mount the filesystem. If it does not understand
- * a REQUIRED command the client mount will fail. */
+ * a REQUIRED command the client mount will fail.
+ */
#define LCFG_REQUIRED 0x0001000
enum lcfg_command_type {
@@ -87,9 +88,11 @@
LCFG_POOL_DEL = 0x00ce023, /**< destroy an ost pool name */
LCFG_SET_LDLM_TIMEOUT = 0x00ce030, /**< set ldlm_timeout */
LCFG_PRE_CLEANUP = 0x00cf031, /**< call type-specific pre
- * cleanup cleanup */
+					 * cleanup
+ */
LCFG_SET_PARAM = 0x00ce032, /**< use set_param syntax to set
- *a proc parameters */
+					 * a proc parameter
+ */
};
struct lustre_cfg_bufs {
@@ -128,7 +131,7 @@
{
if (index >= LUSTRE_CFG_MAX_BUFCOUNT)
return;
- if (bufs == NULL)
+ if (!bufs)
return;
if (bufs->lcfg_bufcount <= index)
@@ -158,7 +161,6 @@
int offset;
int bufcount;
- LASSERT (lcfg != NULL);
LASSERT (index >= 0);
bufcount = lcfg->lcfg_bufcount;
@@ -191,7 +193,7 @@
return NULL;
s = lustre_cfg_buf(lcfg, index);
- if (s == NULL)
+ if (!s)
return NULL;
/*
diff --git a/drivers/staging/lustre/lustre/include/lustre_disk.h b/drivers/staging/lustre/lustre/include/lustre_disk.h
index 7c6933f..95fd360 100644
--- a/drivers/staging/lustre/lustre/include/lustre_disk.h
+++ b/drivers/staging/lustre/lustre/include/lustre_disk.h
@@ -65,7 +65,8 @@
/****************** mount command *********************/
/* The lmd is only used internally by Lustre; mount simply passes
- everything as string options */
+ * everything as string options
+ */
#define LMD_MAGIC 0xbdacbd03
#define LMD_PARAMS_MAXLEN 4096
@@ -79,23 +80,26 @@
int lmd_recovery_time_soft;
int lmd_recovery_time_hard;
char *lmd_dev; /* device name */
- char *lmd_profile; /* client only */
+ char *lmd_profile; /* client only */
char *lmd_mgssec; /* sptlrpc flavor to mgs */
- char *lmd_opts; /* lustre mount options (as opposed to
- _device_ mount options) */
+ char *lmd_opts; /* lustre mount options (as opposed to
+ * _device_ mount options)
+ */
char *lmd_params; /* lustre params */
- __u32 *lmd_exclude; /* array of OSTs to ignore */
- char *lmd_mgs; /* MGS nid */
- char *lmd_osd_type; /* OSD type */
+ __u32 *lmd_exclude; /* array of OSTs to ignore */
+ char *lmd_mgs; /* MGS nid */
+ char *lmd_osd_type; /* OSD type */
};
#define LMD_FLG_SERVER 0x0001 /* Mounting a server */
#define LMD_FLG_CLIENT 0x0002 /* Mounting a client */
#define LMD_FLG_ABORT_RECOV 0x0008 /* Abort recovery */
#define LMD_FLG_NOSVC 0x0010 /* Only start MGS/MGC for servers,
- no other services */
-#define LMD_FLG_NOMGS 0x0020 /* Only start target for servers, reusing
- existing MGS services */
+ * no other services
+ */
+#define LMD_FLG_NOMGS 0x0020 /* Only start target for servers,
+ * reusing existing MGS services
+ */
#define LMD_FLG_WRITECONF 0x0040 /* Rewrite config log */
#define LMD_FLG_NOIR 0x0080 /* NO imperative recovery */
#define LMD_FLG_NOSCRUB 0x0100 /* Do not trigger scrub automatically */
@@ -116,231 +120,6 @@
#define LR_EXPIRE_INTERVALS 16 /**< number of intervals to track transno */
#define ENOENT_VERSION 1 /** 'virtual' version of non-existent object */
-#define LR_SERVER_SIZE 512
-#define LR_CLIENT_START 8192
-#define LR_CLIENT_SIZE 128
-#if LR_CLIENT_START < LR_SERVER_SIZE
-#error "Can't have LR_CLIENT_START < LR_SERVER_SIZE"
-#endif
-
-/*
- * This limit is arbitrary (131072 clients on x86), but it is convenient to use
- * 2^n * PAGE_CACHE_SIZE * 8 for the number of bits that fit an order-n allocation.
- * If we need more than 131072 clients (order-2 allocation on x86) then this
- * should become an array of single-page pointers that are allocated on demand.
- */
-#if (128 * 1024UL) > (PAGE_CACHE_SIZE * 8)
-#define LR_MAX_CLIENTS (128 * 1024UL)
-#else
-#define LR_MAX_CLIENTS (PAGE_CACHE_SIZE * 8)
-#endif
-
-/** COMPAT_146: this is an OST (temporary) */
-#define OBD_COMPAT_OST 0x00000002
-/** COMPAT_146: this is an MDT (temporary) */
-#define OBD_COMPAT_MDT 0x00000004
-/** 2.0 server, interop flag to show server version is changed */
-#define OBD_COMPAT_20 0x00000008
-
-/** MDS handles LOV_OBJID file */
-#define OBD_ROCOMPAT_LOVOBJID 0x00000001
-
-/** OST handles group subdirs */
-#define OBD_INCOMPAT_GROUPS 0x00000001
-/** this is an OST */
-#define OBD_INCOMPAT_OST 0x00000002
-/** this is an MDT */
-#define OBD_INCOMPAT_MDT 0x00000004
-/** common last_rvcd format */
-#define OBD_INCOMPAT_COMMON_LR 0x00000008
-/** FID is enabled */
-#define OBD_INCOMPAT_FID 0x00000010
-/** Size-on-MDS is enabled */
-#define OBD_INCOMPAT_SOM 0x00000020
-/** filesystem using iam format to store directory entries */
-#define OBD_INCOMPAT_IAM_DIR 0x00000040
-/** LMA attribute contains per-inode incompatible flags */
-#define OBD_INCOMPAT_LMA 0x00000080
-/** lmm_stripe_count has been shrunk from __u32 to __u16 and the remaining 16
- * bits are now used to store a generation. Once we start changing the layout
- * and bumping the generation, old versions expecting a 32-bit lmm_stripe_count
- * will be confused by interpreting stripe_count | gen << 16 as the actual
- * stripe count */
-#define OBD_INCOMPAT_LMM_VER 0x00000100
-/** multiple OI files for MDT */
-#define OBD_INCOMPAT_MULTI_OI 0x00000200
-
-/* Data stored per server at the head of the last_rcvd file. In le32 order.
- This should be common to filter_internal.h, lustre_mds.h */
-struct lr_server_data {
- __u8 lsd_uuid[40]; /* server UUID */
- __u64 lsd_last_transno; /* last completed transaction ID */
- __u64 lsd_compat14; /* reserved - compat with old last_rcvd */
- __u64 lsd_mount_count; /* incarnation number */
- __u32 lsd_feature_compat; /* compatible feature flags */
- __u32 lsd_feature_rocompat;/* read-only compatible feature flags */
- __u32 lsd_feature_incompat;/* incompatible feature flags */
- __u32 lsd_server_size; /* size of server data area */
- __u32 lsd_client_start; /* start of per-client data area */
- __u16 lsd_client_size; /* size of per-client data area */
- __u16 lsd_subdir_count; /* number of subdirectories for objects */
- __u64 lsd_catalog_oid; /* recovery catalog object id */
- __u32 lsd_catalog_ogen; /* recovery catalog inode generation */
- __u8 lsd_peeruuid[40]; /* UUID of MDS associated with this OST */
- __u32 lsd_osd_index; /* index number of OST in LOV */
- __u32 lsd_padding1; /* was lsd_mdt_index, unused in 2.4.0 */
- __u32 lsd_start_epoch; /* VBR: start epoch from last boot */
- /** transaction values since lsd_trans_table_time */
- __u64 lsd_trans_table[LR_EXPIRE_INTERVALS];
- /** start point of transno table below */
- __u32 lsd_trans_table_time; /* time of first slot in table above */
- __u32 lsd_expire_intervals; /* LR_EXPIRE_INTERVALS */
- __u8 lsd_padding[LR_SERVER_SIZE - 288];
-};
-
-/* Data stored per client in the last_rcvd file. In le32 order. */
-struct lsd_client_data {
- __u8 lcd_uuid[40]; /* client UUID */
- __u64 lcd_last_transno; /* last completed transaction ID */
- __u64 lcd_last_xid; /* xid for the last transaction */
- __u32 lcd_last_result; /* result from last RPC */
- __u32 lcd_last_data; /* per-op data (disposition for open &c.) */
- /* for MDS_CLOSE requests */
- __u64 lcd_last_close_transno; /* last completed transaction ID */
- __u64 lcd_last_close_xid; /* xid for the last transaction */
- __u32 lcd_last_close_result; /* result from last RPC */
- __u32 lcd_last_close_data; /* per-op data */
- /* VBR: last versions */
- __u64 lcd_pre_versions[4];
- __u32 lcd_last_epoch;
- /** orphans handling for delayed export rely on that */
- __u32 lcd_first_epoch;
- __u8 lcd_padding[LR_CLIENT_SIZE - 128];
-};
-
-/* bug20354: the lcd_uuid for export of clients may be wrong */
-static inline void check_lcd(char *obd_name, int index,
- struct lsd_client_data *lcd)
-{
- int length = sizeof(lcd->lcd_uuid);
-
- if (strnlen((char *)lcd->lcd_uuid, length) == length) {
- lcd->lcd_uuid[length - 1] = '\0';
-
- LCONSOLE_ERROR("the client UUID (%s) on %s for exports stored in last_rcvd(index = %d) is bad!\n",
- lcd->lcd_uuid, obd_name, index);
- }
-}
-
-/* last_rcvd handling */
-static inline void lsd_le_to_cpu(struct lr_server_data *buf,
- struct lr_server_data *lsd)
-{
- int i;
-
- memcpy(lsd->lsd_uuid, buf->lsd_uuid, sizeof(lsd->lsd_uuid));
- lsd->lsd_last_transno = le64_to_cpu(buf->lsd_last_transno);
- lsd->lsd_compat14 = le64_to_cpu(buf->lsd_compat14);
- lsd->lsd_mount_count = le64_to_cpu(buf->lsd_mount_count);
- lsd->lsd_feature_compat = le32_to_cpu(buf->lsd_feature_compat);
- lsd->lsd_feature_rocompat = le32_to_cpu(buf->lsd_feature_rocompat);
- lsd->lsd_feature_incompat = le32_to_cpu(buf->lsd_feature_incompat);
- lsd->lsd_server_size = le32_to_cpu(buf->lsd_server_size);
- lsd->lsd_client_start = le32_to_cpu(buf->lsd_client_start);
- lsd->lsd_client_size = le16_to_cpu(buf->lsd_client_size);
- lsd->lsd_subdir_count = le16_to_cpu(buf->lsd_subdir_count);
- lsd->lsd_catalog_oid = le64_to_cpu(buf->lsd_catalog_oid);
- lsd->lsd_catalog_ogen = le32_to_cpu(buf->lsd_catalog_ogen);
- memcpy(lsd->lsd_peeruuid, buf->lsd_peeruuid, sizeof(lsd->lsd_peeruuid));
- lsd->lsd_osd_index = le32_to_cpu(buf->lsd_osd_index);
- lsd->lsd_padding1 = le32_to_cpu(buf->lsd_padding1);
- lsd->lsd_start_epoch = le32_to_cpu(buf->lsd_start_epoch);
- for (i = 0; i < LR_EXPIRE_INTERVALS; i++)
- lsd->lsd_trans_table[i] = le64_to_cpu(buf->lsd_trans_table[i]);
- lsd->lsd_trans_table_time = le32_to_cpu(buf->lsd_trans_table_time);
- lsd->lsd_expire_intervals = le32_to_cpu(buf->lsd_expire_intervals);
-}
-
-static inline void lsd_cpu_to_le(struct lr_server_data *lsd,
- struct lr_server_data *buf)
-{
- int i;
-
- memcpy(buf->lsd_uuid, lsd->lsd_uuid, sizeof(buf->lsd_uuid));
- buf->lsd_last_transno = cpu_to_le64(lsd->lsd_last_transno);
- buf->lsd_compat14 = cpu_to_le64(lsd->lsd_compat14);
- buf->lsd_mount_count = cpu_to_le64(lsd->lsd_mount_count);
- buf->lsd_feature_compat = cpu_to_le32(lsd->lsd_feature_compat);
- buf->lsd_feature_rocompat = cpu_to_le32(lsd->lsd_feature_rocompat);
- buf->lsd_feature_incompat = cpu_to_le32(lsd->lsd_feature_incompat);
- buf->lsd_server_size = cpu_to_le32(lsd->lsd_server_size);
- buf->lsd_client_start = cpu_to_le32(lsd->lsd_client_start);
- buf->lsd_client_size = cpu_to_le16(lsd->lsd_client_size);
- buf->lsd_subdir_count = cpu_to_le16(lsd->lsd_subdir_count);
- buf->lsd_catalog_oid = cpu_to_le64(lsd->lsd_catalog_oid);
- buf->lsd_catalog_ogen = cpu_to_le32(lsd->lsd_catalog_ogen);
- memcpy(buf->lsd_peeruuid, lsd->lsd_peeruuid, sizeof(buf->lsd_peeruuid));
- buf->lsd_osd_index = cpu_to_le32(lsd->lsd_osd_index);
- buf->lsd_padding1 = cpu_to_le32(lsd->lsd_padding1);
- buf->lsd_start_epoch = cpu_to_le32(lsd->lsd_start_epoch);
- for (i = 0; i < LR_EXPIRE_INTERVALS; i++)
- buf->lsd_trans_table[i] = cpu_to_le64(lsd->lsd_trans_table[i]);
- buf->lsd_trans_table_time = cpu_to_le32(lsd->lsd_trans_table_time);
- buf->lsd_expire_intervals = cpu_to_le32(lsd->lsd_expire_intervals);
-}
-
-static inline void lcd_le_to_cpu(struct lsd_client_data *buf,
- struct lsd_client_data *lcd)
-{
- memcpy(lcd->lcd_uuid, buf->lcd_uuid, sizeof (lcd->lcd_uuid));
- lcd->lcd_last_transno = le64_to_cpu(buf->lcd_last_transno);
- lcd->lcd_last_xid = le64_to_cpu(buf->lcd_last_xid);
- lcd->lcd_last_result = le32_to_cpu(buf->lcd_last_result);
- lcd->lcd_last_data = le32_to_cpu(buf->lcd_last_data);
- lcd->lcd_last_close_transno = le64_to_cpu(buf->lcd_last_close_transno);
- lcd->lcd_last_close_xid = le64_to_cpu(buf->lcd_last_close_xid);
- lcd->lcd_last_close_result = le32_to_cpu(buf->lcd_last_close_result);
- lcd->lcd_last_close_data = le32_to_cpu(buf->lcd_last_close_data);
- lcd->lcd_pre_versions[0] = le64_to_cpu(buf->lcd_pre_versions[0]);
- lcd->lcd_pre_versions[1] = le64_to_cpu(buf->lcd_pre_versions[1]);
- lcd->lcd_pre_versions[2] = le64_to_cpu(buf->lcd_pre_versions[2]);
- lcd->lcd_pre_versions[3] = le64_to_cpu(buf->lcd_pre_versions[3]);
- lcd->lcd_last_epoch = le32_to_cpu(buf->lcd_last_epoch);
- lcd->lcd_first_epoch = le32_to_cpu(buf->lcd_first_epoch);
-}
-
-static inline void lcd_cpu_to_le(struct lsd_client_data *lcd,
- struct lsd_client_data *buf)
-{
- memcpy(buf->lcd_uuid, lcd->lcd_uuid, sizeof (lcd->lcd_uuid));
- buf->lcd_last_transno = cpu_to_le64(lcd->lcd_last_transno);
- buf->lcd_last_xid = cpu_to_le64(lcd->lcd_last_xid);
- buf->lcd_last_result = cpu_to_le32(lcd->lcd_last_result);
- buf->lcd_last_data = cpu_to_le32(lcd->lcd_last_data);
- buf->lcd_last_close_transno = cpu_to_le64(lcd->lcd_last_close_transno);
- buf->lcd_last_close_xid = cpu_to_le64(lcd->lcd_last_close_xid);
- buf->lcd_last_close_result = cpu_to_le32(lcd->lcd_last_close_result);
- buf->lcd_last_close_data = cpu_to_le32(lcd->lcd_last_close_data);
- buf->lcd_pre_versions[0] = cpu_to_le64(lcd->lcd_pre_versions[0]);
- buf->lcd_pre_versions[1] = cpu_to_le64(lcd->lcd_pre_versions[1]);
- buf->lcd_pre_versions[2] = cpu_to_le64(lcd->lcd_pre_versions[2]);
- buf->lcd_pre_versions[3] = cpu_to_le64(lcd->lcd_pre_versions[3]);
- buf->lcd_last_epoch = cpu_to_le32(lcd->lcd_last_epoch);
- buf->lcd_first_epoch = cpu_to_le32(lcd->lcd_first_epoch);
-}
-
-static inline __u64 lcd_last_transno(struct lsd_client_data *lcd)
-{
- return (lcd->lcd_last_transno > lcd->lcd_last_close_transno ?
- lcd->lcd_last_transno : lcd->lcd_last_close_transno);
-}
-
-static inline __u64 lcd_last_xid(struct lsd_client_data *lcd)
-{
- return (lcd->lcd_last_xid > lcd->lcd_last_close_xid ?
- lcd->lcd_last_xid : lcd->lcd_last_close_xid);
-}
-
/****************** superblock additional info *********************/
struct ll_sb_info;
@@ -360,7 +139,8 @@
char lsi_osd_type[16];
char lsi_fstype[16];
struct backing_dev_info lsi_bdi; /* each client mountpoint needs
- own backing_dev_info */
+ * own backing_dev_info
+ */
};
#define LSI_UMOUNT_FAILOVER 0x00200000
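(The removed LR_MAX_CLIENTS sizing checks out arithmetically; a minimal
compile-time restatement, assuming x86 with 4 KiB pages as the removed
comment did.)

	/* order-2 allocation = 4 pages = 4 * 4096 bytes = 16384 bytes,
	 * and 16384 * 8 = 131072 bits, i.e. the 128 * 1024UL client limit */
	BUILD_BUG_ON((1UL << 2) * 4096UL * 8 != 128 * 1024UL);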
diff --git a/drivers/staging/lustre/lustre/include/lustre_dlm.h b/drivers/staging/lustre/lustre/include/lustre_dlm.h
index 9b319f1..144b5af 100644
--- a/drivers/staging/lustre/lustre/include/lustre_dlm.h
+++ b/drivers/staging/lustre/lustre/include/lustre_dlm.h
@@ -69,7 +69,7 @@
/**
* LDLM non-error return states
*/
-typedef enum {
+enum ldlm_error {
ELDLM_OK = 0,
ELDLM_LOCK_CHANGED = 300,
@@ -80,7 +80,7 @@
ELDLM_NAMESPACE_EXISTS = 400,
ELDLM_BAD_NAMESPACE = 401
-} ldlm_error_t;
+};
/**
* LDLM namespace type.
@@ -145,14 +145,15 @@
#define LCK_COMPAT_COS (LCK_COS)
/** @} Lock Compatibility Matrix */
-extern ldlm_mode_t lck_compat_array[];
+extern enum ldlm_mode lck_compat_array[];
-static inline void lockmode_verify(ldlm_mode_t mode)
+static inline void lockmode_verify(enum ldlm_mode mode)
{
LASSERT(mode > LCK_MINMODE && mode < LCK_MAXMODE);
}
-static inline int lockmode_compat(ldlm_mode_t exist_mode, ldlm_mode_t new_mode)
+static inline int lockmode_compat(enum ldlm_mode exist_mode,
+ enum ldlm_mode new_mode)
{
return (lck_compat_array[exist_mode] & new_mode);
}
@@ -249,7 +250,8 @@
/** Current biggest client lock volume. Protected by pl_lock. */
__u64 pl_client_lock_volume;
/** Lock volume factor. SLV on client is calculated as following:
- * server_slv * lock_volume_factor. */
+ * server_slv * lock_volume_factor.
+ */
atomic_t pl_lock_volume_factor;
/** Time when last SLV from server was obtained. */
time64_t pl_recalc_time;
@@ -295,10 +297,10 @@
* LDLM pools related, type of lock pool in the namespace.
* Greedy means release cached locks aggressively
*/
-typedef enum {
+enum ldlm_appetite {
LDLM_NAMESPACE_GREEDY = 1 << 0,
LDLM_NAMESPACE_MODEST = 1 << 1
-} ldlm_appetite_t;
+};
struct ldlm_ns_bucket {
/** back pointer to namespace */
@@ -317,7 +319,7 @@
LDLM_NSS_LAST
};
-typedef enum {
+enum ldlm_ns_type {
/** invalid type */
LDLM_NS_TYPE_UNKNOWN = 0,
/** mdc namespace */
@@ -332,7 +334,7 @@
LDLM_NS_TYPE_MGC,
/** mgs namespace */
LDLM_NS_TYPE_MGT,
-} ldlm_ns_type_t;
+};
/**
* LDLM Namespace.
@@ -373,7 +375,7 @@
/**
* Namespace connect flags supported by server (may be changed via
- * /proc, LRU resize may be disabled/enabled).
+ * sysfs, LRU resize may be disabled/enabled).
*/
__u64 ns_connect_flags;
@@ -439,7 +441,7 @@
/** LDLM pool structure for this namespace */
struct ldlm_pool ns_pool;
/** Definition of how eagerly unused locks will be released from LRU */
- ldlm_appetite_t ns_appetite;
+ enum ldlm_appetite ns_appetite;
/** Limit of parallel AST RPC count. */
unsigned ns_max_parallel_ast;
@@ -465,7 +467,6 @@
*/
static inline int ns_connect_cancelset(struct ldlm_namespace *ns)
{
- LASSERT(ns != NULL);
return !!(ns->ns_connect_flags & OBD_CONNECT_CANCELSET);
}
@@ -474,14 +475,12 @@
*/
static inline int ns_connect_lru_resize(struct ldlm_namespace *ns)
{
- LASSERT(ns != NULL);
return !!(ns->ns_connect_flags & OBD_CONNECT_LRU_RESIZE);
}
static inline void ns_register_cancel(struct ldlm_namespace *ns,
ldlm_cancel_for_recovery arg)
{
- LASSERT(ns != NULL);
ns->ns_cancel_for_recovery = arg;
}
@@ -503,7 +502,8 @@
struct list_head gl_list; /* linkage to other gl work structs */
__u32 gl_flags;/* see LDLM_GL_WORK_* below */
union ldlm_gl_desc *gl_desc; /* glimpse descriptor to be packed in
- * glimpse callback request */
+ * glimpse callback request
+ */
};
/** The ldlm_glimpse_work is allocated on the stack and should not be freed. */
@@ -512,8 +512,9 @@
/** Interval node data for each LDLM_EXTENT lock. */
struct ldlm_interval {
struct interval_node li_node; /* node for tree management */
- struct list_head li_group; /* the locks which have the same
- * policy - group of the policy */
+ struct list_head li_group; /* the locks which have the same
+ * policy - group of the policy
+ */
};
#define to_ldlm_interval(n) container_of(n, struct ldlm_interval, li_node)
@@ -527,7 +528,7 @@
struct ldlm_interval_tree {
/** Tree size. */
int lit_size;
- ldlm_mode_t lit_mode; /* lock mode */
+ enum ldlm_mode lit_mode; /* lock mode */
struct interval_node *lit_root; /* actual ldlm_interval */
};
@@ -535,12 +536,13 @@
#define LUSTRE_TRACKS_LOCK_EXP_REFS (0)
/** Cancel flags. */
-typedef enum {
+enum ldlm_cancel_flags {
LCF_ASYNC = 0x1, /* Cancel locks asynchronously. */
	LCF_LOCAL      = 0x2, /* Cancel locks locally, not notifying server */
LCF_BL_AST = 0x4, /* Cancel locks marked as LDLM_FL_BL_AST
- * in the same RPC */
-} ldlm_cancel_flags_t;
+ * in the same RPC
+ */
+};
struct ldlm_flock {
__u64 start;
@@ -559,7 +561,7 @@
struct ldlm_inodebits l_inodebits;
} ldlm_policy_data_t;
-void ldlm_convert_policy_to_local(struct obd_export *exp, ldlm_type_t type,
+void ldlm_convert_policy_to_local(struct obd_export *exp, enum ldlm_type type,
const ldlm_wire_policy_data_t *wpolicy,
ldlm_policy_data_t *lpolicy);
@@ -637,11 +639,11 @@
* Requested mode.
* Protected by lr_lock.
*/
- ldlm_mode_t l_req_mode;
+ enum ldlm_mode l_req_mode;
/**
* Granted mode, also protected by lr_lock.
*/
- ldlm_mode_t l_granted_mode;
+ enum ldlm_mode l_granted_mode;
/** Lock completion handler pointer. Called when lock is granted. */
ldlm_completion_callback l_completion_ast;
/**
@@ -841,20 +843,19 @@
/**
* protected by lr_lock
- * @{ */
+ * @{
+ */
/** List of locks in granted state */
struct list_head lr_granted;
/**
* List of locks that could not be granted due to conflicts and
- * that are waiting for conflicts to go away */
+ * that are waiting for conflicts to go away
+ */
struct list_head lr_waiting;
/** @} */
- /* XXX No longer needed? Remove ASAP */
- ldlm_mode_t lr_most_restr;
-
/** Type of locks this resource can hold. Only one type per resource. */
- ldlm_type_t lr_type; /* LDLM_{PLAIN,EXTENT,FLOCK,IBITS} */
+ enum ldlm_type lr_type; /* LDLM_{PLAIN,EXTENT,FLOCK,IBITS} */
/** Resource name */
struct ldlm_res_id lr_name;
@@ -921,7 +922,7 @@
{
struct ldlm_namespace *ns = ldlm_res_to_ns(res);
- if (ns->ns_lvbo != NULL && ns->ns_lvbo->lvbo_init != NULL)
+ if (ns->ns_lvbo && ns->ns_lvbo->lvbo_init)
return ns->ns_lvbo->lvbo_init(res);
return 0;
@@ -931,7 +932,7 @@
{
struct ldlm_namespace *ns = ldlm_lock_to_ns(lock);
- if (ns->ns_lvbo != NULL && ns->ns_lvbo->lvbo_size != NULL)
+ if (ns->ns_lvbo && ns->ns_lvbo->lvbo_size)
return ns->ns_lvbo->lvbo_size(lock);
return 0;
@@ -941,10 +942,9 @@
{
struct ldlm_namespace *ns = ldlm_lock_to_ns(lock);
- if (ns->ns_lvbo != NULL) {
- LASSERT(ns->ns_lvbo->lvbo_fill != NULL);
+ if (ns->ns_lvbo)
return ns->ns_lvbo->lvbo_fill(lock, buf, len);
- }
+
return 0;
}
@@ -1015,7 +1015,7 @@
/** Non-rate-limited lock printing function for debugging purposes. */
#define LDLM_DEBUG(lock, fmt, a...) do { \
- if (likely(lock != NULL)) { \
+ if (likely(lock)) { \
LIBCFS_DEBUG_MSG_DATA_DECL(msgdata, D_DLMTRACE, NULL); \
ldlm_lock_debug(&msgdata, D_DLMTRACE, NULL, lock, \
"### " fmt, ##a); \
@@ -1025,7 +1025,7 @@
} while (0)
typedef int (*ldlm_processing_policy)(struct ldlm_lock *lock, __u64 *flags,
- int first_enq, ldlm_error_t *err,
+ int first_enq, enum ldlm_error *err,
struct list_head *work_list);
/**
@@ -1042,7 +1042,8 @@
*
* LDLM provides for a way to iterate through every lock on a resource or
* namespace or every resource in a namespace.
- * @{ */
+ * @{
+ */
int ldlm_resource_iterate(struct ldlm_namespace *, const struct ldlm_res_id *,
ldlm_iterator_t iter, void *data);
/** @} ldlm_iterator */
@@ -1091,7 +1092,7 @@
struct ldlm_lock *lock;
lock = __ldlm_handle2lock(h, flags);
- if (lock != NULL)
+ if (lock)
LDLM_LOCK_REF_DEL(lock);
return lock;
}
@@ -1111,7 +1112,7 @@
return 0;
}
-int ldlm_error2errno(ldlm_error_t error);
+int ldlm_error2errno(enum ldlm_error error);
#if LUSTRE_TRACKS_LOCK_EXP_REFS
void ldlm_dump_export_locks(struct obd_export *exp);
@@ -1168,12 +1169,13 @@
void ldlm_lock_fail_match_locked(struct ldlm_lock *lock);
void ldlm_lock_allow_match(struct ldlm_lock *lock);
void ldlm_lock_allow_match_locked(struct ldlm_lock *lock);
-ldlm_mode_t ldlm_lock_match(struct ldlm_namespace *ns, __u64 flags,
- const struct ldlm_res_id *, ldlm_type_t type,
- ldlm_policy_data_t *, ldlm_mode_t mode,
- struct lustre_handle *, int unref);
-ldlm_mode_t ldlm_revalidate_lock_handle(struct lustre_handle *lockh,
- __u64 *bits);
+enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, __u64 flags,
+ const struct ldlm_res_id *,
+ enum ldlm_type type, ldlm_policy_data_t *,
+ enum ldlm_mode mode, struct lustre_handle *,
+ int unref);
+enum ldlm_mode ldlm_revalidate_lock_handle(struct lustre_handle *lockh,
+ __u64 *bits);
void ldlm_lock_cancel(struct ldlm_lock *lock);
void ldlm_lock_dump_handle(int level, struct lustre_handle *);
void ldlm_unlink_lock_skiplist(struct ldlm_lock *req);
@@ -1181,8 +1183,8 @@
/* resource.c */
struct ldlm_namespace *
ldlm_namespace_new(struct obd_device *obd, char *name,
- ldlm_side_t client, ldlm_appetite_t apt,
- ldlm_ns_type_t ns_type);
+ ldlm_side_t client, enum ldlm_appetite apt,
+ enum ldlm_ns_type ns_type);
int ldlm_namespace_cleanup(struct ldlm_namespace *ns, __u64 flags);
void ldlm_namespace_get(struct ldlm_namespace *ns);
void ldlm_namespace_put(struct ldlm_namespace *ns);
@@ -1193,7 +1195,7 @@
struct ldlm_resource *ldlm_resource_get(struct ldlm_namespace *ns,
struct ldlm_resource *parent,
const struct ldlm_res_id *,
- ldlm_type_t type, int create);
+ enum ldlm_type type, int create);
int ldlm_resource_putref(struct ldlm_resource *res);
void ldlm_resource_add_lock(struct ldlm_resource *res,
struct list_head *head,
@@ -1219,7 +1221,8 @@
* These AST handlers are typically used for server-side local locks and are
* also used by client-side lock handlers to perform minimum level base
* processing.
- * @{ */
+ * @{
+ */
int ldlm_completion_ast_async(struct ldlm_lock *lock, __u64 flags, void *data);
int ldlm_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data);
/** @} ldlm_local_ast */
@@ -1227,7 +1230,8 @@
/** \defgroup ldlm_cli_api API to operate on locks from actual LDLM users.
* These are typically used by client and server (*_local versions)
* to obtain and release locks.
- * @{ */
+ * @{
+ */
int ldlm_cli_enqueue(struct obd_export *exp, struct ptlrpc_request **reqp,
struct ldlm_enqueue_info *einfo,
const struct ldlm_res_id *res_id,
@@ -1244,29 +1248,32 @@
struct list_head *cancels, int count);
int ldlm_cli_enqueue_fini(struct obd_export *exp, struct ptlrpc_request *req,
- ldlm_type_t type, __u8 with_policy, ldlm_mode_t mode,
+ enum ldlm_type type, __u8 with_policy,
+ enum ldlm_mode mode,
__u64 *flags, void *lvb, __u32 lvb_len,
struct lustre_handle *lockh, int rc);
int ldlm_cli_update_pool(struct ptlrpc_request *req);
int ldlm_cli_cancel(struct lustre_handle *lockh,
- ldlm_cancel_flags_t cancel_flags);
+ enum ldlm_cancel_flags cancel_flags);
int ldlm_cli_cancel_unused(struct ldlm_namespace *, const struct ldlm_res_id *,
- ldlm_cancel_flags_t flags, void *opaque);
+ enum ldlm_cancel_flags flags, void *opaque);
int ldlm_cli_cancel_unused_resource(struct ldlm_namespace *ns,
const struct ldlm_res_id *res_id,
ldlm_policy_data_t *policy,
- ldlm_mode_t mode,
- ldlm_cancel_flags_t flags,
+ enum ldlm_mode mode,
+ enum ldlm_cancel_flags flags,
void *opaque);
int ldlm_cancel_resource_local(struct ldlm_resource *res,
struct list_head *cancels,
ldlm_policy_data_t *policy,
- ldlm_mode_t mode, __u64 lock_flags,
- ldlm_cancel_flags_t cancel_flags, void *opaque);
+ enum ldlm_mode mode, __u64 lock_flags,
+ enum ldlm_cancel_flags cancel_flags,
+ void *opaque);
int ldlm_cli_cancel_list_local(struct list_head *cancels, int count,
- ldlm_cancel_flags_t flags);
+ enum ldlm_cancel_flags flags);
int ldlm_cli_cancel_list(struct list_head *head, int count,
- struct ptlrpc_request *req, ldlm_cancel_flags_t flags);
+ struct ptlrpc_request *req,
+ enum ldlm_cancel_flags flags);
/** @} ldlm_cli_api */
/* mds/handler.c */
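(Most hunks in this header apply the same conversion: drop the *_t typedef
and spell the tagged enum out at every use. In miniature, using the
ldlm_error case from above.)

	/* before: typedef enum { ELDLM_OK = 0, ... } ldlm_error_t; */
	enum ldlm_error {
		ELDLM_OK = 0,
		ELDLM_LOCK_CHANGED = 300,
	};

	/* after: prototypes and locals name the enum directly */
	int ldlm_error2errno(enum ldlm_error error);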
diff --git a/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h b/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
index 0d3ed87..7f2ba2f 100644
--- a/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
+++ b/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
@@ -57,7 +57,8 @@
/**
* Server placed lock on granted list, or a recovering client wants the
- * lock added to the granted list, no questions asked. */
+ * lock added to the granted list, no questions asked.
+ */
#define LDLM_FL_BLOCK_GRANTED 0x0000000000000002ULL /* bit 1 */
#define ldlm_is_block_granted(_l) LDLM_TEST_FLAG((_l), 1ULL << 1)
#define ldlm_set_block_granted(_l) LDLM_SET_FLAG((_l), 1ULL << 1)
@@ -65,7 +66,8 @@
/**
* Server placed lock on conv list, or a recovering client wants the lock
- * added to the conv list, no questions asked. */
+ * added to the conv list, no questions asked.
+ */
#define LDLM_FL_BLOCK_CONV 0x0000000000000004ULL /* bit 2 */
#define ldlm_is_block_conv(_l) LDLM_TEST_FLAG((_l), 1ULL << 2)
#define ldlm_set_block_conv(_l) LDLM_SET_FLAG((_l), 1ULL << 2)
@@ -73,7 +75,8 @@
/**
* Server placed lock on wait list, or a recovering client wants the lock
- * added to the wait list, no questions asked. */
+ * added to the wait list, no questions asked.
+ */
#define LDLM_FL_BLOCK_WAIT 0x0000000000000008ULL /* bit 3 */
#define ldlm_is_block_wait(_l) LDLM_TEST_FLAG((_l), 1ULL << 3)
#define ldlm_set_block_wait(_l) LDLM_SET_FLAG((_l), 1ULL << 3)
@@ -87,7 +90,8 @@
/**
* Lock is being replayed. This could probably be implied by the fact that
- * one of BLOCK_{GRANTED,CONV,WAIT} is set, but that is pretty dangerous. */
+ * one of BLOCK_{GRANTED,CONV,WAIT} is set, but that is pretty dangerous.
+ */
#define LDLM_FL_REPLAY 0x0000000000000100ULL /* bit 8 */
#define ldlm_is_replay(_l) LDLM_TEST_FLAG((_l), 1ULL << 8)
#define ldlm_set_replay(_l) LDLM_SET_FLAG((_l), 1ULL << 8)
@@ -125,7 +129,8 @@
/**
* Server told not to wait if blocked. For AGL, OST will not send glimpse
- * callback. */
+ * callback.
+ */
#define LDLM_FL_BLOCK_NOWAIT 0x0000000000040000ULL /* bit 18 */
#define ldlm_is_block_nowait(_l) LDLM_TEST_FLAG((_l), 1ULL << 18)
#define ldlm_set_block_nowait(_l) LDLM_SET_FLAG((_l), 1ULL << 18)
@@ -141,7 +146,8 @@
* Immediately cancel such locks when they block some other locks. Send
* cancel notification to original lock holder, but expect no reply. This
* is for clients (like liblustre) that cannot be expected to reliably
- * response to blocking AST. */
+ * respond to blocking AST.
+ */
#define LDLM_FL_CANCEL_ON_BLOCK 0x0000000000800000ULL /* bit 23 */
#define ldlm_is_cancel_on_block(_l) LDLM_TEST_FLAG((_l), 1ULL << 23)
#define ldlm_set_cancel_on_block(_l) LDLM_SET_FLAG((_l), 1ULL << 23)
@@ -164,7 +170,8 @@
/**
* Used for marking lock as a target for -EINTR while cp_ast sleep emulation
- * + race with upcoming bl_ast. */
+ * + race with upcoming bl_ast.
+ */
#define LDLM_FL_FAIL_LOC 0x0000000100000000ULL /* bit 32 */
#define ldlm_is_fail_loc(_l) LDLM_TEST_FLAG((_l), 1ULL << 32)
#define ldlm_set_fail_loc(_l) LDLM_SET_FLAG((_l), 1ULL << 32)
@@ -172,7 +179,8 @@
/**
* Used while processing the unused list to know that we have already
- * handled this lock and decided to skip it. */
+ * handled this lock and decided to skip it.
+ */
#define LDLM_FL_SKIPPED 0x0000000200000000ULL /* bit 33 */
#define ldlm_is_skipped(_l) LDLM_TEST_FLAG((_l), 1ULL << 33)
#define ldlm_set_skipped(_l) LDLM_SET_FLAG((_l), 1ULL << 33)
@@ -231,7 +239,8 @@
* The proper fix is to do the granting inside of the completion AST,
* which can be replaced with a LVB-aware wrapping function for OSC locks.
* That change is pretty high-risk, though, and would need a lot more
- * testing. */
+ * testing.
+ */
#define LDLM_FL_LVB_READY 0x0000020000000000ULL /* bit 41 */
#define ldlm_is_lvb_ready(_l) LDLM_TEST_FLAG((_l), 1ULL << 41)
#define ldlm_set_lvb_ready(_l) LDLM_SET_FLAG((_l), 1ULL << 41)
@@ -243,7 +252,8 @@
* dirty pages. It can remain on the granted list during this whole time.
* Threads racing to update the KMS after performing their writeback need
* to know to exclude each other's locks from the calculation as they walk
- * the granted list. */
+ * the granted list.
+ */
#define LDLM_FL_KMS_IGNORE 0x0000040000000000ULL /* bit 42 */
#define ldlm_is_kms_ignore(_l) LDLM_TEST_FLAG((_l), 1ULL << 42)
#define ldlm_set_kms_ignore(_l) LDLM_SET_FLAG((_l), 1ULL << 42)
@@ -263,7 +273,8 @@
/**
* optimization hint: LDLM can run blocking callback from current context
- * w/o involving separate thread. in order to decrease cs rate */
+ * w/o involving a separate thread, in order to decrease cs rate
+ */
#define LDLM_FL_ATOMIC_CB 0x0000200000000000ULL /* bit 45 */
#define ldlm_is_atomic_cb(_l) LDLM_TEST_FLAG((_l), 1ULL << 45)
#define ldlm_set_atomic_cb(_l) LDLM_SET_FLAG((_l), 1ULL << 45)
@@ -280,7 +291,8 @@
* LDLM_FL_BL_DONE is to be set by ldlm_cancel_callback() when lock cache is
* dropped to let ldlm_callback_handler() return EINVAL to the server. It
* is used when ELC RPC is already prepared and is waiting for rpc_lock,
- * too late to send a separate CANCEL RPC. */
+ * too late to send a separate CANCEL RPC.
+ */
#define LDLM_FL_BL_AST 0x0000400000000000ULL /* bit 46 */
#define ldlm_is_bl_ast(_l) LDLM_TEST_FLAG((_l), 1ULL << 46)
#define ldlm_set_bl_ast(_l) LDLM_SET_FLAG((_l), 1ULL << 46)
@@ -295,7 +307,8 @@
/**
* Don't put lock into the LRU list, so that it is not canceled due
* to aging. Used by MGC locks, they are cancelled only at unmount or
- * by callback. */
+ * by callback.
+ */
#define LDLM_FL_NO_LRU 0x0001000000000000ULL /* bit 48 */
#define ldlm_is_no_lru(_l) LDLM_TEST_FLAG((_l), 1ULL << 48)
#define ldlm_set_no_lru(_l) LDLM_SET_FLAG((_l), 1ULL << 48)
@@ -304,7 +317,8 @@
/**
* Set for locks that failed and where the server has been notified.
*
- * Protected by lock and resource locks. */
+ * Protected by lock and resource locks.
+ */
#define LDLM_FL_FAIL_NOTIFIED 0x0002000000000000ULL /* bit 49 */
#define ldlm_is_fail_notified(_l) LDLM_TEST_FLAG((_l), 1ULL << 49)
#define ldlm_set_fail_notified(_l) LDLM_SET_FLAG((_l), 1ULL << 49)
@@ -315,7 +329,8 @@
* be destroyed when last reference to them is released. Set by
* ldlm_lock_destroy_internal().
*
- * Protected by lock and resource locks. */
+ * Protected by lock and resource locks.
+ */
#define LDLM_FL_DESTROYED 0x0004000000000000ULL /* bit 50 */
#define ldlm_is_destroyed(_l) LDLM_TEST_FLAG((_l), 1ULL << 50)
#define ldlm_set_destroyed(_l) LDLM_SET_FLAG((_l), 1ULL << 50)
@@ -333,7 +348,8 @@
* NB: compared with check_res_locked(), checking this bit is cheaper.
* Also, spin_is_locked() is deprecated for kernel code; one reason is
* because it works only for SMP so user needs to add extra macros like
- * LASSERT_SPIN_LOCKED for uniprocessor kernels. */
+ * LASSERT_SPIN_LOCKED for uniprocessor kernels.
+ */
#define LDLM_FL_RES_LOCKED 0x0010000000000000ULL /* bit 52 */
#define ldlm_is_res_locked(_l) LDLM_TEST_FLAG((_l), 1ULL << 52)
#define ldlm_set_res_locked(_l) LDLM_SET_FLAG((_l), 1ULL << 52)
@@ -343,7 +359,8 @@
* It's set once we call ldlm_add_waiting_lock_res_locked() to start the
* lock-timeout timer and it will never be reset.
*
- * Protected by lock and resource locks. */
+ * Protected by lock and resource locks.
+ */
#define LDLM_FL_WAITED 0x0020000000000000ULL /* bit 53 */
#define ldlm_is_waited(_l) LDLM_TEST_FLAG((_l), 1ULL << 53)
#define ldlm_set_waited(_l) LDLM_SET_FLAG((_l), 1ULL << 53)
@@ -365,10 +382,10 @@
#define LDLM_TEST_FLAG(_l, _b) (((_l)->l_flags & (_b)) != 0)
/** set a ldlm_lock flag bit */
-#define LDLM_SET_FLAG(_l, _b) (((_l)->l_flags |= (_b))
+#define LDLM_SET_FLAG(_l, _b) ((_l)->l_flags |= (_b))
/** clear a ldlm_lock flag bit */
-#define LDLM_CLEAR_FLAG(_l, _b) (((_l)->l_flags &= ~(_b))
+#define LDLM_CLEAR_FLAG(_l, _b) ((_l)->l_flags &= ~(_b))
/** Mask of flags inherited from parent lock when doing intents. */
#define LDLM_INHERIT_FLAGS LDLM_FL_INHERIT_MASK
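(The LDLM_SET_FLAG/LDLM_CLEAR_FLAG hunk above is a genuine fix, not just
style: the old bodies opened three parentheses but closed only two, so any
expansion failed to compile. A minimal demonstration with a stand-in
struct; the real struct ldlm_lock is not needed to see it.)

	struct demo_lock { unsigned long long l_flags; };

	#define DEMO_SET_FLAG(_l, _b)	((_l)->l_flags |= (_b))

	static void demo(struct demo_lock *l)
	{
		DEMO_SET_FLAG(l, 1ULL << 1);	/* balanced: compiles cleanly */
	}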
diff --git a/drivers/staging/lustre/lustre/include/lustre_export.h b/drivers/staging/lustre/lustre/include/lustre_export.h
index a030a98f..3014d27 100644
--- a/drivers/staging/lustre/lustre/include/lustre_export.h
+++ b/drivers/staging/lustre/lustre/include/lustre_export.h
@@ -50,62 +50,6 @@
#include "lustre/lustre_idl.h"
#include "lustre_dlm.h"
-struct mds_client_data;
-struct mdt_client_data;
-struct mds_idmap_table;
-struct mdt_idmap_table;
-
-/**
- * Target-specific export data
- */
-struct tg_export_data {
- /** Protects led_lcd below */
- struct mutex ted_lcd_lock;
- /** Per-client data for each export */
- struct lsd_client_data *ted_lcd;
- /** Offset of record in last_rcvd file */
- loff_t ted_lr_off;
- /** Client index in last_rcvd file */
- int ted_lr_idx;
-};
-
-/**
- * MDT-specific export data
- */
-struct mdt_export_data {
- struct tg_export_data med_ted;
- /** List of all files opened by client on this MDT */
- struct list_head med_open_head;
- spinlock_t med_open_lock; /* med_open_head, mfd_list */
- /** Bitmask of all ibit locks this MDT understands */
- __u64 med_ibits_known;
- struct mutex med_idmap_mutex;
- struct lustre_idmap_table *med_idmap;
-};
-
-struct ec_export_data { /* echo client */
- struct list_head eced_locks;
-};
-
-/* In-memory access to client data from OST struct */
-/** Filter (oss-side) specific import data */
-struct filter_export_data {
- struct tg_export_data fed_ted;
- spinlock_t fed_lock; /**< protects fed_mod_list */
- long fed_dirty; /* in bytes */
- long fed_grant; /* in bytes */
- struct list_head fed_mod_list; /* files being modified */
- int fed_mod_count;/* items in fed_writing list */
- long fed_pending; /* bytes just being written */
- __u32 fed_group;
- __u8 fed_pagesize; /* log2 of client page size */
-};
-
-struct mgs_export_data {
- struct list_head med_clients; /* mgc fs client via this exp */
- spinlock_t med_lock; /* protect med_clients */
-};
-
enum obd_option {
OBD_OPT_FORCE = 0x0001,
OBD_OPT_FAILOVER = 0x0002,
@@ -179,7 +123,8 @@
*/
spinlock_t exp_lock;
/** Compatibility flags for this export are embedded into
- * exp_connect_data */
+ * exp_connect_data
+ */
struct obd_connect_data exp_connect_data;
enum obd_option exp_flags;
unsigned long exp_failed:1,
@@ -200,22 +145,8 @@
/** blocking dlm lock list, protected by exp_bl_list_lock */
struct list_head exp_bl_list;
spinlock_t exp_bl_list_lock;
-
- /** Target specific data */
- union {
- struct tg_export_data eu_target_data;
- struct mdt_export_data eu_mdt_data;
- struct filter_export_data eu_filter_data;
- struct ec_export_data eu_ec_data;
- struct mgs_export_data eu_mgs_data;
- } u;
};
-#define exp_target_data u.eu_target_data
-#define exp_mdt_data u.eu_mdt_data
-#define exp_filter_data u.eu_filter_data
-#define exp_ec_data u.eu_ec_data
-
static inline __u64 *exp_connect_flags_ptr(struct obd_export *exp)
{
return &exp->exp_connect_data.ocd_connect_flags;
@@ -228,7 +159,6 @@
static inline int exp_max_brw_size(struct obd_export *exp)
{
- LASSERT(exp != NULL);
if (exp_connect_flags(exp) & OBD_CONNECT_BRW_SIZE)
return exp->exp_connect_data.ocd_brw_size;
@@ -242,19 +172,16 @@
static inline int exp_connect_cancelset(struct obd_export *exp)
{
- LASSERT(exp != NULL);
return !!(exp_connect_flags(exp) & OBD_CONNECT_CANCELSET);
}
static inline int exp_connect_lru_resize(struct obd_export *exp)
{
- LASSERT(exp != NULL);
return !!(exp_connect_flags(exp) & OBD_CONNECT_LRU_RESIZE);
}
static inline int exp_connect_rmtclient(struct obd_export *exp)
{
- LASSERT(exp != NULL);
return !!(exp_connect_flags(exp) & OBD_CONNECT_RMT_CLIENT);
}
@@ -268,14 +195,11 @@
static inline int exp_connect_vbr(struct obd_export *exp)
{
- LASSERT(exp != NULL);
- LASSERT(exp->exp_connection);
return !!(exp_connect_flags(exp) & OBD_CONNECT_VBR);
}
static inline int exp_connect_som(struct obd_export *exp)
{
- LASSERT(exp != NULL);
return !!(exp_connect_flags(exp) & OBD_CONNECT_SOM);
}
@@ -288,7 +212,6 @@
{
struct obd_connect_data *ocd;
- LASSERT(imp != NULL);
ocd = &imp->imp_connect_data;
return !!(ocd->ocd_connect_flags & OBD_CONNECT_LRU_RESIZE);
}
@@ -300,7 +223,6 @@
static inline bool exp_connect_lvb_type(struct obd_export *exp)
{
- LASSERT(exp != NULL);
if (exp_connect_flags(exp) & OBD_CONNECT_LVB_TYPE)
return true;
else
@@ -311,7 +233,6 @@
{
struct obd_connect_data *ocd;
- LASSERT(imp != NULL);
ocd = &imp->imp_connect_data;
if (ocd->ocd_connect_flags & OBD_CONNECT_LVB_TYPE)
return true;
@@ -331,7 +252,6 @@
{
struct obd_connect_data *ocd;
- LASSERT(imp != NULL);
ocd = &imp->imp_connect_data;
return ocd->ocd_connect_flags & OBD_CONNECT_DISP_STRIPE;
}
diff --git a/drivers/staging/lustre/lustre/include/lustre_fid.h b/drivers/staging/lustre/lustre/include/lustre_fid.h
index 9b1a9c6..ab91879 100644
--- a/drivers/staging/lustre/lustre/include/lustre_fid.h
+++ b/drivers/staging/lustre/lustre/include/lustre_fid.h
@@ -251,7 +251,8 @@
/* For new FS (>= 2.4), the root FID will be changed to
* [FID_SEQ_ROOT:1:0], for existing FS, (upgraded to 2.4),
- * the root FID will still be IGIF */
+ * the root FID will still be IGIF
+ */
static inline int fid_is_root(const struct lu_fid *fid)
{
return unlikely((fid_seq(fid) == FID_SEQ_ROOT &&
@@ -294,7 +295,8 @@
const __u64 seq = fid_seq(fid);
/* Here, we cannot distinguish whether the normal FID is for OST
- * object or not. It is caller's duty to check more if needed. */
+ * object or not. It is caller's duty to check more if needed.
+ */
return (!fid_is_last_id(fid) &&
(fid_seq_is_norm(seq) || fid_seq_is_igif(seq))) ||
fid_is_root(fid) || fid_is_dot_lustre(fid);
@@ -516,7 +518,8 @@
struct ldlm_res_id *name)
{
/* Note: it is just a trick here to save some effort, probably the
- * correct way would be turn them into the FID and compare */
+	 * correct way would be to turn them into the FID and compare
+ */
if (fid_seq_is_mdt0(ostid_seq(oi))) {
return name->name[LUSTRE_RES_ID_SEQ_OFF] == ostid_id(oi) &&
name->name[LUSTRE_RES_ID_VER_OID_OFF] == ostid_seq(oi);
@@ -589,12 +592,14 @@
static inline __u32 fid_hash(const struct lu_fid *f, int bits)
{
/* all objects with same id and different versions will belong to same
- * collisions list. */
+ * collisions list.
+ */
return hash_long(fid_flatten(f), bits);
}
/**
- * map fid to 32 bit value for ino on 32bit systems. */
+ * map fid to a 32-bit value for ino on 32-bit systems.
+ */
static inline __u32 fid_flatten32(const struct lu_fid *fid)
{
__u32 ino;
@@ -611,7 +616,8 @@
* that inodes generated at about the same time have a reduced chance
	 * of collisions. This will give a period of 2^10 = 1024 unique clients
* (from SEQ) and up to min(LUSTRE_SEQ_MAX_WIDTH, 2^20) = 128k objects
- * (from OID), or up to 128M inodes without collisions for new files. */
+ * (from OID), or up to 128M inodes without collisions for new files.
+ */
ino = ((seq & 0x000fffffULL) << 12) + ((seq >> 8) & 0xfffff000) +
(seq >> (64 - (40-8)) & 0xffffff00) +
(fid_oid(fid) & 0xff000fff) + ((fid_oid(fid) & 0x00fff000) << 8);
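(A hypothetical check built on the comment above fid_hash(): FIDs that
differ only in f_ver flatten to the same value, so they hash to the same
collision list. It assumes the fid_hash() and struct lu_fid definitions
from this header; the fid values are made up.)

	struct lu_fid a = { .f_seq = 0x200000401ULL, .f_oid = 7, .f_ver = 0 };
	struct lu_fid b = { .f_seq = 0x200000401ULL, .f_oid = 7, .f_ver = 3 };

	LASSERT(fid_hash(&a, 10) == fid_hash(&b, 10));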
diff --git a/drivers/staging/lustre/lustre/include/lustre_fld.h b/drivers/staging/lustre/lustre/include/lustre_fld.h
index 5511626..4cf2b0e 100644
--- a/drivers/staging/lustre/lustre/include/lustre_fld.h
+++ b/drivers/staging/lustre/lustre/include/lustre_fld.h
@@ -71,50 +71,41 @@
struct lu_server_fld {
/**
* super sequence controller export, needed to forward fld
- * lookup request. */
+ * lookup request.
+ */
struct obd_export *lsf_control_exp;
- /**
- * Client FLD cache. */
+ /** Client FLD cache. */
struct fld_cache *lsf_cache;
- /**
- * Protect index modifications */
+ /** Protect index modifications */
struct mutex lsf_lock;
- /**
- * Fld service name in form "fld-srv-lustre-MDTXXX" */
+ /** Fld service name in form "fld-srv-lustre-MDTXXX" */
char lsf_name[LUSTRE_MDT_MAXNAMELEN];
};
struct lu_client_fld {
- /**
- * Client side debugfs entry. */
+ /** Client side debugfs entry. */
struct dentry *lcf_debugfs_entry;
- /**
- * List of exports client FLD knows about. */
+ /** List of exports client FLD knows about. */
struct list_head lcf_targets;
- /**
- * Current hash to be used to chose an export. */
+	/** Current hash to be used to choose an export. */
struct lu_fld_hash *lcf_hash;
- /**
- * Exports count. */
+ /** Exports count. */
int lcf_count;
- /**
- * Lock protecting exports list and fld_hash. */
+ /** Lock protecting exports list and fld_hash. */
spinlock_t lcf_lock;
- /**
- * Client FLD cache. */
+ /** Client FLD cache. */
struct fld_cache *lcf_cache;
- /**
- * Client fld debugfs entry name. */
+ /** Client fld debugfs entry name. */
char lcf_name[LUSTRE_MDT_MAXNAMELEN];
int lcf_flags;
diff --git a/drivers/staging/lustre/lustre/include/lustre_handles.h b/drivers/staging/lustre/lustre/include/lustre_handles.h
index f39780a..27f169d 100644
--- a/drivers/staging/lustre/lustre/include/lustre_handles.h
+++ b/drivers/staging/lustre/lustre/include/lustre_handles.h
@@ -65,7 +65,8 @@
*
* Now you're able to assign the results of cookie2handle directly to an
* ldlm_lock. If it's not at the top, you'll want to use container_of()
- * to compute the start of the structure based on the handle field. */
+ * to compute the start of the structure based on the handle field.
+ */
struct portals_handle {
struct list_head h_link;
__u64 h_cookie;
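(The comment above describes the embed-at-top idiom; here is a small sketch
of the container_of() fallback it mentions, using a hypothetical containing
struct.)

	struct demo_obj {
		int			do_magic;
		struct portals_handle	do_handle;	/* not at the top */
	};

	static struct demo_obj *demo_obj_from_handle(struct portals_handle *h)
	{
		return container_of(h, struct demo_obj, do_handle);
	}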
diff --git a/drivers/staging/lustre/lustre/include/lustre_import.h b/drivers/staging/lustre/lustre/include/lustre_import.h
index 4e4230e..dac2d84 100644
--- a/drivers/staging/lustre/lustre/include/lustre_import.h
+++ b/drivers/staging/lustre/lustre/include/lustre_import.h
@@ -292,7 +292,8 @@
/* need IR MNE swab */
imp_need_mne_swab:1,
/* import must be reconnected instead of
- * chose new connection */
+				  * choosing a new connection
+				  */
imp_force_reconnect:1,
/* import has tried to connect with server */
imp_connect_tried:1;
diff --git a/drivers/staging/lustre/lustre/include/lustre_lib.h b/drivers/staging/lustre/lustre/include/lustre_lib.h
index cfccf7c..f2223d5 100644
--- a/drivers/staging/lustre/lustre/include/lustre_lib.h
+++ b/drivers/staging/lustre/lustre/include/lustre_lib.h
@@ -387,7 +387,8 @@
*/
/* Until such time as we get_info the per-stripe maximum from the OST,
- * we define this to be 2T - 4k, which is the ext3 maxbytes. */
+ * we define this to be 2T - 4k, which is the ext3 maxbytes.
+ */
#define LUSTRE_STRIPE_MAXBYTES 0x1fffffff000ULL
/* Special values for remove LOV EA from disk */
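(The "2T - 4k" comment above matches the constant; a one-line compile-time
check.)

	/* 2 TiB = 2ULL << 40 = 0x20000000000; minus 4 KiB gives 0x1fffffff000 */
	BUILD_BUG_ON(LUSTRE_STRIPE_MAXBYTES != (2ULL << 40) - 4096);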
@@ -540,7 +541,7 @@
l_add_wait(&wq, &__wait); \
\
/* Block all signals (just the non-fatal ones if no timeout). */ \
- if (info->lwi_on_signal != NULL && (__timeout == 0 || __allow_intr)) \
+ if (info->lwi_on_signal && (__timeout == 0 || __allow_intr)) \
__blocked = cfs_block_sigsinv(LUSTRE_FATAL_SIGS); \
else \
__blocked = cfs_block_sigsinv(0); \
@@ -562,13 +563,13 @@
__timeout = cfs_time_sub(__timeout, \
cfs_time_sub(interval, remaining));\
if (__timeout == 0) { \
- if (info->lwi_on_timeout == NULL || \
+ if (!info->lwi_on_timeout || \
info->lwi_on_timeout(info->lwi_cb_data)) { \
ret = -ETIMEDOUT; \
break; \
} \
/* Take signals after the timeout expires. */ \
- if (info->lwi_on_signal != NULL) \
+ if (info->lwi_on_signal) \
(void)cfs_block_sigsinv(LUSTRE_FATAL_SIGS);\
} \
} \
@@ -578,7 +579,7 @@
if (condition) \
break; \
if (cfs_signal_pending()) { \
- if (info->lwi_on_signal != NULL && \
+ if (info->lwi_on_signal && \
(__timeout == 0 || __allow_intr)) { \
if (info->lwi_on_signal != LWI_ON_SIGNAL_NOOP) \
info->lwi_on_signal(info->lwi_cb_data);\
diff --git a/drivers/staging/lustre/lustre/include/lustre_log.h b/drivers/staging/lustre/lustre/include/lustre_log.h
index e4fc8b5..49618e1 100644
--- a/drivers/staging/lustre/lustre/include/lustre_log.h
+++ b/drivers/staging/lustre/lustre/include/lustre_log.h
@@ -241,7 +241,8 @@
struct obd_llog_group *loc_olg; /* group containing that ctxt */
struct obd_export *loc_exp; /* parent "disk" export (e.g. MDS) */
struct obd_import *loc_imp; /* to use in RPC's: can be backward
- pointing import */
+ * pointing import
+ */
struct llog_operations *loc_logops;
struct llog_handle *loc_handle;
struct mutex loc_mutex; /* protect loc_imp */
@@ -255,7 +256,7 @@
static inline int llog_handle2ops(struct llog_handle *loghandle,
struct llog_operations **lop)
{
- if (loghandle == NULL || loghandle->lgh_logops == NULL)
+ if (!loghandle || !loghandle->lgh_logops)
return -EINVAL;
*lop = loghandle->lgh_logops;
@@ -272,7 +273,7 @@
static inline void llog_ctxt_put(struct llog_ctxt *ctxt)
{
- if (ctxt == NULL)
+ if (!ctxt)
return;
LASSERT_ATOMIC_GT_LT(&ctxt->loc_refcount, 0, LI_POISON);
CDEBUG(D_INFO, "PUTting ctxt %p : new refcount %d\n", ctxt,
@@ -294,7 +295,7 @@
LASSERT(index >= 0 && index < LLOG_MAX_CTXTS);
spin_lock(&olg->olg_lock);
- if (olg->olg_ctxts[index] != NULL) {
+ if (olg->olg_ctxts[index]) {
spin_unlock(&olg->olg_lock);
return -EEXIST;
}
@@ -311,7 +312,7 @@
LASSERT(index >= 0 && index < LLOG_MAX_CTXTS);
spin_lock(&olg->olg_lock);
- if (olg->olg_ctxts[index] == NULL)
+ if (!olg->olg_ctxts[index])
ctxt = NULL;
else
ctxt = llog_ctxt_get(olg->olg_ctxts[index]);
@@ -335,7 +336,7 @@
static inline int llog_group_ctxt_null(struct obd_llog_group *olg, int index)
{
- return (olg->olg_ctxts[index] == NULL);
+ return (!olg->olg_ctxts[index]);
}
static inline int llog_ctxt_null(struct obd_device *obd, int index)
@@ -354,7 +355,7 @@
rc = llog_handle2ops(loghandle, &lop);
if (rc)
return rc;
- if (lop->lop_next_block == NULL)
+ if (!lop->lop_next_block)
return -EOPNOTSUPP;
rc = lop->lop_next_block(env, loghandle, cur_idx, next_idx,
diff --git a/drivers/staging/lustre/lustre/include/lustre_mdc.h b/drivers/staging/lustre/lustre/include/lustre_mdc.h
index 3da3733..df94f9f 100644
--- a/drivers/staging/lustre/lustre/include/lustre_mdc.h
+++ b/drivers/staging/lustre/lustre/include/lustre_mdc.h
@@ -81,8 +81,8 @@
static inline void mdc_get_rpc_lock(struct mdc_rpc_lock *lck,
struct lookup_intent *it)
{
- if (it != NULL && (it->it_op == IT_GETATTR || it->it_op == IT_LOOKUP ||
- it->it_op == IT_LAYOUT))
+ if (it && (it->it_op == IT_GETATTR || it->it_op == IT_LOOKUP ||
+ it->it_op == IT_LAYOUT))
return;
/* This would normally block until the existing request finishes.
@@ -90,7 +90,8 @@
* done, then set rpcl_it to MDC_FAKE_RPCL_IT. Once that is set
* it will only be cleared when all fake requests are finished.
* Only when all fake requests are finished can normal requests
- * be sent, to ensure they are recoverable again. */
+ * be sent, to ensure they are recoverable again.
+ */
again:
mutex_lock(&lck->rpcl_mutex);
@@ -105,22 +106,23 @@
* just turned off but there are still requests in progress.
* Wait until they finish. It doesn't need to be efficient
* in this extremely rare case, just have low overhead in
- * the common case when it isn't true. */
+ * the common case when it isn't true.
+ */
while (unlikely(lck->rpcl_it == MDC_FAKE_RPCL_IT)) {
mutex_unlock(&lck->rpcl_mutex);
schedule_timeout(cfs_time_seconds(1) / 4);
goto again;
}
- LASSERT(lck->rpcl_it == NULL);
+ LASSERT(!lck->rpcl_it);
lck->rpcl_it = it;
}
static inline void mdc_put_rpc_lock(struct mdc_rpc_lock *lck,
struct lookup_intent *it)
{
- if (it != NULL && (it->it_op == IT_GETATTR || it->it_op == IT_LOOKUP ||
- it->it_op == IT_LAYOUT))
+ if (it && (it->it_op == IT_GETATTR || it->it_op == IT_LOOKUP ||
+ it->it_op == IT_LAYOUT))
return;
if (lck->rpcl_it == MDC_FAKE_RPCL_IT) { /* OBD_FAIL_MDC_RPCS_SEM */
diff --git a/drivers/staging/lustre/lustre/include/lustre_net.h b/drivers/staging/lustre/lustre/include/lustre_net.h
index d834ddd..ac08b25 100644
--- a/drivers/staging/lustre/lustre/include/lustre_net.h
+++ b/drivers/staging/lustre/lustre/include/lustre_net.h
@@ -76,7 +76,8 @@
* In order for the client and server to properly negotiate the maximum
* possible transfer size, PTLRPC_BULK_OPS_COUNT must be a power-of-two
* value. The client is free to limit the actual RPC size for any bulk
- * transfer via cl_max_pages_per_rpc to some non-power-of-two value. */
+ * transfer via cl_max_pages_per_rpc to some non-power-of-two value.
+ */
#define PTLRPC_BULK_OPS_BITS 2
#define PTLRPC_BULK_OPS_COUNT (1U << PTLRPC_BULK_OPS_BITS)
/**
@@ -85,7 +86,8 @@
* protocol limitation on the maximum RPC size that can be used by any
* RPC sent to that server in the future. Instead, the server should
* use the negotiated per-client ocd_brw_size to determine the bulk
- * RPC count. */
+ * RPC count.
+ */
#define PTLRPC_BULK_OPS_MASK (~((__u64)PTLRPC_BULK_OPS_COUNT - 1))
/**
@@ -419,16 +421,18 @@
/** A spinlock to protect the reply state flags */
spinlock_t rs_lock;
/** Reply state flags */
- unsigned long rs_difficult:1; /* ACK/commit stuff */
+ unsigned long rs_difficult:1; /* ACK/commit stuff */
unsigned long rs_no_ack:1; /* no ACK, even for
- difficult requests */
+ * difficult requests
+ */
unsigned long rs_scheduled:1; /* being handled? */
unsigned long rs_scheduled_ever:1;/* any schedule attempts? */
unsigned long rs_handled:1; /* been handled yet? */
unsigned long rs_on_net:1; /* reply_out_callback pending? */
unsigned long rs_prealloc:1; /* rs from prealloc list */
unsigned long rs_committed:1;/* the transaction was committed
- * and the rs was dispatched */
+ * and the rs was dispatched
+ */
/** Size of the state */
int rs_size;
/** opcode */
@@ -463,7 +467,7 @@
/** Handles of locks awaiting client reply ACK */
struct lustre_handle rs_locks[RS_MAX_LOCKS];
/** Lock modes of locks in \a rs_locks */
- ldlm_mode_t rs_modes[RS_MAX_LOCKS];
+ enum ldlm_mode rs_modes[RS_MAX_LOCKS];
};
struct ptlrpc_thread;
@@ -1181,7 +1185,7 @@
* purpose of this object is to hold references to the request's resources
* for the lifetime of the request, and to hold properties that policies use
* use for determining the request's scheduling priority.
- * */
+ */
struct ptlrpc_nrs_request {
/**
* The request's resource hierarchy.
@@ -1321,15 +1325,17 @@
/* do not resend request on -EINPROGRESS */
rq_no_retry_einprogress:1,
/* allow the req to be sent if the import is in recovery
- * status */
+ * status
+ */
rq_allow_replay:1;
unsigned int rq_nr_resend;
enum rq_phase rq_phase; /* one of RQ_PHASE_* */
enum rq_phase rq_next_phase; /* one of RQ_PHASE_* to be used next */
- atomic_t rq_refcount;/* client-side refcount for SENT race,
- server-side refcount for multiple replies */
+ atomic_t rq_refcount; /* client-side refcount for SENT race,
+ * server-side refcount for multiple replies
+ */
/** Portal to which this request would be sent */
short rq_request_portal; /* XXX FIXME bug 249 */
@@ -1363,7 +1369,8 @@
/**
* security and encryption data
- * @{ */
+ * @{
+ */
struct ptlrpc_cli_ctx *rq_cli_ctx; /**< client's half ctx */
struct ptlrpc_svc_ctx *rq_svc_ctx; /**< server's half ctx */
struct list_head rq_ctx_chain; /**< link to waited ctx */
@@ -1477,7 +1484,8 @@
/** when request must finish. volatile
* so that servers' early reply updates to the deadline aren't
- * kept in per-cpu cache */
+ * kept in per-cpu cache
+ */
volatile time64_t rq_deadline;
/** when req reply unlink must finish. */
time64_t rq_reply_deadline;
@@ -1518,7 +1526,7 @@
static inline int ptlrpc_req_interpret(const struct lu_env *env,
struct ptlrpc_request *req, int rc)
{
- if (req->rq_interpret_reply != NULL) {
+ if (req->rq_interpret_reply) {
req->rq_status = req->rq_interpret_reply(env, req,
&req->rq_async_args,
rc);
@@ -1678,7 +1686,8 @@
/**
* This is the debug print function you need to use to print request structure
* content into lustre debug log.
- * for most callers (level is a constant) this is resolved at compile time */
+ * for most callers (level is a constant) this is resolved at compile time
+ */
#define DEBUG_REQ(level, req, fmt, args...) \
do { \
if ((level) & (D_ERROR | D_WARNING)) { \
@@ -1947,7 +1956,7 @@
* or general metadata service for MDS.
*/
struct ptlrpc_service {
- /** serialize /proc operations */
+ /** serialize sysfs operations */
spinlock_t srv_lock;
/** most often accessed fields */
/** chain thru all services */
@@ -2101,7 +2110,8 @@
/** NRS head for regular requests */
struct ptlrpc_nrs scp_nrs_reg;
/** NRS head for HP requests; this is only valid for services that can
- * handle HP requests */
+ * handle HP requests
+ */
struct ptlrpc_nrs *scp_nrs_hp;
/** AT stuff */
@@ -2141,8 +2151,8 @@
#define ptlrpc_service_for_each_part(part, i, svc) \
for (i = 0; \
i < (svc)->srv_ncpts && \
- (svc)->srv_parts != NULL && \
- ((part) = (svc)->srv_parts[i]) != NULL; i++)
+ (svc)->srv_parts && \
+ ((part) = (svc)->srv_parts[i]); i++)
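A usage sketch for the iterator above (process_one_part is hypothetical):
	struct ptlrpc_service_part *part;
	int i;

	ptlrpc_service_for_each_part(part, i, svc)
		process_one_part(part);	/* per-partition work */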
/**
* Declaration of ptlrpcd control structure
@@ -2259,7 +2269,6 @@
static inline bool nrs_policy_compat_one(const struct ptlrpc_service *svc,
const struct ptlrpc_nrs_pol_desc *desc)
{
- LASSERT(desc->pd_compat_svc_name != NULL);
return strcmp(svc->srv_name, desc->pd_compat_svc_name) == 0;
}
@@ -2303,7 +2312,6 @@
struct ptlrpc_bulk_desc *desc;
int rc;
- LASSERT(req != NULL);
desc = req->rq_bulk;
if (OBD_FAIL_CHECK(OBD_FAIL_PTLRPC_LONG_BULK_UNLINK) &&
@@ -2462,7 +2470,8 @@
/* "soft" limit for total threads number */
unsigned int tc_nthrs_max;
	/* user-specified number of threads; it will be validated against
- * other members of this structure. */
+ * other members of this structure.
+ */
unsigned int tc_nthrs_user;
/* set NUMA node affinity for service threads */
unsigned int tc_cpu_affinity;
@@ -2726,7 +2735,7 @@
static inline void
ptlrpc_client_wake_req(struct ptlrpc_request *req)
{
- if (req->rq_set == NULL)
+ if (!req->rq_set)
wake_up(&req->rq_reply_waitq);
else
wake_up(&req->rq_set->set_waitq);
@@ -2750,7 +2759,7 @@
/* Should only be called once per req */
static inline void ptlrpc_req_drop_rs(struct ptlrpc_request *req)
{
- if (req->rq_reply_state == NULL)
+ if (!req->rq_reply_state)
return; /* shouldn't occur */
ptlrpc_rs_decref(req->rq_reply_state);
req->rq_reply_state = NULL;
@@ -2807,7 +2816,6 @@
static inline struct ptlrpc_service *
ptlrpc_req2svc(struct ptlrpc_request *req)
{
- LASSERT(req->rq_rqbd != NULL);
return req->rq_rqbd->rqbd_svcpt->scp_service;
}
diff --git a/drivers/staging/lustre/lustre/include/lustre_req_layout.h b/drivers/staging/lustre/lustre/include/lustre_req_layout.h
index 46a662f..b5128e4 100644
--- a/drivers/staging/lustre/lustre/include/lustre_req_layout.h
+++ b/drivers/staging/lustre/lustre/include/lustre_req_layout.h
@@ -130,7 +130,6 @@
extern struct req_format RQF_OBD_PING;
extern struct req_format RQF_OBD_SET_INFO;
extern struct req_format RQF_SEC_CTX;
-extern struct req_format RQF_OBD_IDX_READ;
/* MGS req_format */
extern struct req_format RQF_MGS_TARGET_REG;
extern struct req_format RQF_MGS_SET_INFO;
@@ -146,7 +145,6 @@
extern struct req_format RQF_MDS_SYNC;
extern struct req_format RQF_MDS_GETXATTR;
extern struct req_format RQF_MDS_GETATTR;
-extern struct req_format RQF_UPDATE_OBJ;
/*
* This is format of direct (non-intent) MDS_GETATTR_NAME request.
@@ -177,7 +175,6 @@
extern struct req_format RQF_MDS_QUOTACHECK;
extern struct req_format RQF_MDS_QUOTACTL;
extern struct req_format RQF_QC_CALLBACK;
-extern struct req_format RQF_QUOTA_DQACQ;
extern struct req_format RQF_MDS_SWAP_LAYOUTS;
/* MDS hsm formats */
extern struct req_format RQF_MDS_HSM_STATE_GET;
@@ -220,7 +217,6 @@
extern struct req_format RQF_LDLM_INTENT_CREATE;
extern struct req_format RQF_LDLM_INTENT_UNLINK;
extern struct req_format RQF_LDLM_INTENT_GETXATTR;
-extern struct req_format RQF_LDLM_INTENT_QUOTA;
extern struct req_format RQF_LDLM_CANCEL;
extern struct req_format RQF_LDLM_CALLBACK;
extern struct req_format RQF_LDLM_CP_CALLBACK;
@@ -252,7 +248,6 @@
extern struct req_msg_field RMF_GETINFO_VAL;
extern struct req_msg_field RMF_GETINFO_VALLEN;
extern struct req_msg_field RMF_GETINFO_KEY;
-extern struct req_msg_field RMF_IDX_INFO;
extern struct req_msg_field RMF_CLOSE_DATA;
/*
@@ -277,7 +272,6 @@
extern struct req_msg_field RMF_CAPA2;
extern struct req_msg_field RMF_OBD_QUOTACHECK;
extern struct req_msg_field RMF_OBD_QUOTACTL;
-extern struct req_msg_field RMF_QUOTA_BODY;
extern struct req_msg_field RMF_STRING;
extern struct req_msg_field RMF_SWAP_LAYOUTS;
extern struct req_msg_field RMF_MDS_HSM_PROGRESS;
@@ -322,9 +316,6 @@
/* generic uint32 */
extern struct req_msg_field RMF_U32;
-/* OBJ update format */
-extern struct req_msg_field RMF_UPDATE;
-extern struct req_msg_field RMF_UPDATE_REPLY;
/** @} req_layout */
#endif /* _LUSTRE_REQ_LAYOUT_H__ */
diff --git a/drivers/staging/lustre/lustre/include/obd.h b/drivers/staging/lustre/lustre/include/obd.h
index f00d9a2..e63366b 100644
--- a/drivers/staging/lustre/lustre/include/obd.h
+++ b/drivers/staging/lustre/lustre/include/obd.h
@@ -90,7 +90,8 @@
pid_t lsm_lock_owner; /* debugging */
/* maximum possible file size, might change as OSTs status changes,
- * e.g. disconnected, deactivated */
+ * e.g. disconnected, deactivated
+ */
__u64 lsm_maxbytes;
struct {
/* Public members. */
@@ -123,7 +124,7 @@
static inline bool lsm_has_objects(struct lov_stripe_md *lsm)
{
- if (lsm == NULL)
+ if (!lsm)
return false;
if (lsm_is_released(lsm))
return false;
@@ -159,7 +160,8 @@
/* An update callback which is called to update some data on upper
* level. E.g. it is used for update lsm->lsm_oinfo at every received
* request in osc level for enqueue requests. It is also possible to
- * update some caller data from LOV layer if needed. */
+ * update some caller data from LOV layer if needed.
+ */
obd_enqueue_update_f oi_cb_up;
};
@@ -216,7 +218,6 @@
};
#define OSC_MAX_RIF_DEFAULT 8
-#define MDS_OSC_MAX_RIF_DEFAULT 50
#define OSC_MAX_RIF_MAX 256
#define OSC_MAX_DIRTY_DEFAULT (OSC_MAX_RIF_DEFAULT * 4)
#define OSC_MAX_DIRTY_MB_MAX 2048 /* arbitrary, but < MAX_LONG bytes */
@@ -241,7 +242,8 @@
struct obd_import *cl_import; /* ptlrpc connection state */
int cl_conn_count;
/* max_mds_easize is purely a performance thing so we don't have to
- * call obd_size_diskmd() all the time. */
+ * call obd_size_diskmd() all the time.
+ */
int cl_default_mds_easize;
int cl_max_mds_easize;
int cl_default_mds_cookiesize;
@@ -261,7 +263,8 @@
/* since we allocate grant by blocks, we don't know how many grant will
* be used to add a page into cache. As a solution, we reserve maximum
* grant before trying to dirty a page and unreserve the rest.
- * See osc_{reserve|unreserve}_grant for details. */
+ * See osc_{reserve|unreserve}_grant for details.
+ */
long cl_reserved_grant;
struct list_head cl_cache_waiters; /* waiting for cache/grant */
unsigned long cl_next_shrink_grant; /* jiffies */
@@ -269,14 +272,16 @@
int cl_grant_shrink_interval; /* seconds */
/* A chunk is an optimal size used by osc_extent to determine
- * the extent size. A chunk is max(PAGE_CACHE_SIZE, OST block size) */
+ * the extent size. A chunk is max(PAGE_CACHE_SIZE, OST block size)
+ */
int cl_chunkbits;
int cl_chunk;
int cl_extent_tax; /* extent overhead, by bytes */
/* keep track of objects that have lois that contain pages which
* have been queued for async brw. this lock also protects the
- * lists of osc_client_pages that hang off of the loi */
+ * lists of osc_client_pages that hang off of the loi
+ */
/*
* ->cl_loi_list_lock protects consistency of
* ->cl_loi_{ready,read,write}_list. ->ap_make_ready() and
@@ -295,14 +300,14 @@
 * NB by Jinshan: though the field names are still _loi_, actually
* osc_object{}s are in the list.
*/
- client_obd_lock_t cl_loi_list_lock;
+ struct client_obd_lock cl_loi_list_lock;
struct list_head cl_loi_ready_list;
struct list_head cl_loi_hp_ready_list;
struct list_head cl_loi_write_list;
struct list_head cl_loi_read_list;
int cl_r_in_flight;
int cl_w_in_flight;
- /* just a sum of the loi/lop pending numbers to be exported by /proc */
+ /* just a sum of the loi/lop pending numbers to be exported by sysfs */
atomic_t cl_pending_w_pages;
atomic_t cl_pending_r_pages;
__u32 cl_max_pages_per_rpc;
@@ -322,7 +327,7 @@
atomic_t cl_lru_shrinkers;
atomic_t cl_lru_in_list;
struct list_head cl_lru_list; /* lru page list */
- client_obd_lock_t cl_lru_list_lock; /* page list protector */
+ struct client_obd_lock cl_lru_list_lock; /* page list protector */
/* number of in flight destroy rpcs is limited to max_rpcs_in_flight */
atomic_t cl_destroy_in_flight;
@@ -340,7 +345,7 @@
/* supported checksum types that are worked out at connect time */
__u32 cl_supp_cksum_types;
/* checksum algorithm to be used */
- cksum_type_t cl_cksum_type;
+ enum cksum_type cl_cksum_type;
/* also protected by the poorly named _loi_list_lock lock above */
struct osc_async_rc cl_ar;
@@ -380,8 +385,7 @@
/* Generic subset of OSTs */
struct ost_pool {
- __u32 *op_array; /* array of index of
- lov_obd->lov_tgts */
+ __u32 *op_array; /* array of index of lov_obd->lov_tgts */
unsigned int op_count; /* number of OSTs in the array */
unsigned int op_size; /* allocated size of lp_array */
struct rw_semaphore op_rw_sem; /* to protect ost_pool use */
@@ -414,14 +418,16 @@
struct lov_qos_rr lq_rr; /* round robin qos data */
unsigned long lq_dirty:1, /* recalc qos data */
				lq_same_space:1,/* the OSTs all have approx.
- the same space avail */
+ * the same space avail
+ */
lq_reset:1, /* zero current penalties */
lq_statfs_in_progress:1; /* statfs op in
progress */
/* qos statfs data */
struct lov_statfs_data *lq_statfs_data;
- wait_queue_head_t lq_statfs_waitq; /* waitqueue to notify statfs
- * requests completion */
+ wait_queue_head_t lq_statfs_waitq; /* waitqueue to notify statfs
+ * requests completion
+ */
};
struct lov_tgt_desc {
@@ -449,16 +455,16 @@
struct lov_qos_rr pool_rr; /* round robin qos */
struct hlist_node pool_hash; /* access by poolname */
struct list_head pool_list; /* serial access */
- struct dentry *pool_debugfs_entry; /* file in /proc */
+ struct dentry *pool_debugfs_entry; /* file in debugfs */
struct obd_device *pool_lobd; /* obd of the lov/lod to which
- * this pool belongs */
+ * this pool belongs
+ */
};
struct lov_obd {
struct lov_desc desc;
struct lov_tgt_desc **lov_tgts; /* sparse array */
- struct ost_pool lov_packed; /* all OSTs in a packed
- array */
+ struct ost_pool lov_packed; /* all OSTs in a packed array */
struct mutex lov_lock;
struct obd_connect_data lov_ocd;
atomic_t lov_refcount;
@@ -595,34 +601,6 @@
struct obd_uuid *oti_ost_uuid;
};
-static inline void oti_init(struct obd_trans_info *oti,
- struct ptlrpc_request *req)
-{
- if (oti == NULL)
- return;
- memset(oti, 0, sizeof(*oti));
-
- if (req == NULL)
- return;
-
- oti->oti_xid = req->rq_xid;
- /** VBR: take versions from request */
- if (req->rq_reqmsg != NULL &&
- lustre_msg_get_flags(req->rq_reqmsg) & MSG_REPLAY) {
- __u64 *pre_version = lustre_msg_get_versions(req->rq_reqmsg);
-
- oti->oti_pre_version = pre_version ? pre_version[0] : 0;
- oti->oti_transno = lustre_msg_get_transno(req->rq_reqmsg);
- }
-
- /** called from mds_create_objects */
- if (req->rq_repmsg != NULL)
- oti->oti_transno = lustre_msg_get_transno(req->rq_repmsg);
- oti->oti_thread = req->rq_svc_thread;
- if (req->rq_reqmsg != NULL)
- oti->oti_conn_cnt = lustre_msg_get_conn_cnt(req->rq_reqmsg);
-}
-
static inline void oti_alloc_cookies(struct obd_trans_info *oti,
int num_cookies)
{
@@ -727,21 +705,23 @@
unsigned long obd_attached:1, /* finished attach */
obd_set_up:1, /* finished setup */
obd_version_recov:1, /* obd uses version checking */
- obd_replayable:1, /* recovery is enabled; inform clients */
- obd_no_transno:1, /* no committed-transno notification */
+ obd_replayable:1,/* recovery is enabled; inform clients */
+ obd_no_transno:1, /* no committed-transno notification */
obd_no_recov:1, /* fail instead of retry messages */
obd_stopping:1, /* started cleanup */
obd_starting:1, /* started setup */
obd_force:1, /* cleanup with > 0 obd refcount */
- obd_fail:1, /* cleanup with failover */
- obd_async_recov:1, /* allow asynchronous orphan cleanup */
+ obd_fail:1, /* cleanup with failover */
+ obd_async_recov:1, /* allow asynchronous orphan cleanup */
obd_no_conn:1, /* deny new connections */
obd_inactive:1, /* device active/inactive
- * (for /proc/status only!!) */
+ * (for sysfs status only!!)
+ */
obd_no_ir:1, /* no imperative recovery. */
obd_process_conf:1; /* device is processing mgs config */
/* use separate field as it is set in interrupt to don't mess with
- * protection of other bits using _bh lock */
+ * protection of other bits using _bh lock
+ */
unsigned long obd_recovery_expired:1;
/* uuid-export hash body */
struct cfs_hash *obd_uuid_hash;
@@ -934,7 +914,8 @@
__u32 op_npages;
/* used to transfer info between the stacks of MD client
- * see enum op_cli_flags */
+ * see enum op_cli_flags
+ */
__u32 op_cli_flags;
/* File object data version for HSM release, on client */
@@ -986,7 +967,8 @@
/* connect to the target device with given connection
* data. @ocd->ocd_connect_flags is modified to reflect flags actually
* granted by the target, which are guaranteed to be a subset of flags
- * asked for. If @ocd == NULL, use default parameters. */
+ * asked for. If @ocd == NULL, use default parameters.
+ */
int (*connect)(const struct lu_env *env,
struct obd_export **exp, struct obd_device *src,
struct obd_uuid *cluuid, struct obd_connect_data *ocd,
@@ -1082,7 +1064,8 @@
/*
* NOTE: If adding ops, add another LPROCFS_OBD_OP_INIT() line
* to lprocfs_alloc_obd_stats() in obdclass/lprocfs_status.c.
- * Also, add a wrapper function in include/linux/obd_class.h. */
+ * Also, add a wrapper function in include/linux/obd_class.h.
+ */
};
enum {
@@ -1188,14 +1171,14 @@
struct obd_client_handle *);
int (*set_lock_data)(struct obd_export *, __u64 *, void *, __u64 *);
- ldlm_mode_t (*lock_match)(struct obd_export *, __u64,
- const struct lu_fid *, ldlm_type_t,
- ldlm_policy_data_t *, ldlm_mode_t,
- struct lustre_handle *);
+ enum ldlm_mode (*lock_match)(struct obd_export *, __u64,
+ const struct lu_fid *, enum ldlm_type,
+ ldlm_policy_data_t *, enum ldlm_mode,
+ struct lustre_handle *);
int (*cancel_unused)(struct obd_export *, const struct lu_fid *,
- ldlm_policy_data_t *, ldlm_mode_t,
- ldlm_cancel_flags_t flags, void *opaque);
+ ldlm_policy_data_t *, enum ldlm_mode,
+ enum ldlm_cancel_flags flags, void *opaque);
int (*get_remote_perm)(struct obd_export *, const struct lu_fid *,
__u32, struct ptlrpc_request **);
@@ -1252,7 +1235,7 @@
struct md_open_data *mod;
mod = kzalloc(sizeof(*mod), GFP_NOFS);
- if (mod == NULL)
+ if (!mod)
return NULL;
atomic_set(&mod->mod_refcount, 1);
return mod;
@@ -1299,7 +1282,7 @@
return false;
/* caller does not care of idx */
- if (idx == NULL)
+ if (!idx)
return true;
/* volatile file, the MDT can be set from name */
@@ -1326,7 +1309,8 @@
return true;
bad_format:
/* bad format of mdt idx, we cannot return an error
- * to caller so we use hash algo */
+ * to caller so we use hash algo
+ */
CERROR("Bad volatile file name format: %s\n",
name + LUSTRE_VOLATILE_HDR_LEN);
return false;
@@ -1334,7 +1318,6 @@
static inline int cli_brw_size(struct obd_device *obd)
{
- LASSERT(obd != NULL);
return obd->u.cli.cl_max_pages_per_rpc << PAGE_CACHE_SHIFT;
}
diff --git a/drivers/staging/lustre/lustre/include/obd_cksum.h b/drivers/staging/lustre/lustre/include/obd_cksum.h
index 01db604..637fa22 100644
--- a/drivers/staging/lustre/lustre/include/obd_cksum.h
+++ b/drivers/staging/lustre/lustre/include/obd_cksum.h
@@ -37,7 +37,7 @@
#include "../../include/linux/libcfs/libcfs.h"
#include "lustre/lustre_idl.h"
-static inline unsigned char cksum_obd2cfs(cksum_type_t cksum_type)
+static inline unsigned char cksum_obd2cfs(enum cksum_type cksum_type)
{
switch (cksum_type) {
case OBD_CKSUM_CRC32:
@@ -63,8 +63,9 @@
 * In case of unsupported types/flags we fall back to ADLER
* because that is supported by all clients since 1.8
*
- * In case multiple algorithms are supported the best one is used. */
-static inline u32 cksum_type_pack(cksum_type_t cksum_type)
+ * In case multiple algorithms are supported the best one is used.
+ */
+static inline u32 cksum_type_pack(enum cksum_type cksum_type)
{
unsigned int performance = 0, tmp;
u32 flag = OBD_FL_CKSUM_ADLER;
@@ -98,7 +99,7 @@
return flag;
}
-static inline cksum_type_t cksum_type_unpack(u32 o_flags)
+static inline enum cksum_type cksum_type_unpack(u32 o_flags)
{
switch (o_flags & OBD_FL_CKSUM_ALL) {
case OBD_FL_CKSUM_CRC32C:
@@ -116,9 +117,9 @@
 * 1.8 supported ADLER, so it is the base and does not depend on hw
* Client uses all available local algos
*/
-static inline cksum_type_t cksum_types_supported_client(void)
+static inline enum cksum_type cksum_types_supported_client(void)
{
- cksum_type_t ret = OBD_CKSUM_ADLER;
+ enum cksum_type ret = OBD_CKSUM_ADLER;
CDEBUG(D_INFO, "Crypto hash speed: crc %d, crc32c %d, adler %d\n",
cfs_crypto_hash_speed(cksum_obd2cfs(OBD_CKSUM_CRC32)),
@@ -139,14 +140,16 @@
* Currently, calling cksum_type_pack() with a mask will return the fastest
* checksum type due to its benchmarking at libcfs module load.
* Caution is advised, however, since what is fastest on a single client may
- * not be the fastest or most efficient algorithm on the server. */
-static inline cksum_type_t cksum_type_select(cksum_type_t cksum_types)
+ * not be the fastest or most efficient algorithm on the server.
+ */
+static inline enum cksum_type cksum_type_select(enum cksum_type cksum_types)
{
return cksum_type_unpack(cksum_type_pack(cksum_types));
}
/* Checksum algorithm names. Must be defined in the same order as the
- * OBD_CKSUM_* flags. */
+ * OBD_CKSUM_* flags.
+ */
#define DECLARE_CKSUM_NAME char *cksum_name[] = {"crc32", "adler", "crc32c"}
#endif /* __OBD_H */
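A caller-side sketch of the helpers above, assuming the OBD_CKSUM_* flag
values from lustre_idl.h:
	enum cksum_type mask = OBD_CKSUM_CRC32 | OBD_CKSUM_ADLER |
			       OBD_CKSUM_CRC32C;
	/* pack chooses the fastest supported algorithm, unpack recovers it */
	enum cksum_type best = cksum_type_select(mask);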
diff --git a/drivers/staging/lustre/lustre/include/obd_class.h b/drivers/staging/lustre/lustre/include/obd_class.h
index 4f631e6..8515218 100644
--- a/drivers/staging/lustre/lustre/include/obd_class.h
+++ b/drivers/staging/lustre/lustre/include/obd_class.h
@@ -45,18 +45,22 @@
#include "lprocfs_status.h"
#define OBD_STATFS_NODELAY 0x0001 /* requests should be sent without delay
- * and resends for avoid deadlocks */
+ * and resends to avoid deadlocks
+ */
#define OBD_STATFS_FROM_CACHE 0x0002 /* the statfs callback should not update
- * obd_osfs_age */
+ * obd_osfs_age
+ */
#define OBD_STATFS_PTLRPCD 0x0004 /* requests will be sent via ptlrpcd
* instead of a specific set. This
* means that we cannot rely on the set
* interpret routine to be called.
* lov_statfs_fini() must thus be called
- * by the request interpret routine */
+ * by the request interpret routine
+ */
#define OBD_STATFS_FOR_MDT0 0x0008 /* The statfs is only for retrieving
- * information from MDT0. */
-#define OBD_FL_PUNCH 0x00000001 /* To indicate it is punch operation */
+ * information from MDT0.
+ */
+#define OBD_FL_PUNCH 0x00000001 /* To indicate it is punch operation */
/* OBD Device Declarations */
extern struct obd_device *obd_devs[MAX_OBD_DEVICES];
@@ -160,8 +164,9 @@
struct mutex cld_lock;
int cld_type;
unsigned int cld_stopping:1, /* we were told to stop
- * watching */
- cld_lostlock:1; /* lock not requeued */
+ * watching
+ */
+ cld_lostlock:1; /* lock not requeued */
char cld_logname[0];
};
@@ -275,7 +280,8 @@
#define CTXTP(ctxt, op) (ctxt)->loc_logops->lop_##op
/* Ensure obd_setup: used for cleanup which must be called
- while obd is stopping */
+ * while obd is stopping
+ */
static inline int obd_check_dev(struct obd_device *obd)
{
if (!obd) {
@@ -306,7 +312,7 @@
/ sizeof(((struct obd_ops *)(0))->iocontrol))
#define OBD_COUNTER_INCREMENT(obdx, op) \
- if ((obdx)->obd_stats != NULL) { \
+ if ((obdx)->obd_stats) { \
unsigned int coffset; \
coffset = (unsigned int)((obdx)->obd_cntr_base) + \
OBD_COUNTER_OFFSET(op); \
@@ -315,7 +321,7 @@
}
#define EXP_COUNTER_INCREMENT(export, op) \
- if ((export)->exp_obd->obd_stats != NULL) { \
+ if ((export)->exp_obd->obd_stats) { \
unsigned int coffset; \
coffset = (unsigned int)((export)->exp_obd->obd_cntr_base) + \
OBD_COUNTER_OFFSET(op); \
@@ -329,7 +335,7 @@
/ sizeof(((struct md_ops *)(0))->getstatus))
#define MD_COUNTER_INCREMENT(obdx, op) \
- if ((obd)->md_stats != NULL) { \
+ if ((obd)->md_stats) { \
unsigned int coffset; \
coffset = (unsigned int)((obdx)->md_cntr_base) + \
MD_COUNTER_OFFSET(op); \
@@ -338,24 +344,24 @@
}
#define EXP_MD_COUNTER_INCREMENT(export, op) \
- if ((export)->exp_obd->obd_stats != NULL) { \
+ if ((export)->exp_obd->obd_stats) { \
unsigned int coffset; \
coffset = (unsigned int)((export)->exp_obd->md_cntr_base) + \
MD_COUNTER_OFFSET(op); \
LASSERT(coffset < (export)->exp_obd->md_stats->ls_num); \
lprocfs_counter_incr((export)->exp_obd->md_stats, coffset); \
- if ((export)->exp_md_stats != NULL) \
+ if ((export)->exp_md_stats) \
lprocfs_counter_incr( \
(export)->exp_md_stats, coffset); \
}
#define EXP_CHECK_MD_OP(exp, op) \
do { \
- if ((exp) == NULL) { \
+ if (!(exp)) { \
CERROR("obd_" #op ": NULL export\n"); \
return -ENODEV; \
} \
- if ((exp)->exp_obd == NULL || !OBT((exp)->exp_obd)) { \
+ if (!(exp)->exp_obd || !OBT((exp)->exp_obd)) { \
CERROR("obd_" #op ": cleaned up obd\n"); \
return -EOPNOTSUPP; \
} \
@@ -379,11 +385,11 @@
#define EXP_CHECK_DT_OP(exp, op) \
do { \
- if ((exp) == NULL) { \
+ if (!(exp)) { \
CERROR("obd_" #op ": NULL export\n"); \
return -ENODEV; \
} \
- if ((exp)->exp_obd == NULL || !OBT((exp)->exp_obd)) { \
+ if (!(exp)->exp_obd || !OBT((exp)->exp_obd)) { \
CERROR("obd_" #op ": cleaned up obd\n"); \
return -EOPNOTSUPP; \
} \
@@ -467,7 +473,7 @@
DECLARE_LU_VARS(ldt, d);
ldt = obd->obd_type->typ_lu;
- if (ldt != NULL) {
+ if (ldt) {
struct lu_context session_ctx;
struct lu_env env;
@@ -509,7 +515,7 @@
return rc;
ldt = obd->obd_type->typ_lu;
d = obd->obd_lu_dev;
- if (ldt != NULL && d != NULL) {
+ if (ldt && d) {
if (cleanup_stage == OBD_CLEANUP_EXPORTS) {
struct lu_env env;
@@ -538,7 +544,7 @@
ldt = obd->obd_type->typ_lu;
d = obd->obd_lu_dev;
- if (ldt != NULL && d != NULL) {
+ if (ldt && d) {
struct lu_env env;
rc = lu_env_init(&env, ldt->ldt_ctx_tags);
@@ -558,7 +564,8 @@
static inline void obd_cleanup_client_import(struct obd_device *obd)
{
/* If we set up but never connected, the
- client import will not have been cleaned. */
+ * client import will not have been cleaned.
+ */
down_write(&obd->u.cli.cl_sem);
if (obd->u.cli.cl_import) {
struct obd_import *imp;
@@ -586,7 +593,7 @@
obd->obd_process_conf = 1;
ldt = obd->obd_type->typ_lu;
d = obd->obd_lu_dev;
- if (ldt != NULL && d != NULL) {
+ if (ldt && d) {
struct lu_env env;
rc = lu_env_init(&env, ldt->ldt_ctx_tags);
@@ -674,7 +681,7 @@
struct lov_stripe_md **mem_tgt)
{
LASSERT(mem_tgt);
- LASSERT(*mem_tgt == NULL);
+ LASSERT(!*mem_tgt);
return obd_unpackmd(exp, mem_tgt, NULL, 0);
}
@@ -767,7 +774,7 @@
EXP_COUNTER_INCREMENT(exp, setattr_async);
set = ptlrpc_prep_set();
- if (set == NULL)
+ if (!set)
return -ENOMEM;
rc = OBP(exp->exp_obd, setattr_async)(exp, oinfo, oti, set);
@@ -778,7 +785,8 @@
}
/* This adds all the requests into @set if @set != NULL, otherwise
- all requests are sent asynchronously without waiting for response. */
+ * all requests are sent asynchronously without waiting for response.
+ */
static inline int obd_setattr_async(struct obd_export *exp,
struct obd_info *oinfo,
struct obd_trans_info *oti,
@@ -848,7 +856,8 @@
{
int rc;
__u64 ocf = data ? data->ocd_connect_flags : 0; /* for post-condition
- * check */
+ * check
+ */
rc = obd_check_dev_active(obd);
if (rc)
@@ -858,7 +867,7 @@
rc = OBP(obd, connect)(env, exp, obd, cluuid, data, localdata);
/* check that only subset is granted */
- LASSERT(ergo(data != NULL, (data->ocd_connect_flags & ocf) ==
+ LASSERT(ergo(data, (data->ocd_connect_flags & ocf) ==
data->ocd_connect_flags));
return rc;
}
@@ -871,8 +880,7 @@
void *localdata)
{
int rc;
- __u64 ocf = d ? d->ocd_connect_flags : 0; /* for post-condition
- * check */
+ __u64 ocf = d ? d->ocd_connect_flags : 0; /* for post-condition check */
rc = obd_check_dev_active(obd);
if (rc)
@@ -882,8 +890,7 @@
rc = OBP(obd, reconnect)(env, exp, obd, cluuid, d, localdata);
/* check that only subset is granted */
- LASSERT(ergo(d != NULL,
- (d->ocd_connect_flags & ocf) == d->ocd_connect_flags));
+ LASSERT(ergo(d, (d->ocd_connect_flags & ocf) == d->ocd_connect_flags));
return rc;
}
@@ -998,7 +1005,7 @@
{
int rc = 0;
- if ((exp)->exp_obd != NULL && OBT((exp)->exp_obd) &&
+ if ((exp)->exp_obd && OBT((exp)->exp_obd) &&
OBP((exp)->exp_obd, init_export))
rc = OBP(exp->exp_obd, init_export)(exp);
return rc;
@@ -1006,7 +1013,7 @@
static inline int obd_destroy_export(struct obd_export *exp)
{
- if ((exp)->exp_obd != NULL && OBT((exp)->exp_obd) &&
+ if ((exp)->exp_obd && OBT((exp)->exp_obd) &&
OBP((exp)->exp_obd, destroy_export))
OBP(exp->exp_obd, destroy_export)(exp);
return 0;
@@ -1014,7 +1021,8 @@
/* @max_age is the oldest time in jiffies that we accept using a cached data.
* If the cache is older than @max_age we will get a new value from the
- * target. Use a value of "cfs_time_current() + HZ" to guarantee freshness. */
+ * target. Use a value of "cfs_time_current() + HZ" to guarantee freshness.
+ */
static inline int obd_statfs_async(struct obd_export *exp,
struct obd_info *oinfo,
__u64 max_age,
@@ -1023,7 +1031,7 @@
int rc = 0;
struct obd_device *obd;
- if (exp == NULL || exp->exp_obd == NULL)
+ if (!exp || !exp->exp_obd)
return -EINVAL;
obd = exp->exp_obd;
@@ -1059,7 +1067,7 @@
int rc = 0;
set = ptlrpc_prep_set();
- if (set == NULL)
+ if (!set)
return -ENOMEM;
oinfo.oi_osfs = osfs;
@@ -1073,7 +1081,8 @@
/* @max_age is the oldest time in jiffies that we accept using a cached data.
* If the cache is older than @max_age we will get a new value from the
- * target. Use a value of "cfs_time_current() + HZ" to guarantee freshness. */
+ * target. Use a value of "cfs_time_current() + HZ" to guarantee freshness.
+ */
static inline int obd_statfs(const struct lu_env *env, struct obd_export *exp,
struct obd_statfs *osfs, __u64 max_age,
__u32 flags)
@@ -1081,7 +1090,7 @@
int rc = 0;
struct obd_device *obd = exp->exp_obd;
- if (obd == NULL)
+ if (!obd)
return -EINVAL;
OBD_CHECK_DT_OP(obd, statfs, -EOPNOTSUPP);
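A hypothetical caller honouring the @max_age rule above:
	/* accept statfs data cached within the last HZ jiffies ... */
	rc = obd_statfs(env, exp, &osfs, cfs_time_current() - HZ, 0);
	/* ... or pass a future time to force a refresh, per the comment */
	rc = obd_statfs(env, exp, &osfs, cfs_time_current() + HZ, 0);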
@@ -1205,9 +1214,10 @@
return rc;
/* the check for async_recov is a complete hack - I'm hereby
- overloading the meaning to also mean "this was called from
- mds_postsetup". I know that my mds is able to handle notifies
- by this point, and it needs to get them to execute mds_postrecov. */
+ * overloading the meaning to also mean "this was called from
+ * mds_postsetup". I know that my mds is able to handle notifies
+ * by this point, and it needs to get them to execute mds_postrecov.
+ */
if (!obd->obd_set_up && !obd->obd_async_recov) {
CDEBUG(D_HA, "obd %s not set up\n", obd->obd_name);
return -EINVAL;
@@ -1241,7 +1251,7 @@
* Also, call non-obd listener, if any
*/
onu = &observer->obd_upcall;
- if (onu->onu_upcall != NULL)
+ if (onu->onu_upcall)
rc2 = onu->onu_upcall(observer, observed, ev,
onu->onu_owner, NULL);
else
@@ -1287,7 +1297,7 @@
int rc;
/* don't use EXP_CHECK_DT_OP, because NULL method is normal here */
- if (obd == NULL || !OBT(obd)) {
+ if (!obd || !OBT(obd)) {
CERROR("cleaned up obd\n");
return -EOPNOTSUPP;
}
@@ -1318,57 +1328,6 @@
return 0;
}
-#if 0
-static inline int obd_register_page_removal_cb(struct obd_export *exp,
- obd_page_removal_cb_t cb,
- obd_pin_extent_cb pin_cb)
-{
- int rc;
-
- OBD_CHECK_DT_OP(exp->exp_obd, register_page_removal_cb, 0);
- OBD_COUNTER_INCREMENT(exp->exp_obd, register_page_removal_cb);
-
- rc = OBP(exp->exp_obd, register_page_removal_cb)(exp, cb, pin_cb);
- return rc;
-}
-
-static inline int obd_unregister_page_removal_cb(struct obd_export *exp,
- obd_page_removal_cb_t cb)
-{
- int rc;
-
- OBD_CHECK_DT_OP(exp->exp_obd, unregister_page_removal_cb, 0);
- OBD_COUNTER_INCREMENT(exp->exp_obd, unregister_page_removal_cb);
-
- rc = OBP(exp->exp_obd, unregister_page_removal_cb)(exp, cb);
- return rc;
-}
-
-static inline int obd_register_lock_cancel_cb(struct obd_export *exp,
- obd_lock_cancel_cb cb)
-{
- int rc;
-
- OBD_CHECK_DT_OP(exp->exp_obd, register_lock_cancel_cb, 0);
- OBD_COUNTER_INCREMENT(exp->exp_obd, register_lock_cancel_cb);
-
- rc = OBP(exp->exp_obd, register_lock_cancel_cb)(exp, cb);
- return rc;
-}
-
-static inline int obd_unregister_lock_cancel_cb(struct obd_export *exp,
- obd_lock_cancel_cb cb)
-{
- int rc;
-
- OBD_CHECK_DT_OP(exp->exp_obd, unregister_lock_cancel_cb, 0);
- OBD_COUNTER_INCREMENT(exp->exp_obd, unregister_lock_cancel_cb);
-
- rc = OBP(exp->exp_obd, unregister_lock_cancel_cb)(exp, cb);
- return rc;
-}
-#endif
-
/* metadata helpers */
static inline int md_getstatus(struct obd_export *exp, struct lu_fid *fid)
{
@@ -1657,8 +1616,8 @@
static inline int md_cancel_unused(struct obd_export *exp,
const struct lu_fid *fid,
ldlm_policy_data_t *policy,
- ldlm_mode_t mode,
- ldlm_cancel_flags_t flags,
+ enum ldlm_mode mode,
+ enum ldlm_cancel_flags flags,
void *opaque)
{
int rc;
@@ -1671,12 +1630,12 @@
return rc;
}
-static inline ldlm_mode_t md_lock_match(struct obd_export *exp, __u64 flags,
- const struct lu_fid *fid,
- ldlm_type_t type,
- ldlm_policy_data_t *policy,
- ldlm_mode_t mode,
- struct lustre_handle *lockh)
+static inline enum ldlm_mode md_lock_match(struct obd_export *exp, __u64 flags,
+ const struct lu_fid *fid,
+ enum ldlm_type type,
+ ldlm_policy_data_t *policy,
+ enum ldlm_mode mode,
+ struct lustre_handle *lockh)
{
EXP_CHECK_MD_OP(exp, lock_match);
EXP_MD_COUNTER_INCREMENT(exp, lock_match);
@@ -1759,7 +1718,8 @@
/* I'm as embarrassed about this as you are.
*
* <shaver> // XXX do not look into _superhack with remaining eye
- * <shaver> // XXX if this were any uglier, I'd get my own show on MTV */
+ * <shaver> // XXX if this were any uglier, I'd get my own show on MTV
+ */
extern int (*ptlrpc_put_connection_superhack)(struct ptlrpc_connection *c);
/* obd_mount.c */
diff --git a/drivers/staging/lustre/lustre/include/obd_support.h b/drivers/staging/lustre/lustre/include/obd_support.h
index d031437..225262fa 100644
--- a/drivers/staging/lustre/lustre/include/obd_support.h
+++ b/drivers/staging/lustre/lustre/include/obd_support.h
@@ -47,7 +47,8 @@
extern unsigned int obd_dump_on_timeout;
extern unsigned int obd_dump_on_eviction;
/* obd_timeout should only be used for recovery, not for
- networking / disk / timings affected by load (use Adaptive Timeouts) */
+ * networking / disk / timings affected by load (use Adaptive Timeouts)
+ */
extern unsigned int obd_timeout; /* seconds */
extern unsigned int obd_timeout_set;
extern unsigned int at_min;
@@ -104,18 +105,21 @@
* failover targets the client only pings one server at a time, and pings
* can be lost on a loaded network. Since eviction has serious consequences,
* and there's no urgent need to evict a client just because it's idle, we
- * should be very conservative here. */
+ * should be very conservative here.
+ */
#define PING_EVICT_TIMEOUT (PING_INTERVAL * 6)
#define DISK_TIMEOUT 50 /* Beyond this we warn about disk speed */
#define CONNECTION_SWITCH_MIN 5U /* Connection switching rate limiter */
- /* Max connect interval for nonresponsive servers; ~50s to avoid building up
- connect requests in the LND queues, but within obd_timeout so we don't
- miss the recovery window */
+/* Max connect interval for nonresponsive servers; ~50s to avoid building up
+ * connect requests in the LND queues, but within obd_timeout so we don't
+ * miss the recovery window
+ */
#define CONNECTION_SWITCH_MAX min(50U, max(CONNECTION_SWITCH_MIN, obd_timeout))
#define CONNECTION_SWITCH_INC 5 /* Connection timeout backoff */
/* In general this should be low to have quick detection of a system
- running on a backup server. (If it's too low, import_select_connection
- will increase the timeout anyhow.) */
+ * running on a backup server. (If it's too low, import_select_connection
+ * will increase the timeout anyhow.)
+ */
#define INITIAL_CONNECT_TIMEOUT max(CONNECTION_SWITCH_MIN, obd_timeout/20)
/* The max delay between connects is SWITCH_MAX + SWITCH_INC + INITIAL */
#define RECONNECT_DELAY_MAX (CONNECTION_SWITCH_MAX + CONNECTION_SWITCH_INC + \
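Worked out with the common obd_timeout default of 100 seconds (an assumption;
the default lives elsewhere in the tree):
	/*
	 * CONNECTION_SWITCH_MAX   = min(50, max(5, 100))  = 50
	 * INITIAL_CONNECT_TIMEOUT = max(5, 100 / 20)      = 5
	 * RECONNECT_DELAY_MAX     = 50 + 5 + 5            = 60 seconds
	 */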
@@ -507,7 +511,6 @@
do { \
struct portals_handle *__h = (handle); \
\
- LASSERT(handle != NULL); \
__h->h_cookie = (unsigned long)(ptr); \
__h->h_size = (size); \
call_rcu(&__h->h_rcu, class_handle_free_cb); \
diff --git a/drivers/staging/lustre/lustre/lclient/glimpse.c b/drivers/staging/lustre/lustre/lclient/glimpse.c
index 8533a1e5..c4e8a08 100644
--- a/drivers/staging/lustre/lustre/lclient/glimpse.c
+++ b/drivers/staging/lustre/lustre/lclient/glimpse.c
@@ -109,7 +109,8 @@
* if there were no conflicting locks. If there
* were conflicting locks, enqueuing or waiting
* fails with -ENAVAIL, but valid inode
- * attributes are returned anyway. */
+ * attributes are returned anyway.
+ */
*descr = whole_file;
descr->cld_obj = clob;
descr->cld_mode = CLM_PHANTOM;
diff --git a/drivers/staging/lustre/lustre/lclient/lcommon_cl.c b/drivers/staging/lustre/lustre/lclient/lcommon_cl.c
index 4dfeb4e..30651f2 100644
--- a/drivers/staging/lustre/lustre/lclient/lcommon_cl.c
+++ b/drivers/staging/lustre/lustre/lclient/lcommon_cl.c
@@ -117,7 +117,7 @@
struct ccc_thread_info *info;
info = kmem_cache_alloc(ccc_thread_kmem, GFP_NOFS | __GFP_ZERO);
- if (info == NULL)
+ if (!info)
info = ERR_PTR(-ENOMEM);
return info;
}
@@ -136,7 +136,7 @@
struct ccc_session *session;
session = kmem_cache_alloc(ccc_session_kmem, GFP_NOFS | __GFP_ZERO);
- if (session == NULL)
+ if (!session)
session = ERR_PTR(-ENOMEM);
return session;
}
@@ -173,7 +173,7 @@
vdv = lu2ccc_dev(d);
vdv->cdv_next = lu2cl_dev(next);
- LASSERT(d->ld_site != NULL && next->ld_type != NULL);
+ LASSERT(d->ld_site && next->ld_type);
next->ld_site = d->ld_site;
rc = next->ld_type->ldt_ops->ldto_device_init(
env, next, next->ld_type->ldt_name, NULL);
@@ -211,12 +211,12 @@
vdv->cdv_cl.cd_ops = clops;
site = kzalloc(sizeof(*site), GFP_NOFS);
- if (site != NULL) {
+ if (site) {
rc = cl_site_init(site, &vdv->cdv_cl);
if (rc == 0)
rc = lu_site_init_finish(&site->cs_lu);
else {
- LASSERT(lud->ld_site == NULL);
+ LASSERT(!lud->ld_site);
CERROR("Cannot init lu_site, rc %d.\n", rc);
kfree(site);
}
@@ -236,7 +236,7 @@
struct cl_site *site = lu2cl_site(d->ld_site);
struct lu_device *next = cl2lu_dev(vdv->cdv_next);
- if (d->ld_site != NULL) {
+ if (d->ld_site) {
cl_site_fini(site);
kfree(site);
}
@@ -252,7 +252,7 @@
int result;
vrq = kmem_cache_alloc(ccc_req_kmem, GFP_NOFS | __GFP_ZERO);
- if (vrq != NULL) {
+ if (vrq) {
cl_req_slice_add(req, &vrq->crq_cl, dev, &ccc_req_ops);
result = 0;
} else
@@ -304,7 +304,7 @@
void ccc_global_fini(struct lu_device_type *device_type)
{
- if (ccc_inode_fini_env != NULL) {
+ if (ccc_inode_fini_env) {
cl_env_put(ccc_inode_fini_env, &dummy_refcheck);
ccc_inode_fini_env = NULL;
}
@@ -328,7 +328,7 @@
struct lu_object *obj;
vob = kmem_cache_alloc(ccc_object_kmem, GFP_NOFS | __GFP_ZERO);
- if (vob != NULL) {
+ if (vob) {
struct cl_object_header *hdr;
obj = ccc2lu(vob);
@@ -365,7 +365,7 @@
under = &dev->cdv_next->cd_lu_dev;
below = under->ld_ops->ldo_object_alloc(env, obj->lo_header, under);
- if (below != NULL) {
+ if (below) {
const struct cl_object_conf *cconf;
cconf = lu2cl_conf(conf);
@@ -397,7 +397,7 @@
CLOBINVRNT(env, obj, ccc_object_invariant(obj));
clk = kmem_cache_alloc(ccc_lock_kmem, GFP_NOFS | __GFP_ZERO);
- if (clk != NULL) {
+ if (clk) {
cl_lock_slice_add(lock, &clk->clk_cl, obj, lkops);
result = 0;
} else
@@ -613,7 +613,8 @@
* stale i_size when doing appending writes and effectively
* cancel the result of the truncate. Getting the
* ll_inode_size_lock() after the enqueue maintains the DLM
- * -> ll_inode_size_lock() acquiring order. */
+ * -> ll_inode_size_lock() acquiring order.
+ */
if (lock->cll_descr.cld_start == 0 &&
lock->cll_descr.cld_end == CL_PAGE_EOF)
cl_merge_lvb(env, inode);
@@ -660,7 +661,7 @@
{
size_t size = io->u.ci_rw.crw_count;
- if (!cl_is_normalio(env, io) || cio->cui_iter == NULL)
+ if (!cl_is_normalio(env, io) || !cio->cui_iter)
return;
iov_iter_truncate(cio->cui_iter, size);
@@ -749,12 +750,13 @@
*/
ccc_object_size_unlock(obj);
result = cl_glimpse_lock(env, io, inode, obj, 0);
- if (result == 0 && exceed != NULL) {
+ if (result == 0 && exceed) {
 /* If the objective page index exceeds the end-of-file
 * page index, return directly. Do not expect the
 * kernel to check such a case correctly;
 * linux-2.6.18-128.1.1 missed doing that.
- * --bug 17336 */
+ * --bug 17336
+ */
loff_t size = cl_isize_read(inode);
loff_t cur_index = start >> PAGE_CACHE_SHIFT;
loff_t size_index = (size - 1) >>
@@ -884,7 +886,8 @@
if (attr->ia_valid & ATTR_FILE)
/* populate the file descriptor for ftruncate to honor
- * group lock - see LU-787 */
+ * group lock - see LU-787
+ */
cio->cui_fd = cl_iattr2fd(inode, attr);
result = cl_io_loop(env, io);
@@ -896,7 +899,8 @@
goto again;
/* HSM import case: file is released, cannot be restored
* no need to fail except if restore registration failed
- * with -ENODATA */
+ * with -ENODATA
+ */
if (result == -ENODATA && io->ci_restore_needed &&
io->ci_result != -ENODATA)
result = 0;
@@ -1022,11 +1026,12 @@
fid = &lli->lli_fid;
LASSERT(fid_is_sane(fid));
- if (lli->lli_clob == NULL) {
+ if (!lli->lli_clob) {
/* clob is slave of inode, empty lli_clob means for new inode,
* there is no clob in cache with the given fid, so it is
* unnecessary to perform lookup-alloc-lookup-insert, just
- * alloc and insert directly. */
+ * alloc and insert directly.
+ */
LASSERT(inode->i_state & I_NEW);
conf.coc_lu.loc_flags = LOC_F_NEW;
clob = cl_object_find(env, lu2cl_dev(site->ls_top_dev),
@@ -1098,7 +1103,7 @@
int refcheck;
int emergency;
- if (clob != NULL) {
+ if (clob) {
void *cookie;
cookie = cl_env_reenter();
@@ -1106,7 +1111,7 @@
emergency = IS_ERR(env);
if (emergency) {
mutex_lock(&ccc_inode_fini_guard);
- LASSERT(ccc_inode_fini_env != NULL);
+ LASSERT(ccc_inode_fini_env);
cl_env_implant(ccc_inode_fini_env, &refcheck);
env = ccc_inode_fini_env;
}
@@ -1151,7 +1156,8 @@
}
/**
- * build inode number from passed @fid */
+ * build inode number from passed @fid
+ */
__u64 cl_fid_build_ino(const struct lu_fid *fid, int api32)
{
if (BITS_PER_LONG == 32 || api32)
@@ -1162,7 +1168,8 @@
/**
* build inode generation from passed @fid. If our FID overflows the 32-bit
- * inode number then return a non-zero generation to distinguish them. */
+ * inode number then return a non-zero generation to distinguish them.
+ */
__u32 cl_fid_build_gen(const struct lu_fid *fid)
{
__u32 gen;
@@ -1183,7 +1190,8 @@
* have to wait for the refcount to become zero to destroy the older layout.
*
* Notice that the lsm returned by this function may not be valid unless called
- * inside layout lock - MDS_INODELOCK_LAYOUT. */
+ * inside layout lock - MDS_INODELOCK_LAYOUT.
+ */
struct lov_stripe_md *ccc_inode_lsm_get(struct inode *inode)
{
return lov_lsm_get(cl_i2info(inode)->lli_clob);
diff --git a/drivers/staging/lustre/lustre/lclient/lcommon_misc.c b/drivers/staging/lustre/lustre/lclient/lcommon_misc.c
index 8389a0e..d80bcedd 100644
--- a/drivers/staging/lustre/lustre/lclient/lcommon_misc.c
+++ b/drivers/staging/lustre/lustre/lclient/lcommon_misc.c
@@ -48,7 +48,8 @@
/* Initialize the default and maximum LOV EA and cookie sizes. This allows
* us to make MDS RPCs with large enough reply buffers to hold the
* maximum-sized (= maximum striped) EA and cookie without having to
- * calculate this (via a call into the LOV + OSCs) each time we make an RPC. */
+ * calculate this (via a call into the LOV + OSCs) each time we make an RPC.
+ */
int cl_init_ea_size(struct obd_export *md_exp, struct obd_export *dt_exp)
{
struct lov_stripe_md lsm = { .lsm_magic = LOV_MAGIC_V3 };
@@ -74,7 +75,8 @@
cookiesize = stripes * sizeof(struct llog_cookie);
/* default cookiesize is 0 because from 2.4 server doesn't send
- * llog cookies to client. */
+ * llog cookies to client.
+ */
CDEBUG(D_HA,
"updating def/max_easize: %d/%d def/max_cookiesize: 0/%d\n",
def_easize, easize, cookiesize);
diff --git a/drivers/staging/lustre/lustre/ldlm/interval_tree.c b/drivers/staging/lustre/lustre/ldlm/interval_tree.c
index a2ea8e5..3230606 100644
--- a/drivers/staging/lustre/lustre/ldlm/interval_tree.c
+++ b/drivers/staging/lustre/lustre/ldlm/interval_tree.c
@@ -49,13 +49,11 @@
static inline int node_is_left_child(struct interval_node *node)
{
- LASSERT(node->in_parent != NULL);
return node == node->in_parent->in_left;
}
static inline int node_is_right_child(struct interval_node *node)
{
- LASSERT(node->in_parent != NULL);
return node == node->in_parent->in_right;
}
@@ -135,7 +133,8 @@
/* The left rotation "pivots" around the link from node to node->right, and
* - node will be linked to node->right's left child, and
- * - node->right's left child will be linked to node's right child. */
+ * - node->right's left child will be linked to node's right child.
+ */
static void __rotate_left(struct interval_node *node,
struct interval_node **root)
{
@@ -164,7 +163,8 @@
/* The right rotation "pivots" around the link from node to node->left, and
* - node will be linked to node->left's right child, and
- * - node->left's right child will be linked to node's left child. */
+ * - node->left's right child will be linked to node's left child.
+ */
static void __rotate_right(struct interval_node *node,
struct interval_node **root)
{
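The two rotations sketched (N is the pivot; the link rotated around is N to
its right/left child):
	/*
	 *  __rotate_left(N):     N                 R
	 *                       / \               / \
	 *                      A   R      ==>    N   C
	 *                         / \           / \
	 *                        B   C         A   B
	 *
	 *  __rotate_right() is the mirror image, pivoting on node->in_left.
	 */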
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_extent.c b/drivers/staging/lustre/lustre/ldlm/ldlm_extent.c
index 9c70f31..222cb62 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_extent.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_extent.c
@@ -62,7 +62,8 @@
* is the "highest lock". This function returns the new KMS value.
* Caller must hold lr_lock already.
*
- * NB: A lock on [x,y] protects a KMS of up to y + 1 bytes! */
+ * NB: A lock on [x,y] protects a KMS of up to y + 1 bytes!
+ */
__u64 ldlm_extent_shift_kms(struct ldlm_lock *lock, __u64 old_kms)
{
struct ldlm_resource *res = lock->l_resource;
@@ -72,7 +73,8 @@
/* don't let another thread in ldlm_extent_shift_kms race in
* just after we finish and take our lock into account in its
- * calculation of the kms */
+ * calculation of the kms
+ */
lock->l_flags |= LDLM_FL_KMS_IGNORE;
list_for_each(tmp, &res->lr_granted) {
@@ -85,7 +87,8 @@
return old_kms;
/* This extent _has_ to be smaller than old_kms (checked above)
- * so kms can only ever be smaller or the same as old_kms. */
+ * so kms can only ever be smaller or the same as old_kms.
+ */
if (lck->l_policy_data.l_extent.end + 1 > kms)
kms = lck->l_policy_data.l_extent.end + 1;
}
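A worked instance of the NB above (not from the patch): if the cancelled lock
covered [0, 8191] and the largest surviving granted extent is [0, 4095], the
loop settles on the survivor's end + 1:
	/* surviving extent [0, 4095]  =>  kms = 4095 + 1 = 4096
	 * cancelled extent [0, 8191]  =>  KMS shrinks from 8192 to 4096 */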
@@ -113,7 +116,7 @@
LASSERT(lock->l_resource->lr_type == LDLM_EXTENT);
node = kmem_cache_alloc(ldlm_interval_slab, GFP_NOFS | __GFP_ZERO);
- if (node == NULL)
+ if (!node)
return NULL;
INIT_LIST_HEAD(&node->li_group);
@@ -134,7 +137,7 @@
{
struct ldlm_interval *n = l->l_tree_node;
- if (n == NULL)
+ if (!n)
return NULL;
LASSERT(!list_empty(&n->li_group));
@@ -144,7 +147,7 @@
return list_empty(&n->li_group) ? n : NULL;
}
-static inline int lock_mode_to_index(ldlm_mode_t mode)
+static inline int lock_mode_to_index(enum ldlm_mode mode)
{
int index;
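LDLM modes are power-of-two bits (LCK_EX == 1, LCK_PW == 2, LCK_PR == 4, ...),
so the index is simply the bit position — e.g.:
	int idx = lock_mode_to_index(LCK_PR);	/* 2, since LCK_PR == 1 << 2 */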
@@ -168,7 +171,7 @@
LASSERT(lock->l_granted_mode == lock->l_req_mode);
node = lock->l_tree_node;
- LASSERT(node != NULL);
+ LASSERT(node);
LASSERT(!interval_is_intree(&node->li_node));
idx = lock_mode_to_index(lock->l_granted_mode);
@@ -185,14 +188,14 @@
struct ldlm_interval *tmp;
tmp = ldlm_interval_detach(lock);
- LASSERT(tmp != NULL);
ldlm_interval_free(tmp);
ldlm_interval_attach(to_ldlm_interval(found), lock);
}
res->lr_itree[idx].lit_size++;
/* even though we use interval tree to manage the extent lock, we also
- * add the locks into grant list, for debug purpose, .. */
+ * add the locks into the grant list, for debug purposes.
+ */
ldlm_resource_add_lock(res, &res->lr_granted, lock);
}
@@ -211,7 +214,7 @@
LASSERT(lock->l_granted_mode == 1 << idx);
tree = &res->lr_itree[idx];
- LASSERT(tree->lit_root != NULL); /* assure the tree is not null */
+ LASSERT(tree->lit_root); /* assure the tree is not null */
tree->lit_size--;
node = ldlm_interval_detach(lock);
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index 4310154..b88b786 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -92,7 +92,7 @@
}
static inline void
-ldlm_flock_destroy(struct ldlm_lock *lock, ldlm_mode_t mode, __u64 flags)
+ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
{
LDLM_DEBUG(lock, "ldlm_flock_destroy(mode: %d, flags: 0x%llx)",
mode, flags);
@@ -107,7 +107,8 @@
lock->l_flags |= LDLM_FL_LOCAL_ONLY | LDLM_FL_CBPENDING;
/* when reaching here, it is under lock_res_and_lock(). Thus,
- need call the nolock version of ldlm_lock_decref_internal*/
+ * need to call the nolock version of ldlm_lock_decref_internal
+ */
ldlm_lock_decref_internal_nolock(lock, mode);
}
@@ -133,7 +134,7 @@
* would be collected and ASTs sent.
*/
static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
- int first_enq, ldlm_error_t *err,
+ int first_enq, enum ldlm_error *err,
struct list_head *work_list)
{
struct ldlm_resource *res = req->l_resource;
@@ -143,7 +144,7 @@
struct ldlm_lock *lock = NULL;
struct ldlm_lock *new = req;
struct ldlm_lock *new2 = NULL;
- ldlm_mode_t mode = req->l_req_mode;
+ enum ldlm_mode mode = req->l_req_mode;
int added = (mode == LCK_NL);
int overlaps = 0;
int splitted = 0;
@@ -159,13 +160,15 @@
*err = ELDLM_OK;
/* No blocking ASTs are sent to the clients for
- * Posix file & record locks */
+ * Posix file & record locks
+ */
req->l_blocking_ast = NULL;
reprocess:
if ((*flags == LDLM_FL_WAIT_NOREPROC) || (mode == LCK_NL)) {
/* This loop determines where this processes locks start
- * in the resource lr_granted list. */
+ * in the resource lr_granted list.
+ */
list_for_each(tmp, &res->lr_granted) {
lock = list_entry(tmp, struct ldlm_lock,
l_res_link);
@@ -180,7 +183,8 @@
lockmode_verify(mode);
/* This loop determines if there are existing locks
- * that conflict with the new lock request. */
+ * that conflict with the new lock request.
+ */
list_for_each(tmp, &res->lr_granted) {
lock = list_entry(tmp, struct ldlm_lock,
l_res_link);
@@ -238,8 +242,8 @@
}
/* Scan the locks owned by this process that overlap this request.
- * We may have to merge or split existing locks. */
-
+ * We may have to merge or split existing locks.
+ */
if (!ownlocks)
ownlocks = &res->lr_granted;
@@ -253,7 +257,8 @@
/* If the modes are the same then we need to process
* locks that overlap OR adjoin the new lock. The extra
* logic condition is necessary to deal with arithmetic
- * overflow and underflow. */
+ * overflow and underflow.
+ */
if ((new->l_policy_data.l_flock.start >
(lock->l_policy_data.l_flock.end + 1))
&& (lock->l_policy_data.l_flock.end !=
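Why the extra condition is needed (sketch, assuming OBD_OBJECT_EOF == ~0ULL
as defined in the tree): a whole-file lock has l_flock.end == OBD_OBJECT_EOF,
so "end + 1" wraps to zero and the comparison above would wrongly report the
locks as disjoint without the explicit end != OBD_OBJECT_EOF guard:
	__u64 end = OBD_OBJECT_EOF;	/* ~0ULL for a whole-file lock */
	/* start > end + 1 degenerates to start > 0 after the wrap */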
@@ -327,11 +332,13 @@
* with the request but this would complicate the reply
* processing since updates to req get reflected in the
* reply. The client side replays the lock request so
- * it must see the original lock data in the reply. */
+ * it must see the original lock data in the reply.
+ */
/* XXX - if ldlm_lock_new() can sleep we should
* release the lr_lock, allocate the new lock,
- * and restart processing this lock. */
+ * and restart processing this lock.
+ */
if (!new2) {
unlock_res_and_lock(req);
new2 = ldlm_lock_create(ns, &res->lr_name, LDLM_FLOCK,
@@ -361,7 +368,7 @@
lock->l_policy_data.l_flock.start =
new->l_policy_data.l_flock.end + 1;
new2->l_conn_export = lock->l_conn_export;
- if (lock->l_export != NULL) {
+ if (lock->l_export) {
new2->l_export = class_export_lock_get(lock->l_export,
new2);
if (new2->l_export->exp_lock_hash &&
@@ -381,7 +388,7 @@
}
/* if new2 is created but never used, destroy it*/
- if (splitted == 0 && new2 != NULL)
+ if (splitted == 0 && new2)
ldlm_lock_destroy_nolock(new2);
/* At this point we're granting the lock request. */
@@ -396,7 +403,8 @@
if (*flags != LDLM_FL_WAIT_NOREPROC) {
/* The only one possible case for client-side calls flock
* policy function is ldlm_flock_completion_ast inside which
- * carries LDLM_FL_WAIT_NOREPROC flag. */
+ * carries LDLM_FL_WAIT_NOREPROC flag.
+ */
CERROR("Illegal parameter for client-side-only module.\n");
LBUG();
}
@@ -404,7 +412,8 @@
/* In case we're reprocessing the requested lock we can't destroy
* it until after calling ldlm_add_ast_work_item() above so that laawi()
* can bump the reference count on \a req. Otherwise \a req
- * could be freed before the completion AST can be sent. */
+ * could be freed before the completion AST can be sent.
+ */
if (added)
ldlm_flock_destroy(req, mode, *flags);
@@ -449,7 +458,7 @@
struct obd_import *imp = NULL;
struct ldlm_flock_wait_data fwd;
struct l_wait_info lwi;
- ldlm_error_t err;
+ enum ldlm_error err;
int rc = 0;
CDEBUG(D_DLMTRACE, "flags: 0x%llx data: %p getlk: %p\n",
@@ -458,12 +467,12 @@
/* Import invalidation. We need to actually release the lock
* references being held, so that it can go away. No point in
* holding the lock even if app still believes it has it, since
- * server already dropped it anyway. Only for granted locks too. */
+ * server already dropped it anyway. Only for granted locks too.
+ */
if ((lock->l_flags & (LDLM_FL_FAILED|LDLM_FL_LOCAL_ONLY)) ==
(LDLM_FL_FAILED|LDLM_FL_LOCAL_ONLY)) {
if (lock->l_req_mode == lock->l_granted_mode &&
- lock->l_granted_mode != LCK_NL &&
- data == NULL)
+ lock->l_granted_mode != LCK_NL && !data)
ldlm_lock_decref_internal(lock, lock->l_req_mode);
/* Need to wake up the waiter if we were evicted */
@@ -475,7 +484,7 @@
if (!(flags & (LDLM_FL_BLOCK_WAIT | LDLM_FL_BLOCK_GRANTED |
LDLM_FL_BLOCK_CONV))) {
- if (data == NULL)
+ if (!data)
/* mds granted the lock in the reply */
goto granted;
/* CP AST RPC: lock get granted, wake it up */
@@ -488,10 +497,10 @@
obd = class_exp2obd(lock->l_conn_export);
/* if this is a local lock, there is no import */
- if (obd != NULL)
+ if (obd)
imp = obd->u.cli.cl_import;
- if (imp != NULL) {
+ if (imp) {
spin_lock(&imp->imp_lock);
fwd.fwd_generation = imp->imp_generation;
spin_unlock(&imp->imp_lock);
@@ -540,7 +549,8 @@
} else if (flags & LDLM_FL_TEST_LOCK) {
/* fcntl(F_GETLK) request */
/* The old mode was saved in getlk->fl_type so that if the mode
- * in the lock changes we can decref the appropriate refcount.*/
+ * in the lock changes we can decref the appropriate refcount.
+ */
ldlm_flock_destroy(lock, getlk->fl_type, LDLM_FL_WAIT_NOREPROC);
switch (lock->l_granted_mode) {
case LCK_PR:
@@ -559,7 +569,8 @@
__u64 noreproc = LDLM_FL_WAIT_NOREPROC;
/* We need to reprocess the lock to do merges or splits
- * with existing locks owned by this process. */
+ * with existing locks owned by this process.
+ */
ldlm_process_flock_lock(lock, &noreproc, 1, &err, NULL);
}
unlock_res_and_lock(lock);
@@ -576,7 +587,8 @@
lpolicy->l_flock.pid = wpolicy->l_flock.lfw_pid;
/* Compat code, old clients had no idea about owner field and
* relied solely on pid for ownership. Introduced in LU-104, 2.1,
- * April 2011 */
+ * April 2011
+ */
lpolicy->l_flock.owner = wpolicy->l_flock.lfw_pid;
}
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h b/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
index 849cc98..e21373e 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
@@ -96,14 +96,15 @@
LDLM_CANCEL_SHRINK = 1 << 2, /* Cancel locks from shrinker. */
LDLM_CANCEL_LRUR = 1 << 3, /* Cancel locks from lru resize. */
LDLM_CANCEL_NO_WAIT = 1 << 4 /* Cancel locks w/o blocking (neither
- * sending nor waiting for any rpcs) */
+ * sending nor waiting for any rpcs)
+ */
};
int ldlm_cancel_lru(struct ldlm_namespace *ns, int nr,
- ldlm_cancel_flags_t sync, int flags);
+ enum ldlm_cancel_flags sync, int flags);
int ldlm_cancel_lru_local(struct ldlm_namespace *ns,
struct list_head *cancels, int count, int max,
- ldlm_cancel_flags_t cancel_flags, int flags);
+ enum ldlm_cancel_flags cancel_flags, int flags);
extern int ldlm_enqueue_min;
/* ldlm_resource.c */
@@ -133,11 +134,11 @@
enum req_location loc, void *data, int size);
struct ldlm_lock *
ldlm_lock_create(struct ldlm_namespace *ns, const struct ldlm_res_id *,
- ldlm_type_t type, ldlm_mode_t,
+ enum ldlm_type type, enum ldlm_mode mode,
const struct ldlm_callback_suite *cbs,
void *data, __u32 lvb_len, enum lvb_type lvb_type);
-ldlm_error_t ldlm_lock_enqueue(struct ldlm_namespace *, struct ldlm_lock **,
- void *cookie, __u64 *flags);
+enum ldlm_error ldlm_lock_enqueue(struct ldlm_namespace *, struct ldlm_lock **,
+ void *cookie, __u64 *flags);
void ldlm_lock_addref_internal(struct ldlm_lock *, __u32 mode);
void ldlm_lock_addref_internal_nolock(struct ldlm_lock *, __u32 mode);
void ldlm_lock_decref_internal(struct ldlm_lock *, __u32 mode);
@@ -154,7 +155,7 @@
int ldlm_bl_to_thread_list(struct ldlm_namespace *ns,
struct ldlm_lock_desc *ld,
struct list_head *cancels, int count,
- ldlm_cancel_flags_t cancel_flags);
+ enum ldlm_cancel_flags cancel_flags);
void ldlm_handle_bl_callback(struct ldlm_namespace *ns,
struct ldlm_lock_desc *ld, struct ldlm_lock *lock);
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c
index 3c8d441..b586d5a 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lib.c
@@ -219,7 +219,8 @@
void client_destroy_import(struct obd_import *imp)
{
/* Drop security policy instance after all RPCs have finished/aborted
- * to let all busy contexts be released. */
+ * to let all busy contexts be released.
+ */
class_import_get(imp);
class_destroy_import(imp);
sptlrpc_import_sec_put(imp);
@@ -227,29 +228,6 @@
}
EXPORT_SYMBOL(client_destroy_import);
-/**
- * Check whether or not the OSC is on MDT.
- * In the config log,
- * osc on MDT
- * setup 0:{fsname}-OSTxxxx-osc[-MDTxxxx] 1:lustre-OST0000_UUID 2:NID
- * osc on client
- * setup 0:{fsname}-OSTxxxx-osc 1:lustre-OST0000_UUID 2:NID
- *
- **/
-static int osc_on_mdt(char *obdname)
-{
- char *ptr;
-
- ptr = strrchr(obdname, '-');
- if (ptr == NULL)
- return 0;
-
- if (strncmp(ptr + 1, "MDT", 3) == 0)
- return 1;
-
- return 0;
-}
-
/* Configure an RPC client OBD device.
*
* lcfg parameters:
@@ -264,11 +242,12 @@
struct obd_uuid server_uuid;
int rq_portal, rp_portal, connect_op;
char *name = obddev->obd_type->typ_name;
- ldlm_ns_type_t ns_type = LDLM_NS_TYPE_UNKNOWN;
+ enum ldlm_ns_type ns_type = LDLM_NS_TYPE_UNKNOWN;
int rc;
/* In a more perfect world, we would hang a ptlrpc_client off of
- * obd_type and just use the values from there. */
+ * obd_type and just use the values from there.
+ */
if (!strcmp(name, LUSTRE_OSC_NAME)) {
rq_portal = OST_REQUEST_PORTAL;
rp_portal = OSC_REPLY_PORTAL;
@@ -284,22 +263,6 @@
cli->cl_sp_me = LUSTRE_SP_CLI;
cli->cl_sp_to = LUSTRE_SP_MDT;
ns_type = LDLM_NS_TYPE_MDC;
- } else if (!strcmp(name, LUSTRE_OSP_NAME)) {
- if (strstr(lustre_cfg_buf(lcfg, 1), "OST") == NULL) {
- /* OSP_on_MDT for other MDTs */
- connect_op = MDS_CONNECT;
- cli->cl_sp_to = LUSTRE_SP_MDT;
- ns_type = LDLM_NS_TYPE_MDC;
- rq_portal = OUT_PORTAL;
- } else {
- /* OSP on MDT for OST */
- connect_op = OST_CONNECT;
- cli->cl_sp_to = LUSTRE_SP_OST;
- ns_type = LDLM_NS_TYPE_OSC;
- rq_portal = OST_REQUEST_PORTAL;
- }
- rp_portal = OSC_REPLY_PORTAL;
- cli->cl_sp_me = LUSTRE_SP_CLI;
} else if (!strcmp(name, LUSTRE_MGC_NAME)) {
rq_portal = MGS_REQUEST_PORTAL;
rp_portal = MGC_REPLY_PORTAL;
@@ -387,7 +350,8 @@
/* This value may be reduced at connect time in
* ptlrpc_connect_interpret() . We initialize it to only
* 1MB until we know what the performance looks like.
- * In the future this should likely be increased. LU-1431 */
+ * In the future this should likely be increased. LU-1431
+ */
cli->cl_max_pages_per_rpc = min_t(int, PTLRPC_MAX_BRW_PAGES,
LNET_MTU >> PAGE_CACHE_SHIFT);
@@ -400,10 +364,7 @@
} else if (totalram_pages >> (20 - PAGE_CACHE_SHIFT) <= 512 /* MB */) {
cli->cl_max_rpcs_in_flight = 4;
} else {
- if (osc_on_mdt(obddev->obd_name))
- cli->cl_max_rpcs_in_flight = MDS_OSC_MAX_RIF_DEFAULT;
- else
- cli->cl_max_rpcs_in_flight = OSC_MAX_RIF_DEFAULT;
+ cli->cl_max_rpcs_in_flight = OSC_MAX_RIF_DEFAULT;
}
rc = ldlm_get_ref();
if (rc) {
@@ -415,7 +376,7 @@
&obddev->obd_ldlm_client);
imp = class_new_import(obddev);
- if (imp == NULL) {
+ if (!imp) {
rc = -ENOENT;
goto err_ldlm;
}
@@ -451,7 +412,7 @@
LDLM_NAMESPACE_CLIENT,
LDLM_NAMESPACE_GREEDY,
ns_type);
- if (obddev->obd_namespace == NULL) {
+ if (!obddev->obd_namespace) {
CERROR("Unable to create client namespace - %s\n",
obddev->obd_name);
rc = -ENOMEM;
@@ -477,7 +438,7 @@
ldlm_namespace_free_post(obddev->obd_namespace);
obddev->obd_namespace = NULL;
- LASSERT(obddev->u.cli.cl_import == NULL);
+ LASSERT(!obddev->u.cli.cl_import);
ldlm_put_ref();
return 0;
@@ -528,7 +489,7 @@
LASSERT(imp->imp_state == LUSTRE_IMP_DISCON);
goto out_ldlm;
}
- LASSERT(*exp != NULL && (*exp)->exp_connection);
+ LASSERT(*exp && (*exp)->exp_connection);
if (data) {
LASSERTF((ocd->ocd_connect_flags & data->ocd_connect_flags) ==
@@ -587,17 +548,19 @@
/* Mark import deactivated now, so we don't try to reconnect if any
* of the cleanup RPCs fails (e.g. LDLM cancel, etc). We don't
- * fully deactivate the import, or that would drop all requests. */
+ * fully deactivate the import, or that would drop all requests.
+ */
spin_lock(&imp->imp_lock);
imp->imp_deactive = 1;
spin_unlock(&imp->imp_lock);
/* Some non-replayable imports (MDS's OSCs) are pinged, so just
* delete it regardless. (It's safe to delete an import that was
- * never added.) */
+ * never added.)
+ */
(void)ptlrpc_pinger_del_import(imp);
- if (obd->obd_namespace != NULL) {
+ if (obd->obd_namespace) {
/* obd_force == local only */
ldlm_cli_cancel_unused(obd->obd_namespace, NULL,
obd->obd_force ? LCF_LOCAL : 0, NULL);
@@ -606,7 +569,8 @@
}
/* There's no need to hold sem while disconnecting an import,
- * and it may actually cause deadlock in GSS. */
+ * and it may actually cause deadlock in GSS.
+ */
up_write(&cli->cl_sem);
rc = ptlrpc_disconnect_import(imp, 0);
down_write(&cli->cl_sem);
@@ -615,7 +579,8 @@
out_disconnect:
/* Use server style - class_disconnect should be always called for
- * o_disconnect. */
+ * o_disconnect.
+ */
err = class_disconnect(exp);
if (!rc && err)
rc = err;
@@ -634,7 +599,8 @@
struct obd_device *obd;
/* Check that we still have all structures alive as this may
- * be some late RPC at shutdown time. */
+ * be some late RPC at shutdown time.
+ */
if (unlikely(!req->rq_export || !req->rq_export->exp_obd ||
!exp_connect_lru_resize(req->rq_export))) {
lustre_msg_set_slv(req->rq_repmsg, 0);
@@ -684,14 +650,14 @@
svcpt = req->rq_rqbd->rqbd_svcpt;
rs = req->rq_reply_state;
- if (rs == NULL || !rs->rs_difficult) {
+ if (!rs || !rs->rs_difficult) {
/* no notifiers */
target_send_reply_msg(req, rc, fail_id);
return;
}
/* must be an export if locks saved */
- LASSERT(req->rq_export != NULL);
+ LASSERT(req->rq_export);
/* req/reply consistent */
LASSERT(rs->rs_svcpt == svcpt);
@@ -700,7 +666,7 @@
LASSERT(!rs->rs_scheduled_ever);
LASSERT(!rs->rs_handled);
LASSERT(!rs->rs_on_net);
- LASSERT(rs->rs_export == NULL);
+ LASSERT(!rs->rs_export);
LASSERT(list_empty(&rs->rs_obd_list));
LASSERT(list_empty(&rs->rs_exp_list));
@@ -739,7 +705,8 @@
* reply ref until ptlrpc_handle_rs() is done
* with the reply state (if the send was successful, there
* would have been +1 ref for the net, which
- * reply_out_callback leaves alone) */
+ * reply_out_callback leaves alone)
+ */
rs->rs_on_net = 0;
ptlrpc_rs_addref(rs);
}
@@ -760,7 +727,7 @@
}
EXPORT_SYMBOL(target_send_reply);
-ldlm_mode_t lck_compat_array[] = {
+enum ldlm_mode lck_compat_array[] = {
[LCK_EX] = LCK_COMPAT_EX,
[LCK_PW] = LCK_COMPAT_PW,
[LCK_PR] = LCK_COMPAT_PR,
@@ -775,7 +742,7 @@
* Rather arbitrary mapping from LDLM error codes to errno values. This should
* not escape to the user level.
*/
-int ldlm_error2errno(ldlm_error_t error)
+int ldlm_error2errno(enum ldlm_error error)
{
int result;
@@ -803,7 +770,7 @@
break;
default:
if (((int)error) < 0) /* cast to signed type */
- result = error; /* as ldlm_error_t can be unsigned */
+ result = error; /* as enum ldlm_error can be unsigned */
else {
CERROR("Invalid DLM result code: %d\n", error);
result = -EPROTO;
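The ldlm_lib.c hunks above, like several later ones, replace the old ldlm typedefs (ldlm_error_t, ldlm_ns_type_t and friends) with named enums. A rough userspace sketch of the pattern follows; all demo_* names are invented for illustration and are not Lustre symbols. The behaviour is identical, only the spelling of the type changes at each use site.

#include <stdio.h>

/* Before: a typedef'd anonymous enum hides the underlying type. */
typedef enum { DEMO_OK = 0, DEMO_REPEAT = 1 } demo_error_t;

/* After: a named enum, written "enum demo_error" at every site,
 * which matches kernel style and lets tools see the real type.
 */
enum demo_error {
	DEMO_ERR_OK	= 0,
	DEMO_ERR_REPEAT	= 1,
};

static int demo_error2errno(enum demo_error error)
{
	switch (error) {
	case DEMO_ERR_OK:
		return 0;
	case DEMO_ERR_REPEAT:
		return -11;	/* roughly -EAGAIN */
	}
	return -71;		/* out of range, roughly -EPROTO */
}

int main(void)
{
	printf("%d\n", demo_error2errno(DEMO_ERR_REPEAT));
	return 0;
}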
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
index cf9ec0c..98975ef 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
@@ -91,7 +91,7 @@
/**
* Converts lock policy from local format to on the wire lock_desc format
*/
-static void ldlm_convert_policy_to_wire(ldlm_type_t type,
+static void ldlm_convert_policy_to_wire(enum ldlm_type type,
const ldlm_policy_data_t *lpolicy,
ldlm_wire_policy_data_t *wpolicy)
{
@@ -105,7 +105,7 @@
/**
* Converts lock policy from on the wire lock_desc format to local format
*/
-void ldlm_convert_policy_to_local(struct obd_export *exp, ldlm_type_t type,
+void ldlm_convert_policy_to_local(struct obd_export *exp, enum ldlm_type type,
const ldlm_wire_policy_data_t *wpolicy,
ldlm_policy_data_t *lpolicy)
{
@@ -326,9 +326,11 @@
if (lock->l_export && lock->l_export->exp_lock_hash) {
/* NB: it's safe to call cfs_hash_del() even lock isn't
- * in exp_lock_hash. */
+ * in exp_lock_hash.
+ */
/* In the function below, .hs_keycmp resolves to
- * ldlm_export_lock_keycmp() */
+ * ldlm_export_lock_keycmp()
+ */
/* coverity[overrun-buffer-val] */
cfs_hash_del(lock->l_export->exp_lock_hash,
&lock->l_remote_handle, &lock->l_exp_hash);
@@ -337,16 +339,6 @@
ldlm_lock_remove_from_lru(lock);
class_handle_unhash(&lock->l_handle);
-#if 0
- /* Wake anyone waiting for this lock */
- /* FIXME: I should probably add yet another flag, instead of using
- * l_export to only call this on clients */
- if (lock->l_export)
- class_export_put(lock->l_export);
- lock->l_export = NULL;
- if (lock->l_export && lock->l_completion_ast)
- lock->l_completion_ast(lock, 0);
-#endif
return 1;
}
@@ -412,11 +404,10 @@
{
struct ldlm_lock *lock;
- if (resource == NULL)
- LBUG();
+ LASSERT(resource);
lock = kmem_cache_alloc(ldlm_lock_slab, GFP_NOFS | __GFP_ZERO);
- if (lock == NULL)
+ if (!lock)
return NULL;
spin_lock_init(&lock->l_lock);
@@ -485,7 +476,7 @@
unlock_res_and_lock(lock);
newres = ldlm_resource_get(ns, NULL, new_resid, type, 1);
- if (newres == NULL)
+ if (!newres)
return -ENOMEM;
lu_ref_add(&newres->lr_reference, "lock", lock);
@@ -547,11 +538,12 @@
LASSERT(handle);
lock = class_handle2object(handle->cookie);
- if (lock == NULL)
+ if (!lock)
return NULL;
/* It's unlikely but possible that someone marked the lock as
- * destroyed after we did handle2object on it */
+ * destroyed after we did handle2object on it
+ */
if (flags == 0 && ((lock->l_flags & LDLM_FL_DESTROYED) == 0)) {
lu_ref_add(&lock->l_reference, "handle", current);
return lock;
@@ -559,7 +551,7 @@
lock_res_and_lock(lock);
- LASSERT(lock->l_resource != NULL);
+ LASSERT(lock->l_resource);
lu_ref_add_atomic(&lock->l_reference, "handle", current);
if (unlikely(lock->l_flags & LDLM_FL_DESTROYED)) {
@@ -611,13 +603,14 @@
LDLM_DEBUG(lock, "lock incompatible; sending blocking AST.");
lock->l_flags |= LDLM_FL_AST_SENT;
/* If the enqueuing client said so, tell the AST recipient to
- * discard dirty data, rather than writing back. */
+ * discard dirty data, rather than writing back.
+ */
if (new->l_flags & LDLM_FL_AST_DISCARD_DATA)
lock->l_flags |= LDLM_FL_DISCARD_DATA;
LASSERT(list_empty(&lock->l_bl_ast));
list_add(&lock->l_bl_ast, work_list);
LDLM_LOCK_GET(lock);
- LASSERT(lock->l_blocking_lock == NULL);
+ LASSERT(!lock->l_blocking_lock);
lock->l_blocking_lock = LDLM_LOCK_GET(new);
}
}
@@ -664,7 +657,7 @@
struct ldlm_lock *lock;
lock = ldlm_handle2lock(lockh);
- LASSERT(lock != NULL);
+ LASSERT(lock);
ldlm_lock_addref_internal(lock, mode);
LDLM_LOCK_PUT(lock);
}
@@ -708,7 +701,7 @@
result = -EAGAIN;
lock = ldlm_handle2lock(lockh);
- if (lock != NULL) {
+ if (lock) {
lock_res_and_lock(lock);
if (lock->l_readers != 0 || lock->l_writers != 0 ||
!(lock->l_flags & LDLM_FL_CBPENDING)) {
@@ -780,7 +773,8 @@
if (lock->l_flags & LDLM_FL_LOCAL &&
!lock->l_readers && !lock->l_writers) {
/* If this is a local lock on a server namespace and this was
- * the last reference, cancel the lock. */
+ * the last reference, cancel the lock.
+ */
CDEBUG(D_INFO, "forcing cancel of local lock\n");
lock->l_flags |= LDLM_FL_CBPENDING;
}
@@ -788,7 +782,8 @@
if (!lock->l_readers && !lock->l_writers &&
(lock->l_flags & LDLM_FL_CBPENDING)) {
/* If we received a blocked AST and this was the last reference,
- * run the callback. */
+ * run the callback.
+ */
LDLM_DEBUG(lock, "final decref done on cbpending lock");
@@ -809,7 +804,8 @@
LDLM_DEBUG(lock, "add lock into lru list");
/* If this is a client-side namespace and this was the last
- * reference, put it on the LRU. */
+ * reference, put it on the LRU.
+ */
ldlm_lock_add_to_lru(lock);
unlock_res_and_lock(lock);
@@ -818,7 +814,8 @@
/* Call ldlm_cancel_lru() only if EARLY_CANCEL and LRU RESIZE
* are not supported by the server, otherwise, it is done on
- * enqueue. */
+ * enqueue.
+ */
if (!exp_connect_cancelset(lock->l_conn_export) &&
!ns_connect_lru_resize(ns))
ldlm_cancel_lru(ns, 0, LCF_ASYNC, 0);
@@ -835,7 +832,7 @@
{
struct ldlm_lock *lock = __ldlm_handle2lock(lockh, 0);
- LASSERTF(lock != NULL, "Non-existing lock: %#llx\n", lockh->cookie);
+ LASSERTF(lock, "Non-existing lock: %#llx\n", lockh->cookie);
ldlm_lock_decref_internal(lock, mode);
LDLM_LOCK_PUT(lock);
}
@@ -852,7 +849,7 @@
{
struct ldlm_lock *lock = __ldlm_handle2lock(lockh, 0);
- LASSERT(lock != NULL);
+ LASSERT(lock);
LDLM_DEBUG(lock, "ldlm_lock_decref(%s)", ldlm_lockname[mode]);
lock_res_and_lock(lock);
@@ -921,7 +918,8 @@
if (lock->l_policy_data.l_inodebits.bits ==
req->l_policy_data.l_inodebits.bits) {
/* insert point is last lock of
- * the policy group */
+ * the policy group
+ */
prev->res_link =
&policy_end->l_res_link;
prev->mode_link =
@@ -942,7 +940,8 @@
} /* loop over policy groups within the mode group */
/* insert point is last lock of the mode group,
- * new policy group is started */
+ * new policy group is started
+ */
prev->res_link = &mode_end->l_res_link;
prev->mode_link = &mode_end->l_sl_mode;
prev->policy_link = &req->l_sl_policy;
@@ -954,7 +953,8 @@
}
/* insert point is last lock on the queue,
- * new mode group and new policy group are started */
+ * new mode group and new policy group are started
+ */
prev->res_link = queue->prev;
prev->mode_link = &req->l_sl_mode;
prev->policy_link = &req->l_sl_policy;
@@ -1034,10 +1034,7 @@
else
ldlm_resource_add_lock(res, &res->lr_granted, lock);
- if (lock->l_granted_mode < res->lr_most_restr)
- res->lr_most_restr = lock->l_granted_mode;
-
- if (work_list && lock->l_completion_ast != NULL)
+ if (work_list && lock->l_completion_ast)
ldlm_add_ast_work_item(lock, NULL, work_list);
ldlm_pool_add(&ldlm_res_to_ns(res)->ns_pool, lock);
@@ -1050,7 +1047,7 @@
* comment above ldlm_lock_match
*/
static struct ldlm_lock *search_queue(struct list_head *queue,
- ldlm_mode_t *mode,
+ enum ldlm_mode *mode,
ldlm_policy_data_t *policy,
struct ldlm_lock *old_lock,
__u64 flags, int unref)
@@ -1059,7 +1056,7 @@
struct list_head *tmp;
list_for_each(tmp, queue) {
- ldlm_mode_t match;
+ enum ldlm_mode match;
lock = list_entry(tmp, struct ldlm_lock, l_res_link);
@@ -1067,7 +1064,8 @@
break;
/* Check if this lock can be matched.
- * Used by LU-2919(exclusive open) for open lease lock */
+ * Used by LU-2919 (exclusive open) for open lease lock
+ */
if (ldlm_is_excl(lock))
continue;
@@ -1076,7 +1074,8 @@
* if it passes in CBPENDING and the lock still has users.
* this is generally only going to be used by children
* whose parents already hold a lock so forward progress
- * can still happen. */
+ * can still happen.
+ */
if (lock->l_flags & LDLM_FL_CBPENDING &&
!(flags & LDLM_FL_CBPENDING))
continue;
@@ -1100,7 +1099,8 @@
continue;
/* We match if we have existing lock with same or wider set
- of bits. */
+ * of bits.
+ */
if (lock->l_resource->lr_type == LDLM_IBITS &&
((lock->l_policy_data.l_inodebits.bits &
policy->l_inodebits.bits) !=
@@ -1192,16 +1192,18 @@
* keep caller code unchanged), the context failure will be discovered by
* caller sometime later.
*/
-ldlm_mode_t ldlm_lock_match(struct ldlm_namespace *ns, __u64 flags,
- const struct ldlm_res_id *res_id, ldlm_type_t type,
- ldlm_policy_data_t *policy, ldlm_mode_t mode,
- struct lustre_handle *lockh, int unref)
+enum ldlm_mode ldlm_lock_match(struct ldlm_namespace *ns, __u64 flags,
+ const struct ldlm_res_id *res_id,
+ enum ldlm_type type,
+ ldlm_policy_data_t *policy,
+ enum ldlm_mode mode,
+ struct lustre_handle *lockh, int unref)
{
struct ldlm_resource *res;
struct ldlm_lock *lock, *old_lock = NULL;
int rc = 0;
- if (ns == NULL) {
+ if (!ns) {
old_lock = ldlm_handle2lock(lockh);
LASSERT(old_lock);
@@ -1212,8 +1214,8 @@
}
res = ldlm_resource_get(ns, NULL, res_id, type, 0);
- if (res == NULL) {
- LASSERT(old_lock == NULL);
+ if (!res) {
+ LASSERT(!old_lock);
return 0;
}
@@ -1222,7 +1224,7 @@
lock = search_queue(&res->lr_granted, &mode, policy, old_lock,
flags, unref);
- if (lock != NULL) {
+ if (lock) {
rc = 1;
goto out;
}
@@ -1232,7 +1234,7 @@
}
lock = search_queue(&res->lr_waiting, &mode, policy, old_lock,
flags, unref);
- if (lock != NULL) {
+ if (lock) {
rc = 1;
goto out;
}
@@ -1317,14 +1319,14 @@
}
EXPORT_SYMBOL(ldlm_lock_match);
-ldlm_mode_t ldlm_revalidate_lock_handle(struct lustre_handle *lockh,
- __u64 *bits)
+enum ldlm_mode ldlm_revalidate_lock_handle(struct lustre_handle *lockh,
+ __u64 *bits)
{
struct ldlm_lock *lock;
- ldlm_mode_t mode = 0;
+ enum ldlm_mode mode = 0;
lock = ldlm_handle2lock(lockh);
- if (lock != NULL) {
+ if (lock) {
lock_res_and_lock(lock);
if (lock->l_flags & LDLM_FL_GONE_MASK)
goto out;
@@ -1340,7 +1342,7 @@
}
out:
- if (lock != NULL) {
+ if (lock) {
unlock_res_and_lock(lock);
LDLM_LOCK_PUT(lock);
}
@@ -1354,7 +1356,7 @@
{
void *lvb;
- LASSERT(data != NULL);
+ LASSERT(data);
LASSERT(size >= 0);
switch (lock->l_lvb_type) {
@@ -1368,7 +1370,7 @@
lvb = req_capsule_server_swab_get(pill,
&RMF_DLM_LVB,
lustre_swab_ost_lvb);
- if (unlikely(lvb == NULL)) {
+ if (unlikely(!lvb)) {
LDLM_ERROR(lock, "no LVB");
return -EPROTO;
}
@@ -1385,7 +1387,7 @@
lvb = req_capsule_server_sized_swab_get(pill,
&RMF_DLM_LVB, size,
lustre_swab_ost_lvb_v1);
- if (unlikely(lvb == NULL)) {
+ if (unlikely(!lvb)) {
LDLM_ERROR(lock, "no LVB");
return -EPROTO;
}
@@ -1410,7 +1412,7 @@
lvb = req_capsule_server_swab_get(pill,
&RMF_DLM_LVB,
lustre_swab_lquota_lvb);
- if (unlikely(lvb == NULL)) {
+ if (unlikely(!lvb)) {
LDLM_ERROR(lock, "no LVB");
return -EPROTO;
}
@@ -1431,7 +1433,7 @@
lvb = req_capsule_client_get(pill, &RMF_DLM_LVB);
else
lvb = req_capsule_server_get(pill, &RMF_DLM_LVB);
- if (unlikely(lvb == NULL)) {
+ if (unlikely(!lvb)) {
LDLM_ERROR(lock, "no LVB");
return -EPROTO;
}
@@ -1453,8 +1455,8 @@
*/
struct ldlm_lock *ldlm_lock_create(struct ldlm_namespace *ns,
const struct ldlm_res_id *res_id,
- ldlm_type_t type,
- ldlm_mode_t mode,
+ enum ldlm_type type,
+ enum ldlm_mode mode,
const struct ldlm_callback_suite *cbs,
void *data, __u32 lvb_len,
enum lvb_type lvb_type)
@@ -1463,12 +1465,12 @@
struct ldlm_resource *res;
res = ldlm_resource_get(ns, NULL, res_id, type, 1);
- if (res == NULL)
+ if (!res)
return NULL;
lock = ldlm_lock_new(res);
- if (lock == NULL)
+ if (!lock)
return NULL;
lock->l_req_mode = mode;
@@ -1483,7 +1485,7 @@
lock->l_tree_node = NULL;
/* if this is the extent lock, allocate the interval tree node */
if (type == LDLM_EXTENT) {
- if (ldlm_interval_alloc(lock) == NULL)
+ if (!ldlm_interval_alloc(lock))
goto out;
}
@@ -1514,9 +1516,9 @@
* Does not block. As a result of enqueue the lock would be put
* into granted or waiting list.
*/
-ldlm_error_t ldlm_lock_enqueue(struct ldlm_namespace *ns,
- struct ldlm_lock **lockp,
- void *cookie, __u64 *flags)
+enum ldlm_error ldlm_lock_enqueue(struct ldlm_namespace *ns,
+ struct ldlm_lock **lockp,
+ void *cookie, __u64 *flags)
{
struct ldlm_lock *lock = *lockp;
struct ldlm_resource *res = lock->l_resource;
@@ -1527,7 +1529,8 @@
if (lock->l_req_mode == lock->l_granted_mode) {
/* The server returned a blocked lock, but it was granted
* before we got a chance to actually enqueue it. We don't
- * need to do anything else. */
+ * need to do anything else.
+ */
*flags &= ~(LDLM_FL_BLOCK_GRANTED |
LDLM_FL_BLOCK_CONV | LDLM_FL_BLOCK_WAIT);
goto out;
@@ -1540,7 +1543,8 @@
LBUG();
/* Some flags from the enqueue want to make it into the AST, via the
- * lock's l_flags. */
+ * lock's l_flags.
+ */
lock->l_flags |= *flags & LDLM_FL_AST_DISCARD_DATA;
/*
@@ -1621,19 +1625,21 @@
* This can't happen with the blocking_ast, however, because we
* will never call the local blocking_ast until we drop our
* reader/writer reference, which we won't do until we get the
- * reply and finish enqueueing. */
+ * reply and finish enqueueing.
+ */
/* nobody should touch l_cp_ast */
lock_res_and_lock(lock);
list_del_init(&lock->l_cp_ast);
LASSERT(lock->l_flags & LDLM_FL_CP_REQD);
/* save l_completion_ast since it can be changed by
- * mds_intent_policy(), see bug 14225 */
+ * mds_intent_policy(), see bug 14225
+ */
completion_callback = lock->l_completion_ast;
lock->l_flags &= ~LDLM_FL_CP_REQD;
unlock_res_and_lock(lock);
- if (completion_callback != NULL)
+ if (completion_callback)
rc = completion_callback(lock, 0, (void *)arg);
LDLM_LOCK_RELEASE(lock);
@@ -1749,10 +1755,11 @@
/* We create a ptlrpc request set with flow control extension.
* This request set will use the work_ast_lock function to produce new
* requests and will send a new request each time one completes in order
- * to keep the number of requests in flight to ns_max_parallel_ast */
+ * to keep the number of requests in flight to ns_max_parallel_ast
+ */
arg->set = ptlrpc_prep_fcset(ns->ns_max_parallel_ast ? : UINT_MAX,
work_ast_lock, arg);
- if (arg->set == NULL) {
+ if (!arg->set) {
rc = -ENOMEM;
goto out;
}
@@ -1815,7 +1822,8 @@
ns = ldlm_res_to_ns(res);
/* Please do not, no matter how tempting, remove this LBUG without
- * talking to me first. -phik */
+ * talking to me first. -phik
+ */
if (lock->l_readers || lock->l_writers) {
LDLM_ERROR(lock, "lock still has references");
LBUG();
@@ -1831,7 +1839,8 @@
ldlm_pool_del(&ns->ns_pool, lock);
/* Make sure we will not be called again for same lock what is possible
- * if not to zero out lock->l_granted_mode */
+ * if not to zero out lock->l_granted_mode
+ */
lock->l_granted_mode = LCK_MINMODE;
unlock_res_and_lock(lock);
}
@@ -1846,7 +1855,7 @@
int rc = -EINVAL;
if (lock) {
- if (lock->l_ast_data == NULL)
+ if (!lock->l_ast_data)
lock->l_ast_data = data;
if (lock->l_ast_data == data)
rc = 0;
@@ -1874,7 +1883,7 @@
return;
lock = ldlm_handle2lock(lockh);
- if (lock == NULL)
+ if (!lock)
return;
LDLM_DEBUG_LIMIT(level, lock, "###");
@@ -1900,13 +1909,13 @@
if (exp && exp->exp_connection) {
nid = libcfs_nid2str(exp->exp_connection->c_peer.nid);
- } else if (exp && exp->exp_obd != NULL) {
+ } else if (exp && exp->exp_obd) {
struct obd_import *imp = exp->exp_obd->u.cli.cl_import;
nid = libcfs_nid2str(imp->imp_connection->c_peer.nid);
}
- if (resource == NULL) {
+ if (!resource) {
libcfs_debug_vmsg2(msgdata, fmt, args,
" ns: \?\? lock: %p/%#llx lrc: %d/%d,%d mode: %s/%s res: \?\? rrc=\?\? type: \?\?\? flags: %#llx nid: %s remote: %#llx expref: %d pid: %u timeout: %lu lvb_type: %d\n",
lock,
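Most of the ldlm_lock.c churn above is the checkpatch pointer-test cleanup: "ptr == NULL" becomes "!ptr" and "ptr != NULL" becomes plain "ptr", including assignments used as conditions. A minimal userspace sketch, again with invented demo_* names and nothing Lustre-specific:

#include <stdlib.h>
#include <stdio.h>

static int *demo_get_work(void)
{
	return malloc(sizeof(int));
}

int main(void)
{
	int *blwi;

	/* Before: blwi = demo_get_work(); if (blwi == NULL) ... */
	blwi = demo_get_work();
	if (!blwi)		/* after: idiomatic kernel style */
		return 1;

	*blwi = 7;
	printf("%d\n", *blwi);
	free(blwi);
	return 0;
}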
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
index 79aeb2bf..ebe9042 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
@@ -107,7 +107,7 @@
struct list_head blwi_head;
int blwi_count;
struct completion blwi_comp;
- ldlm_cancel_flags_t blwi_flags;
+ enum ldlm_cancel_flags blwi_flags;
int blwi_mem_pressure;
};
@@ -136,7 +136,7 @@
CDEBUG(D_DLMTRACE,
"Lock %p already unused, calling callback (%p)\n", lock,
lock->l_blocking_ast);
- if (lock->l_blocking_ast != NULL)
+ if (lock->l_blocking_ast)
lock->l_blocking_ast(lock, ld, lock->l_ast_data,
LDLM_CB_BLOCKING);
} else {
@@ -185,7 +185,7 @@
} else if (lvb_len > 0) {
if (lock->l_lvb_len > 0) {
/* for extent lock, lvb contains ost_lvb{}. */
- LASSERT(lock->l_lvb_data != NULL);
+ LASSERT(lock->l_lvb_data);
if (unlikely(lock->l_lvb_len < lvb_len)) {
LDLM_ERROR(lock, "Replied LVB is larger than expectation, expected = %d, replied = %d",
@@ -194,7 +194,8 @@
goto out;
}
} else if (ldlm_has_layout(lock)) { /* for layout lock, lvb has
- * variable length */
+ * variable length
+ */
void *lvb_data;
lvb_data = kzalloc(lvb_len, GFP_NOFS);
@@ -205,7 +206,7 @@
}
lock_res_and_lock(lock);
- LASSERT(lock->l_lvb_data == NULL);
+ LASSERT(!lock->l_lvb_data);
lock->l_lvb_type = LVB_T_LAYOUT;
lock->l_lvb_data = lvb_data;
lock->l_lvb_len = lvb_len;
@@ -224,7 +225,8 @@
}
/* If we receive the completion AST before the actual enqueue returned,
- * then we might need to switch lock modes, resources, or extents. */
+ * then we might need to switch lock modes, resources, or extents.
+ */
if (dlm_req->lock_desc.l_granted_mode != lock->l_req_mode) {
lock->l_req_mode = dlm_req->lock_desc.l_granted_mode;
LDLM_DEBUG(lock, "completion AST, new lock mode");
@@ -256,7 +258,8 @@
if (dlm_req->lock_flags & LDLM_FL_AST_SENT) {
/* BL_AST locks are not needed in LRU.
- * Let ldlm_cancel_lru() be fast. */
+ * Let ldlm_cancel_lru() be fast.
+ */
ldlm_lock_remove_from_lru(lock);
lock->l_flags |= LDLM_FL_CBPENDING | LDLM_FL_BL_AST;
LDLM_DEBUG(lock, "completion AST includes blocking AST");
@@ -276,8 +279,7 @@
LDLM_DEBUG(lock, "callback handler finished, about to run_ast_work");
- /* Let Enqueue to call osc_lock_upcall() and initialize
- * l_ast_data */
+ /* Let Enqueue call osc_lock_upcall() and initialize l_ast_data */
OBD_FAIL_TIMEOUT(OBD_FAIL_OSC_CP_ENQ_RACE, 2);
ldlm_run_ast_work(ns, &ast_list, LDLM_WORK_CP_AST);
@@ -312,10 +314,10 @@
LDLM_DEBUG(lock, "client glimpse AST callback handler");
- if (lock->l_glimpse_ast != NULL)
+ if (lock->l_glimpse_ast)
rc = lock->l_glimpse_ast(lock, req);
- if (req->rq_repmsg != NULL) {
+ if (req->rq_repmsg) {
ptlrpc_reply(req);
} else {
req->rq_status = rc;
@@ -353,7 +355,7 @@
}
static int __ldlm_bl_to_thread(struct ldlm_bl_work_item *blwi,
- ldlm_cancel_flags_t cancel_flags)
+ enum ldlm_cancel_flags cancel_flags)
{
struct ldlm_bl_pool *blp = ldlm_state->ldlm_bl_pool;
@@ -371,7 +373,8 @@
wake_up(&blp->blp_waitq);
/* can not check blwi->blwi_flags as blwi could be already freed in
- LCF_ASYNC mode */
+ * LCF_ASYNC mode
+ */
if (!(cancel_flags & LCF_ASYNC))
wait_for_completion(&blwi->blwi_comp);
@@ -383,7 +386,7 @@
struct ldlm_lock_desc *ld,
struct list_head *cancels, int count,
struct ldlm_lock *lock,
- ldlm_cancel_flags_t cancel_flags)
+ enum ldlm_cancel_flags cancel_flags)
{
init_completion(&blwi->blwi_comp);
INIT_LIST_HEAD(&blwi->blwi_head);
@@ -393,7 +396,7 @@
blwi->blwi_ns = ns;
blwi->blwi_flags = cancel_flags;
- if (ld != NULL)
+ if (ld)
blwi->blwi_ld = *ld;
if (count) {
list_add(&blwi->blwi_head, cancels);
@@ -417,7 +420,7 @@
struct ldlm_lock_desc *ld,
struct ldlm_lock *lock,
struct list_head *cancels, int count,
- ldlm_cancel_flags_t cancel_flags)
+ enum ldlm_cancel_flags cancel_flags)
{
if (cancels && count == 0)
return 0;
@@ -451,7 +454,7 @@
int ldlm_bl_to_thread_list(struct ldlm_namespace *ns, struct ldlm_lock_desc *ld,
struct list_head *cancels, int count,
- ldlm_cancel_flags_t cancel_flags)
+ enum ldlm_cancel_flags cancel_flags)
{
return ldlm_bl_to_thread(ns, ld, NULL, cancels, count, cancel_flags);
}
@@ -470,14 +473,14 @@
req_capsule_set(&req->rq_pill, &RQF_OBD_SET_INFO);
key = req_capsule_client_get(&req->rq_pill, &RMF_SETINFO_KEY);
- if (key == NULL) {
+ if (!key) {
DEBUG_REQ(D_IOCTL, req, "no set_info key");
return -EFAULT;
}
keylen = req_capsule_get_size(&req->rq_pill, &RMF_SETINFO_KEY,
RCL_CLIENT);
val = req_capsule_client_get(&req->rq_pill, &RMF_SETINFO_VAL);
- if (val == NULL) {
+ if (!val) {
DEBUG_REQ(D_IOCTL, req, "no set_info val");
return -EFAULT;
}
@@ -519,7 +522,7 @@
struct client_obd *cli = &req->rq_export->exp_obd->u.cli;
oqctl = req_capsule_client_get(&req->rq_pill, &RMF_OBD_QUOTACTL);
- if (oqctl == NULL) {
+ if (!oqctl) {
CERROR("Can't unpack obd_quotactl\n");
return -EPROTO;
}
@@ -541,7 +544,8 @@
/* Requests arrive in sender's byte order. The ptlrpc service
* handler has already checked and, if necessary, byte-swapped the
* incoming request message body, but I am responsible for the
- * message buffers. */
+ * message buffers.
+ */
/* do nothing for sec context finalize */
if (lustre_msg_get_opc(req->rq_reqmsg) == SEC_CTX_FINI)
@@ -549,15 +553,14 @@
req_capsule_init(&req->rq_pill, req, RCL_SERVER);
- if (req->rq_export == NULL) {
+ if (!req->rq_export) {
rc = ldlm_callback_reply(req, -ENOTCONN);
ldlm_callback_errmsg(req, "Operate on unconnected server",
rc, NULL);
return 0;
}
- LASSERT(req->rq_export != NULL);
- LASSERT(req->rq_export->exp_obd != NULL);
+ LASSERT(req->rq_export->exp_obd);
switch (lustre_msg_get_opc(req->rq_reqmsg)) {
case LDLM_BL_CALLBACK:
@@ -591,12 +594,12 @@
}
ns = req->rq_export->exp_obd->obd_namespace;
- LASSERT(ns != NULL);
+ LASSERT(ns);
req_capsule_set(&req->rq_pill, &RQF_LDLM_CALLBACK);
dlm_req = req_capsule_client_get(&req->rq_pill, &RMF_DLM_REQ);
- if (dlm_req == NULL) {
+ if (!dlm_req) {
rc = ldlm_callback_reply(req, -EPROTO);
ldlm_callback_errmsg(req, "Operate without parameter", rc,
NULL);
@@ -604,7 +607,8 @@
}
/* Force a known safe race, send a cancel to the server for a lock
- * which the server has already started a blocking callback on. */
+ * which the server has already started a blocking callback on.
+ */
if (OBD_FAIL_CHECK(OBD_FAIL_LDLM_CANCEL_BL_CB_RACE) &&
lustre_msg_get_opc(req->rq_reqmsg) == LDLM_BL_CALLBACK) {
rc = ldlm_cli_cancel(&dlm_req->lock_handle[0], 0);
@@ -634,7 +638,8 @@
/* If somebody cancels lock and cache is already dropped,
* or lock is failed before cp_ast received on client,
* we can tell the server we have no lock. Otherwise, we
- * should send cancel after dropping the cache. */
+ * should send cancel after dropping the cache.
+ */
if (((lock->l_flags & LDLM_FL_CANCELING) &&
(lock->l_flags & LDLM_FL_BL_DONE)) ||
(lock->l_flags & LDLM_FL_FAILED)) {
@@ -648,7 +653,8 @@
return 0;
}
/* BL_AST locks are not needed in LRU.
- * Let ldlm_cancel_lru() be fast. */
+ * Let ldlm_cancel_lru() be fast.
+ */
ldlm_lock_remove_from_lru(lock);
lock->l_flags |= LDLM_FL_BL_AST;
}
@@ -661,7 +667,8 @@
* But we'd also like to be able to indicate in the reply that we're
* cancelling right now, because it's unused, or have an intent result
* in the reply, so we might have to push the responsibility for sending
- * the reply down into the AST handlers, alas. */
+ * the reply down into the AST handlers, alas.
+ */
switch (lustre_msg_get_opc(req->rq_reqmsg)) {
case LDLM_BL_CALLBACK:
@@ -781,17 +788,17 @@
blwi = ldlm_bl_get_work(blp);
- if (blwi == NULL) {
+ if (!blwi) {
atomic_dec(&blp->blp_busy_threads);
l_wait_event_exclusive(blp->blp_waitq,
- (blwi = ldlm_bl_get_work(blp)) != NULL,
+ (blwi = ldlm_bl_get_work(blp)),
&lwi);
busy = atomic_inc_return(&blp->blp_busy_threads);
} else {
busy = atomic_read(&blp->blp_busy_threads);
}
- if (blwi->blwi_ns == NULL)
+ if (!blwi->blwi_ns)
/* added by ldlm_cleanup() */
break;
@@ -810,7 +817,8 @@
/* The special case when we cancel locks in LRU
* asynchronously, we pass the list of locks here.
* Thus locks are marked LDLM_FL_CANCELING, but NOT
- * canceled locally yet. */
+ * canceled locally yet.
+ */
count = ldlm_cli_cancel_list_local(&blwi->blwi_head,
blwi->blwi_count,
LCF_BL_AST);
@@ -915,7 +923,7 @@
int rc = 0;
int i;
- if (ldlm_state != NULL)
+ if (ldlm_state)
return -EALREADY;
ldlm_state = kzalloc(sizeof(*ldlm_state), GFP_NOFS);
@@ -1040,7 +1048,7 @@
ldlm_pools_fini();
- if (ldlm_state->ldlm_bl_pool != NULL) {
+ if (ldlm_state->ldlm_bl_pool) {
struct ldlm_bl_pool *blp = ldlm_state->ldlm_bl_pool;
while (atomic_read(&blp->blp_num_threads) > 0) {
@@ -1059,7 +1067,7 @@
kfree(blp);
}
- if (ldlm_state->ldlm_cb_service != NULL)
+ if (ldlm_state->ldlm_cb_service)
ptlrpc_unregister_service(ldlm_state->ldlm_cb_service);
if (ldlm_ns_kset)
@@ -1085,13 +1093,13 @@
ldlm_resource_slab = kmem_cache_create("ldlm_resources",
sizeof(struct ldlm_resource), 0,
SLAB_HWCACHE_ALIGN, NULL);
- if (ldlm_resource_slab == NULL)
+ if (!ldlm_resource_slab)
return -ENOMEM;
ldlm_lock_slab = kmem_cache_create("ldlm_locks",
sizeof(struct ldlm_lock), 0,
SLAB_HWCACHE_ALIGN | SLAB_DESTROY_BY_RCU, NULL);
- if (ldlm_lock_slab == NULL) {
+ if (!ldlm_lock_slab) {
kmem_cache_destroy(ldlm_resource_slab);
return -ENOMEM;
}
@@ -1099,7 +1107,7 @@
ldlm_interval_slab = kmem_cache_create("interval_node",
sizeof(struct ldlm_interval),
0, SLAB_HWCACHE_ALIGN, NULL);
- if (ldlm_interval_slab == NULL) {
+ if (!ldlm_interval_slab) {
kmem_cache_destroy(ldlm_resource_slab);
kmem_cache_destroy(ldlm_lock_slab);
return -ENOMEM;
@@ -1117,7 +1125,8 @@
kmem_cache_destroy(ldlm_resource_slab);
/* ldlm_lock_put() use RCU to call ldlm_lock_free, so need call
* synchronize_rcu() to wait a grace period elapsed, so that
- * ldlm_lock_free() get a chance to be called. */
+ * ldlm_lock_free() get a chance to be called.
+ */
synchronize_rcu();
kmem_cache_destroy(ldlm_lock_slab);
kmem_cache_destroy(ldlm_interval_slab);
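The comment reflows in ldlm_lockd.c (and throughout this series) move the closing */ onto its own line, the style preferred for staging and networking code. A compilable toy showing both forms; demo_noop is invented:

/* Before (old style, terminator trailing the last text line):
 *	some explanation that spans
 *	more than one line. */

/* After (preferred here, terminator on its own line):
 *	some explanation that spans
 *	more than one line.
 */
static int demo_noop(void)
{
	return 0;
}

int main(void)
{
	return demo_noop();
}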
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
index 3d7c137..3e937b0 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
@@ -246,7 +246,6 @@
*/
obd = container_of(pl, struct ldlm_namespace,
ns_pool)->ns_obd;
- LASSERT(obd != NULL);
read_lock(&obd->obd_pool_lock);
pl->pl_server_lock_volume = obd->obd_pool_slv;
atomic_set(&pl->pl_limit, obd->obd_pool_limit);
@@ -381,7 +380,7 @@
spin_unlock(&pl->pl_lock);
recalc:
- if (pl->pl_ops->po_recalc != NULL) {
+ if (pl->pl_ops->po_recalc) {
count = pl->pl_ops->po_recalc(pl);
lprocfs_counter_add(pl->pl_stats, LDLM_POOL_RECALC_STAT,
count);
@@ -409,7 +408,7 @@
{
int cancel = 0;
- if (pl->pl_ops->po_shrink != NULL) {
+ if (pl->pl_ops->po_shrink) {
cancel = pl->pl_ops->po_shrink(pl, nr, gfp_mask);
if (nr > 0) {
lprocfs_counter_add(pl->pl_stats,
@@ -643,11 +642,11 @@
static void ldlm_pool_debugfs_fini(struct ldlm_pool *pl)
{
- if (pl->pl_stats != NULL) {
+ if (pl->pl_stats) {
lprocfs_free_stats(&pl->pl_stats);
pl->pl_stats = NULL;
}
- if (pl->pl_debugfs_entry != NULL) {
+ if (pl->pl_debugfs_entry) {
ldebugfs_remove(&pl->pl_debugfs_entry);
pl->pl_debugfs_entry = NULL;
}
@@ -834,7 +833,7 @@
continue;
}
- if (ns_old == NULL)
+ if (!ns_old)
ns_old = ns;
ldlm_namespace_get(ns);
@@ -957,7 +956,7 @@
continue;
}
- if (ns_old == NULL)
+ if (!ns_old)
ns_old = ns;
spin_lock(&ns->ns_lock);
@@ -1040,7 +1039,7 @@
struct l_wait_info lwi = { 0 };
struct task_struct *task;
- if (ldlm_pools_thread != NULL)
+ if (ldlm_pools_thread)
return -EALREADY;
ldlm_pools_thread = kzalloc(sizeof(*ldlm_pools_thread), GFP_NOFS);
@@ -1065,7 +1064,7 @@
static void ldlm_pools_thread_stop(void)
{
- if (ldlm_pools_thread == NULL)
+ if (!ldlm_pools_thread)
return;
thread_set_flags(ldlm_pools_thread, SVC_STOPPING);
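ldlm_pool.c additionally drops LASSERT(ptr != NULL) checks that immediately precede an unconditional dereference; they add nothing, since the dereference would oops just as loudly. A sketch under that assumption, with invented names:

struct demo_pool {
	int limit;
};

struct demo_obd {
	struct demo_pool pool;
};

static int demo_pool_limit(struct demo_obd *obd)
{
	/* Before:
	 *	LASSERT(obd != NULL);
	 *	return obd->pool.limit;
	 * The assert is redundant: the next line dereferences obd
	 * unconditionally, so a NULL pointer fails either way.
	 */
	return obd->pool.limit;
}

int main(void)
{
	struct demo_obd obd = { .pool = { .limit = 4 } };

	return demo_pool_limit(&obd) == 4 ? 0 : 1;
}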
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
index b9eb377..2d501a6 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
@@ -94,7 +94,7 @@
struct obd_import *imp;
struct obd_device *obd;
- if (lock->l_conn_export == NULL) {
+ if (!lock->l_conn_export) {
static unsigned long next_dump, last_dump;
LCONSOLE_WARN("lock timed out (enqueued at %lld, %llds ago)\n",
@@ -128,7 +128,8 @@
}
/* We use the same basis for both server side and client side functions
- from a single node. */
+ * from a single node.
+ */
static int ldlm_get_enq_timeout(struct ldlm_lock *lock)
{
int timeout = at_get(ldlm_lock_to_ns_at(lock));
@@ -136,8 +137,9 @@
if (AT_OFF)
return obd_timeout / 2;
/* Since these are non-updating timeouts, we should be conservative.
- It would be nice to have some kind of "early reply" mechanism for
- lock callbacks too... */
+ * It would be nice to have some kind of "early reply" mechanism for
+ * lock callbacks too...
+ */
timeout = min_t(int, at_max, timeout + (timeout >> 1)); /* 150% */
return max(timeout, ldlm_enqueue_min);
}
@@ -239,12 +241,13 @@
obd = class_exp2obd(lock->l_conn_export);
/* if this is a local lock, then there is no import */
- if (obd != NULL)
+ if (obd)
imp = obd->u.cli.cl_import;
/* Wait a long time for enqueue - server may have to callback a
- lock from another client. Server will evict the other client if it
- doesn't respond reasonably, and then give us the lock. */
+ * lock from another client. Server will evict the other client if it
+ * doesn't respond reasonably, and then give us the lock.
+ */
timeout = ldlm_get_enq_timeout(lock) * 2;
lwd.lwd_lock = lock;
@@ -258,7 +261,7 @@
interrupted_completion_wait, &lwd);
}
- if (imp != NULL) {
+ if (imp) {
spin_lock(&imp->imp_lock);
lwd.lwd_conn_cnt = imp->imp_conn_cnt;
spin_unlock(&imp->imp_lock);
@@ -296,7 +299,8 @@
!(lock->l_flags & LDLM_FL_FAILED)) {
/* Make sure that this lock will not be found by raced
* bl_ast and -EINVAL reply is sent to server anyways.
- * bug 17645 */
+ * bug 17645
+ */
lock->l_flags |= LDLM_FL_LOCAL_ONLY | LDLM_FL_FAILED |
LDLM_FL_ATOMIC_CB | LDLM_FL_CBPENDING;
need_cancel = 1;
@@ -312,11 +316,13 @@
ldlm_lock_decref_internal(lock, mode);
/* XXX - HACK because we shouldn't call ldlm_lock_destroy()
- * from llite/file.c/ll_file_flock(). */
+ * from llite/file.c/ll_file_flock().
+ */
/* This code makes for the fact that we do not have blocking handler on
* a client for flock locks. As such this is the place where we must
* completely kill failed locks. (interrupted and those that
- * were waiting to be granted when server evicted us. */
+ * were waiting to be granted when server evicted us.)
+ */
if (lock->l_resource->lr_type == LDLM_FLOCK) {
lock_res_and_lock(lock);
ldlm_resource_unlink_lock(lock);
@@ -331,7 +337,8 @@
* Called after receiving reply from server.
*/
int ldlm_cli_enqueue_fini(struct obd_export *exp, struct ptlrpc_request *req,
- ldlm_type_t type, __u8 with_policy, ldlm_mode_t mode,
+ enum ldlm_type type, __u8 with_policy,
+ enum ldlm_mode mode,
__u64 *flags, void *lvb, __u32 lvb_len,
struct lustre_handle *lockh, int rc)
{
@@ -363,13 +370,13 @@
/* Before we return, swab the reply */
reply = req_capsule_server_get(&req->rq_pill, &RMF_DLM_REP);
- if (reply == NULL) {
+ if (!reply) {
rc = -EPROTO;
goto cleanup;
}
if (lvb_len != 0) {
- LASSERT(lvb != NULL);
+ LASSERT(lvb);
size = req_capsule_get_size(&req->rq_pill, &RMF_DLM_LVB,
RCL_SERVER);
@@ -401,7 +408,8 @@
/* Key change rehash lock in per-export hash with new key */
if (exp->exp_lock_hash) {
/* In the function below, .hs_keycmp resolves to
- * ldlm_export_lock_keycmp() */
+ * ldlm_export_lock_keycmp()
+ */
/* coverity[overrun-buffer-val] */
cfs_hash_rehash_key(exp->exp_lock_hash,
&lock->l_remote_handle,
@@ -415,7 +423,8 @@
lock->l_flags |= ldlm_flags_from_wire(reply->lock_flags &
LDLM_INHERIT_FLAGS);
/* move NO_TIMEOUT flag to the lock to force ldlm_lock_match()
- * to wait with no timeout as well */
+ * to wait with no timeout as well
+ */
lock->l_flags |= ldlm_flags_from_wire(reply->lock_flags &
LDLM_FL_NO_TIMEOUT);
unlock_res_and_lock(lock);
@@ -425,7 +434,8 @@
/* If enqueue returned a blocked lock but the completion handler has
* already run, then it fixed up the resource and we don't need to do it
- * again. */
+ * again.
+ */
if ((*flags) & LDLM_FL_LOCK_CHANGED) {
int newmode = reply->lock_desc.l_req_mode;
@@ -445,7 +455,7 @@
rc = ldlm_lock_change_resource(ns, lock,
&reply->lock_desc.l_resource.lr_name);
- if (rc || lock->l_resource == NULL) {
+ if (rc || !lock->l_resource) {
rc = -ENOMEM;
goto cleanup;
}
@@ -467,7 +477,8 @@
if ((*flags) & LDLM_FL_AST_SENT ||
/* Cancel extent locks as soon as possible on a liblustre client,
* because it cannot handle asynchronous ASTs robustly (see
- * bug 7311). */
+ * bug 7311).
+ */
(LIBLUSTRE_CLIENT && type == LDLM_EXTENT)) {
lock_res_and_lock(lock);
lock->l_flags |= LDLM_FL_CBPENDING | LDLM_FL_BL_AST;
@@ -476,12 +487,14 @@
}
/* If the lock has already been granted by a completion AST, don't
- * clobber the LVB with an older one. */
+ * clobber the LVB with an older one.
+ */
if (lvb_len != 0) {
/* We must lock or a racing completion might update lvb without
* letting us know and we'll clobber the correct value.
- * Cannot unlock after the check either, a that still leaves
- * a tiny window for completion to get in */
+ * Cannot unlock after the check either, as that still leaves
+ * a tiny window for completion to get in
+ */
lock_res_and_lock(lock);
if (lock->l_req_mode != lock->l_granted_mode)
rc = ldlm_fill_lvb(lock, &req->rq_pill, RCL_SERVER,
@@ -495,7 +508,7 @@
if (!is_replay) {
rc = ldlm_lock_enqueue(ns, &lock, NULL, flags);
- if (lock->l_completion_ast != NULL) {
+ if (lock->l_completion_ast) {
int err = lock->l_completion_ast(lock, *flags, NULL);
if (!rc)
@@ -505,9 +518,10 @@
}
}
- if (lvb_len && lvb != NULL) {
+ if (lvb_len && lvb) {
/* Copy the LVB here, and not earlier, because the completion
- * AST (if any) can override what we got in the reply */
+ * AST (if any) can override what we got in the reply
+ */
memcpy(lvb, lock->l_lvb_data, lvb_len);
}
@@ -579,7 +593,7 @@
LIST_HEAD(head);
int rc;
- if (cancels == NULL)
+ if (!cancels)
cancels = &head;
if (ns_connect_cancelset(ns)) {
/* Estimate the amount of available space in the request. */
@@ -593,7 +607,8 @@
/* Cancel LRU locks here _only_ if the server supports
* EARLY_CANCEL. Otherwise we have to send extra CANCEL
- * RPC, which will make us slower. */
+ * RPC, which will make us slower.
+ */
if (avail > count)
count += ldlm_cancel_lru_local(ns, cancels, to_free,
avail - count, 0, flags);
@@ -618,7 +633,8 @@
/* Skip first lock handler in ldlm_request_pack(),
* this method will increment @lock_count according
* to the lock handle amount actually written to
- * the buffer. */
+ * the buffer.
+ */
dlm->lock_count = canceloff;
}
/* Pack into the request @pack lock handles. */
@@ -665,15 +681,14 @@
int rc, err;
struct ptlrpc_request *req;
- LASSERT(exp != NULL);
-
ns = exp->exp_obd->obd_namespace;
/* If we're replaying this lock, just check some invariants.
- * If we're creating a new lock, get everything all setup nice. */
+ * If we're creating a new lock, get everything all set up nicely.
+ */
if (is_replay) {
lock = ldlm_handle2lock_long(lockh, 0);
- LASSERT(lock != NULL);
+ LASSERT(lock);
LDLM_DEBUG(lock, "client-side enqueue START");
LASSERT(exp == lock->l_conn_export);
} else {
@@ -685,13 +700,13 @@
lock = ldlm_lock_create(ns, res_id, einfo->ei_type,
einfo->ei_mode, &cbs, einfo->ei_cbdata,
lvb_len, lvb_type);
- if (lock == NULL)
+ if (!lock)
return -ENOMEM;
/* for the local lock, add the reference */
ldlm_lock_addref_internal(lock, einfo->ei_mode);
ldlm_lock2handle(lock, lockh);
- if (policy != NULL)
- lock->l_policy_data = *policy;
+ if (policy)
+ lock->l_policy_data = *policy;
if (einfo->ei_type == LDLM_EXTENT)
lock->l_req_extent = policy->l_extent;
@@ -706,12 +721,12 @@
/* lock not sent to server yet */
- if (reqp == NULL || *reqp == NULL) {
+ if (!reqp || !*reqp) {
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_LDLM_ENQUEUE,
LUSTRE_DLM_VERSION,
LDLM_ENQUEUE);
- if (req == NULL) {
+ if (!req) {
failed_lock_cleanup(ns, lock, einfo->ei_mode);
LDLM_LOCK_RELEASE(lock);
return -ENOMEM;
@@ -754,7 +769,7 @@
policy->l_extent.end == OBD_OBJECT_EOF));
if (async) {
- LASSERT(reqp != NULL);
+ LASSERT(reqp);
return 0;
}
@@ -767,13 +782,14 @@
lockh, rc);
/* If ldlm_cli_enqueue_fini did not find the lock, we need to free
- * one reference that we took */
+ * one reference that we took
+ */
if (err == -ENOLCK)
LDLM_LOCK_RELEASE(lock);
else
rc = err;
- if (!req_passed_in && req != NULL) {
+ if (!req_passed_in && req) {
ptlrpc_req_finished(req);
if (reqp)
*reqp = NULL;
@@ -832,7 +848,7 @@
int max, packed = 0;
dlm = req_capsule_client_get(&req->rq_pill, &RMF_DLM_REQ);
- LASSERT(dlm != NULL);
+ LASSERT(dlm);
/* Check the room in the request buffer. */
max = req_capsule_get_size(&req->rq_pill, &RMF_DLM_REQ, RCL_CLIENT) -
@@ -843,7 +859,8 @@
/* XXX: it would be better to pack lock handles grouped by resource.
* so that the server cancel would call filter_lvbo_update() less
- * frequently. */
+ * frequently.
+ */
list_for_each_entry(lock, head, l_bl_ast) {
if (!count--)
break;
@@ -858,17 +875,18 @@
/**
* Prepare and send a batched cancel RPC. It will include \a count lock
- * handles of locks given in \a cancels list. */
+ * handles of locks given in \a cancels list.
+ */
static int ldlm_cli_cancel_req(struct obd_export *exp,
struct list_head *cancels,
- int count, ldlm_cancel_flags_t flags)
+ int count, enum ldlm_cancel_flags flags)
{
struct ptlrpc_request *req = NULL;
struct obd_import *imp;
int free, sent = 0;
int rc = 0;
- LASSERT(exp != NULL);
+ LASSERT(exp);
LASSERT(count > 0);
CFS_FAIL_TIMEOUT(OBD_FAIL_LDLM_PAUSE_CANCEL, cfs_fail_val);
@@ -883,14 +901,14 @@
while (1) {
imp = class_exp2cliimp(exp);
- if (imp == NULL || imp->imp_invalid) {
+ if (!imp || imp->imp_invalid) {
CDEBUG(D_DLMTRACE,
"skipping cancel on invalid import %p\n", imp);
return count;
}
req = ptlrpc_request_alloc(imp, &RQF_LDLM_CANCEL);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto out;
}
@@ -946,7 +964,6 @@
static inline struct ldlm_pool *ldlm_imp2pl(struct obd_import *imp)
{
- LASSERT(imp != NULL);
return &imp->imp_obd->obd_namespace->ns_pool;
}
@@ -971,7 +988,8 @@
* is the case when server does not support LRU resize feature.
* This is also possible in some recovery cases when server-side
* reqs have no reference to the OBD export and thus access to
- * server-side namespace is not possible. */
+ * server-side namespace is not possible.
+ */
if (lustre_msg_get_slv(req->rq_repmsg) == 0 ||
lustre_msg_get_limit(req->rq_repmsg) == 0) {
DEBUG_REQ(D_HA, req,
@@ -989,7 +1007,8 @@
* to the pool thread. We do not access obd_namespace and pool
* directly here as there is no reliable way to make sure that
* they are still alive at cleanup time. Evil races are possible
- * which may cause Oops at that time. */
+ * which may cause Oops at that time.
+ */
write_lock(&obd->obd_pool_lock);
obd->obd_pool_slv = new_slv;
obd->obd_pool_limit = new_limit;
@@ -1005,7 +1024,7 @@
* Lock must not have any readers or writers by this time.
*/
int ldlm_cli_cancel(struct lustre_handle *lockh,
- ldlm_cancel_flags_t cancel_flags)
+ enum ldlm_cancel_flags cancel_flags)
{
struct obd_export *exp;
int avail, flags, count = 1;
@@ -1016,7 +1035,7 @@
/* concurrent cancels on the same handle can happen */
lock = ldlm_handle2lock_long(lockh, LDLM_FL_CANCELING);
- if (lock == NULL) {
+ if (!lock) {
LDLM_DEBUG_NOLOCK("lock is already being destroyed\n");
return 0;
}
@@ -1028,7 +1047,8 @@
}
/* Even if the lock is marked as LDLM_FL_BL_AST, this is a LDLM_CANCEL
* RPC which goes to canceld portal, so we can cancel other LRU locks
- * here and send them all as one LDLM_CANCEL RPC. */
+ * here and send them all as one LDLM_CANCEL RPC.
+ */
LASSERT(list_empty(&lock->l_bl_ast));
list_add(&lock->l_bl_ast, &cancels);
@@ -1055,7 +1075,7 @@
* Return the number of cancelled locks.
*/
int ldlm_cli_cancel_list_local(struct list_head *cancels, int count,
- ldlm_cancel_flags_t flags)
+ enum ldlm_cancel_flags flags)
{
LIST_HEAD(head);
struct ldlm_lock *lock, *next;
@@ -1076,7 +1096,8 @@
/* Until we have compound requests and can send LDLM_CANCEL
* requests batched with generic RPCs, we need to send cancels
* with the LDLM_FL_BL_AST flag in a separate RPC from
- * the one being generated now. */
+ * the one being generated now.
+ */
if (!(flags & LCF_BL_AST) && (rc == LDLM_FL_BL_AST)) {
LDLM_DEBUG(lock, "Cancel lock separately");
list_del_init(&lock->l_bl_ast);
@@ -1116,7 +1137,8 @@
lock_res_and_lock(lock);
/* don't check added & count since we want to process all locks
- * from unused list */
+ * from unused list
+ */
switch (lock->l_resource->lr_type) {
case LDLM_EXTENT:
case LDLM_IBITS:
@@ -1152,7 +1174,8 @@
unsigned long la;
/* Stop LRU processing when we reach past @count or have checked all
- * locks in LRU. */
+ * locks in LRU.
+ */
if (count && added >= count)
return LDLM_POLICY_KEEP_LOCK;
@@ -1166,7 +1189,8 @@
ldlm_pool_set_clv(pl, lv);
/* Stop when SLV is not yet come from server or lv is smaller than
- * it is. */
+ * it is.
+ */
return (slv == 0 || lv < slv) ?
LDLM_POLICY_KEEP_LOCK : LDLM_POLICY_CANCEL_LOCK;
}
@@ -1186,7 +1210,8 @@
int count)
{
/* Stop LRU processing when we reach past @count or have checked all
- * locks in LRU. */
+ * locks in LRU.
+ */
return (added >= count) ?
LDLM_POLICY_KEEP_LOCK : LDLM_POLICY_CANCEL_LOCK;
}
@@ -1227,7 +1252,8 @@
int count)
{
/* Stop LRU processing when we reach past count or have checked all
- * locks in LRU. */
+ * locks in LRU.
+ */
return (added >= count) ?
LDLM_POLICY_KEEP_LOCK : LDLM_POLICY_CANCEL_LOCK;
}
@@ -1307,7 +1333,7 @@
count += unused - ns->ns_max_unused;
pf = ldlm_cancel_lru_policy(ns, flags);
- LASSERT(pf != NULL);
+ LASSERT(pf);
while (!list_empty(&ns->ns_unused_list)) {
ldlm_policy_res_t result;
@@ -1331,7 +1357,8 @@
continue;
/* Somebody is already doing CANCEL. No need for this
- * lock in LRU, do not traverse it again. */
+ * lock in LRU, do not traverse it again.
+ */
if (!(lock->l_flags & LDLM_FL_CANCELING))
break;
@@ -1380,7 +1407,8 @@
/* Another thread is removing lock from LRU, or
* somebody is already doing CANCEL, or there
* is a blocking request which will send cancel
- * by itself, or the lock is no longer unused. */
+ * by itself, or the lock is no longer unused.
+ */
unlock_res_and_lock(lock);
lu_ref_del(&lock->l_reference,
__func__, current);
@@ -1394,7 +1422,8 @@
* better send cancel notification to server, so that it
* frees appropriate state. This might lead to a race
* where while we are doing cancel here, server is also
- * silently cancelling this lock. */
+ * silently cancelling this lock.
+ */
lock->l_flags &= ~LDLM_FL_CANCEL_ON_BLOCK;
/* Setting the CBPENDING flag is a little misleading,
@@ -1402,7 +1431,8 @@
* CBPENDING is set, the lock can accumulate no more
* readers/writers. Since readers and writers are
* already zero here, ldlm_lock_decref() won't see
- * this flag and call l_blocking_ast */
+ * this flag and call l_blocking_ast
+ */
lock->l_flags |= LDLM_FL_CBPENDING | LDLM_FL_CANCELING;
/* We can't re-add to l_lru as it confuses the
@@ -1410,7 +1440,8 @@
* arrives after we drop lr_lock below. We use l_bl_ast
* and can't use l_pending_chain as it is used both on
* server and client nevertheless bug 5666 says it is
- * used only on server */
+ * used only on server
+ */
LASSERT(list_empty(&lock->l_bl_ast));
list_add(&lock->l_bl_ast, cancels);
unlock_res_and_lock(lock);
@@ -1425,7 +1456,7 @@
int ldlm_cancel_lru_local(struct ldlm_namespace *ns,
struct list_head *cancels, int count, int max,
- ldlm_cancel_flags_t cancel_flags, int flags)
+ enum ldlm_cancel_flags cancel_flags, int flags)
{
int added;
@@ -1444,14 +1475,15 @@
* callback will be performed in this function.
*/
int ldlm_cancel_lru(struct ldlm_namespace *ns, int nr,
- ldlm_cancel_flags_t cancel_flags,
+ enum ldlm_cancel_flags cancel_flags,
int flags)
{
LIST_HEAD(cancels);
int count, rc;
/* Just prepare the list of locks, do not actually cancel them yet.
- * Locks are cancelled later in a separate thread. */
+ * Locks are cancelled later in a separate thread.
+ */
count = ldlm_prepare_lru_list(ns, &cancels, nr, 0, flags);
rc = ldlm_bl_to_thread_list(ns, NULL, &cancels, count, cancel_flags);
if (rc == 0)
@@ -1468,15 +1500,16 @@
int ldlm_cancel_resource_local(struct ldlm_resource *res,
struct list_head *cancels,
ldlm_policy_data_t *policy,
- ldlm_mode_t mode, __u64 lock_flags,
- ldlm_cancel_flags_t cancel_flags, void *opaque)
+ enum ldlm_mode mode, __u64 lock_flags,
+ enum ldlm_cancel_flags cancel_flags,
+ void *opaque)
{
struct ldlm_lock *lock;
int count = 0;
lock_res(res);
list_for_each_entry(lock, &res->lr_granted, l_res_link) {
- if (opaque != NULL && lock->l_ast_data != opaque) {
+ if (opaque && lock->l_ast_data != opaque) {
LDLM_ERROR(lock, "data %p doesn't match opaque %p",
lock->l_ast_data, opaque);
continue;
@@ -1486,7 +1519,8 @@
continue;
/* If somebody is already doing CANCEL, or blocking AST came,
- * skip this lock. */
+ * skip this lock.
+ */
if (lock->l_flags & LDLM_FL_BL_AST ||
lock->l_flags & LDLM_FL_CANCELING)
continue;
@@ -1495,7 +1529,8 @@
continue;
/* If policy is given and this is IBITS lock, add to list only
- * those locks that match by policy. */
+ * those locks that match by policy.
+ */
if (policy && (lock->l_resource->lr_type == LDLM_IBITS) &&
!(lock->l_policy_data.l_inodebits.bits &
policy->l_inodebits.bits))
@@ -1527,7 +1562,8 @@
* Destroy \a cancels at the end.
*/
int ldlm_cli_cancel_list(struct list_head *cancels, int count,
- struct ptlrpc_request *req, ldlm_cancel_flags_t flags)
+ struct ptlrpc_request *req,
+ enum ldlm_cancel_flags flags)
{
struct ldlm_lock *lock;
int res = 0;
@@ -1539,7 +1575,8 @@
* Usually it is enough to have just 1 RPC, but it is possible that
* there are too many locks to be cancelled in LRU or on a resource.
* It would also speed up the case when the server does not support
- * the feature. */
+ * the feature.
+ */
while (count > 0) {
LASSERT(!list_empty(cancels));
lock = list_entry(cancels->next, struct ldlm_lock,
@@ -1577,12 +1614,13 @@
* Cancel all locks on a resource that have 0 readers/writers.
*
* If flags & LDLM_FL_LOCAL_ONLY, throw the locks away without trying
- * to notify the server. */
+ * to notify the server.
+ */
int ldlm_cli_cancel_unused_resource(struct ldlm_namespace *ns,
const struct ldlm_res_id *res_id,
ldlm_policy_data_t *policy,
- ldlm_mode_t mode,
- ldlm_cancel_flags_t flags,
+ enum ldlm_mode mode,
+ enum ldlm_cancel_flags flags,
void *opaque)
{
struct ldlm_resource *res;
@@ -1591,7 +1629,7 @@
int rc;
res = ldlm_resource_get(ns, NULL, res_id, 0, 0);
- if (res == NULL) {
+ if (!res) {
/* This is not a problem. */
CDEBUG(D_INFO, "No resource %llu\n", res_id->name[0]);
return 0;
@@ -1638,17 +1676,17 @@
* to notify the server. */
int ldlm_cli_cancel_unused(struct ldlm_namespace *ns,
const struct ldlm_res_id *res_id,
- ldlm_cancel_flags_t flags, void *opaque)
+ enum ldlm_cancel_flags flags, void *opaque)
{
struct ldlm_cli_cancel_arg arg = {
.lc_flags = flags,
.lc_opaque = opaque,
};
- if (ns == NULL)
+ if (!ns)
return ELDLM_OK;
- if (res_id != NULL) {
+ if (res_id) {
return ldlm_cli_cancel_unused_resource(ns, res_id, NULL,
LCK_MINMODE, flags,
opaque);
@@ -1743,13 +1781,13 @@
struct ldlm_resource *res;
int rc;
- if (ns == NULL) {
+ if (!ns) {
CERROR("must pass in namespace\n");
LBUG();
}
res = ldlm_resource_get(ns, NULL, res_id, 0, 0);
- if (res == NULL)
+ if (!res)
return 0;
LDLM_RESOURCE_ADDREF(res);
@@ -1796,7 +1834,7 @@
goto out;
reply = req_capsule_server_get(&req->rq_pill, &RMF_DLM_REP);
- if (reply == NULL) {
+ if (!reply) {
rc = -EPROTO;
goto out;
}
@@ -1815,7 +1853,8 @@
exp = req->rq_export;
if (exp && exp->exp_lock_hash) {
/* In the function below, .hs_keycmp resolves to
- * ldlm_export_lock_keycmp() */
+ * ldlm_export_lock_keycmp()
+ */
/* coverity[overrun-buffer-val] */
cfs_hash_rehash_key(exp->exp_lock_hash,
&lock->l_remote_handle,
@@ -1850,7 +1889,8 @@
/* If this is reply-less callback lock, we cannot replay it, since
* server might have long dropped it, but notification of that event was
- * lost by network. (and server granted conflicting lock already) */
+ * lost by network. (and server granted conflicting lock already)
+ */
if (lock->l_flags & LDLM_FL_CANCEL_ON_BLOCK) {
LDLM_DEBUG(lock, "Not replaying reply-less lock:");
ldlm_lock_cancel(lock);
@@ -1882,7 +1922,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_LDLM_ENQUEUE,
LUSTRE_DLM_VERSION, LDLM_ENQUEUE);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
/* We're part of recovery, so don't wait for it. */
@@ -1901,7 +1941,8 @@
/* notify the server we've replayed all requests.
* also, we mark the request to be put on a dedicated
* queue to be processed after all request replays.
- * bug 6063 */
+ * bug 6063
+ */
lustre_msg_set_flags(req->rq_reqmsg, MSG_REQ_REPLAY_DONE);
LDLM_DEBUG(lock, "replaying lock:");
@@ -1936,7 +1977,8 @@
/* We don't need to care whether or not LRU resize is enabled
* because the LDLM_CANCEL_NO_WAIT policy doesn't use the
- * count parameter */
+ * count parameter
+ */
canceled = ldlm_cancel_lru_local(ns, &cancels, ns->ns_nr_unused, 0,
LCF_LOCAL, LDLM_CANCEL_NO_WAIT);
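ldlm_request.c gets the same comment and pointer-test cleanups, and the ldlm_cli_cancel_list() comments above spell out the batching idea: pack as many lock handles as fit into one cancel RPC and loop until the list drains. A generic userspace sketch of that loop shape only; the capacity and all names are invented:

#include <stdio.h>

#define DEMO_BATCH_MAX 3	/* invented per-RPC handle capacity */

/* Pretend to pack and send up to n handles in one cancel RPC. */
static int demo_send_batch(const int *handles, int n)
{
	printf("cancel RPC: %d handle(s), first %d\n", n, handles[0]);
	return n;
}

int main(void)
{
	int handles[] = { 1, 2, 3, 4, 5, 6, 7 };
	int count = sizeof(handles) / sizeof(handles[0]);
	int off = 0;

	/* Same shape as the ldlm_cli_cancel_list() loop: usually one
	 * RPC is enough, but keep sending while handles remain.
	 */
	while (count > 0) {
		int n = count < DEMO_BATCH_MAX ? count : DEMO_BATCH_MAX;
		int sent = demo_send_batch(handles + off, n);

		off += sent;
		count -= sent;
	}
	return 0;
}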
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
index 0ae6100..03b9726 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
@@ -56,7 +56,8 @@
struct mutex ldlm_cli_namespace_lock;
/* Client Namespaces that have active resources in them.
* Once all resources go away, ldlm_poold moves such namespaces to the
- * inactive list */
+ * inactive list
+ */
LIST_HEAD(ldlm_cli_active_namespace_list);
/* Client namespaces that don't have any locks in them */
static LIST_HEAD(ldlm_cli_inactive_namespace_list);
@@ -66,7 +67,8 @@
struct dentry *ldlm_svc_debugfs_dir;
/* during debug dump certain amount of granted locks for one resource to avoid
- * DDOS. */
+ * DDOS.
+ */
static unsigned int ldlm_dump_granted_max = 256;
static ssize_t
@@ -275,7 +277,8 @@
ldlm_cancel_lru(ns, 0, LCF_ASYNC, LDLM_CANCEL_PASSED);
/* Make sure that LRU resize was originally supported before
- * turning it on here. */
+ * turning it on here.
+ */
if (lru_resize &&
(ns->ns_orig_connect_flags & OBD_CONNECT_LRU_RESIZE)) {
CDEBUG(D_DLMTRACE,
@@ -380,7 +383,7 @@
else
ldebugfs_remove(&ns->ns_debugfs_entry);
- if (ns->ns_stats != NULL)
+ if (ns->ns_stats)
lprocfs_free_stats(&ns->ns_stats);
}
@@ -400,7 +403,7 @@
"%s", ldlm_ns_name(ns));
ns->ns_stats = lprocfs_alloc_stats(LDLM_NSS_LAST, 0);
- if (ns->ns_stats == NULL) {
+ if (!ns->ns_stats) {
kobject_put(&ns->ns_kobj);
return -ENOMEM;
}
@@ -420,7 +423,7 @@
} else {
ns_entry = debugfs_create_dir(ldlm_ns_name(ns),
ldlm_ns_debugfs_dir);
- if (ns_entry == NULL)
+ if (!ns_entry)
return -ENOMEM;
ns->ns_debugfs_entry = ns_entry;
}
@@ -554,7 +557,7 @@
};
struct ldlm_ns_hash_def {
- ldlm_ns_type_t nsd_type;
+ enum ldlm_ns_type nsd_type;
/** hash bucket bits */
unsigned nsd_bkt_bits;
/** hash bits */
@@ -621,8 +624,8 @@
*/
struct ldlm_namespace *ldlm_namespace_new(struct obd_device *obd, char *name,
ldlm_side_t client,
- ldlm_appetite_t apt,
- ldlm_ns_type_t ns_type)
+ enum ldlm_appetite apt,
+ enum ldlm_ns_type ns_type)
{
struct ldlm_namespace *ns = NULL;
struct ldlm_ns_bucket *nsb;
@@ -631,7 +634,7 @@
int idx;
int rc;
- LASSERT(obd != NULL);
+ LASSERT(obd);
rc = ldlm_get_ref();
if (rc) {
@@ -664,7 +667,7 @@
CFS_HASH_BIGNAME |
CFS_HASH_SPIN_BKTLOCK |
CFS_HASH_NO_ITEMREF);
- if (ns->ns_rs_hash == NULL)
+ if (!ns->ns_rs_hash)
goto out_ns;
cfs_hash_for_each_bucket(ns->ns_rs_hash, &bd, idx) {
@@ -749,7 +752,8 @@
struct lustre_handle lockh;
/* First, we look for non-cleaned-yet lock
- * all cleaned locks are marked by CLEANED flag. */
+ * all cleaned locks are marked by CLEANED flag.
+ */
lock_res(res);
list_for_each(tmp, q) {
lock = list_entry(tmp, struct ldlm_lock,
@@ -763,13 +767,14 @@
break;
}
- if (lock == NULL) {
+ if (!lock) {
unlock_res(res);
break;
}
/* Set CBPENDING so nothing in the cancellation path
- * can match this lock. */
+ * can match this lock.
+ */
lock->l_flags |= LDLM_FL_CBPENDING;
lock->l_flags |= LDLM_FL_FAILED;
lock->l_flags |= flags;
@@ -782,7 +787,8 @@
/* This is a little bit gross, but much better than the
* alternative: pretend that we got a blocking AST from
* the server, so that when the lock is decref'd, it
- * will go away ... */
+ * will go away ...
+ */
unlock_res(res);
LDLM_DEBUG(lock, "setting FL_LOCAL_ONLY");
if (lock->l_completion_ast)
@@ -837,7 +843,7 @@
*/
int ldlm_namespace_cleanup(struct ldlm_namespace *ns, __u64 flags)
{
- if (ns == NULL) {
+ if (!ns) {
CDEBUG(D_INFO, "NULL ns, skipping cleanup\n");
return ELDLM_OK;
}
@@ -873,7 +879,8 @@
atomic_read(&ns->ns_bref) == 0, &lwi);
/* Forced cleanups should be able to reclaim all references,
- * so it's safe to wait forever... we can't leak locks... */
+ * so it's safe to wait forever... we can't leak locks...
+ */
if (force && rc == -ETIMEDOUT) {
LCONSOLE_ERROR("Forced cleanup waiting for %s namespace with %d resources in use, (rc=%d)\n",
ldlm_ns_name(ns),
@@ -943,7 +950,8 @@
LASSERT(!list_empty(&ns->ns_list_chain));
/* Some asserts and possibly other parts of the code are still
* using list_empty(&ns->ns_list_chain). This is why it is
- * important to use list_del_init() here. */
+ * important to use list_del_init() here.
+ */
list_del_init(&ns->ns_list_chain);
ldlm_namespace_nr_dec(client);
mutex_unlock(ldlm_namespace_lock(client));
@@ -963,7 +971,8 @@
ldlm_namespace_unregister(ns, ns->ns_client);
/* Fini pool _before_ parent proc dir is removed. This is important as
* ldlm_pool_fini() removes own proc dir which is child to @dir.
- * Removing it after @dir may cause oops. */
+ * Removing it after @dir may cause oops.
+ */
ldlm_pool_fini(&ns->ns_pool);
ldlm_namespace_debugfs_unregister(ns);
@@ -971,7 +980,8 @@
cfs_hash_putref(ns->ns_rs_hash);
/* Namespace \a ns should not be on the list at this time, otherwise
* this will cause issues related to using freed \a ns in poold
- * thread. */
+ * thread.
+ */
LASSERT(list_empty(&ns->ns_list_chain));
kfree(ns);
ldlm_put_ref();
@@ -1032,7 +1042,7 @@
int idx;
res = kmem_cache_alloc(ldlm_resource_slab, GFP_NOFS | __GFP_ZERO);
- if (res == NULL)
+ if (!res)
return NULL;
INIT_LIST_HEAD(&res->lr_granted);
@@ -1050,7 +1060,8 @@
lu_ref_init(&res->lr_reference);
/* The creator of the resource must unlock the mutex after LVB
- * initialization. */
+ * initialization.
+ */
mutex_init(&res->lr_lvb_mutex);
mutex_lock(&res->lr_lvb_mutex);
@@ -1065,7 +1076,8 @@
*/
struct ldlm_resource *
ldlm_resource_get(struct ldlm_namespace *ns, struct ldlm_resource *parent,
- const struct ldlm_res_id *name, ldlm_type_t type, int create)
+ const struct ldlm_res_id *name, enum ldlm_type type,
+ int create)
{
struct hlist_node *hnode;
struct ldlm_resource *res;
@@ -1073,14 +1085,13 @@
__u64 version;
int ns_refcount = 0;
- LASSERT(ns != NULL);
- LASSERT(parent == NULL);
- LASSERT(ns->ns_rs_hash != NULL);
+ LASSERT(!parent);
+ LASSERT(ns->ns_rs_hash);
LASSERT(name->name[0] != 0);
cfs_hash_bd_get_and_lock(ns->ns_rs_hash, (void *)name, &bd, 0);
hnode = cfs_hash_bd_lookup_locked(ns->ns_rs_hash, &bd, (void *)name);
- if (hnode != NULL) {
+ if (hnode) {
cfs_hash_bd_unlock(ns->ns_rs_hash, &bd, 0);
res = hlist_entry(hnode, struct ldlm_resource, lr_hash);
/* Synchronize with regard to resource creation. */
@@ -1111,13 +1122,12 @@
res->lr_ns_bucket = cfs_hash_bd_extra_get(ns->ns_rs_hash, &bd);
res->lr_name = *name;
res->lr_type = type;
- res->lr_most_restr = LCK_NL;
cfs_hash_bd_lock(ns->ns_rs_hash, &bd, 1);
hnode = (version == cfs_hash_bd_version_get(&bd)) ? NULL :
cfs_hash_bd_lookup_locked(ns->ns_rs_hash, &bd, (void *)name);
- if (hnode != NULL) {
+ if (hnode) {
/* Someone won the race and already added the resource. */
cfs_hash_bd_unlock(ns->ns_rs_hash, &bd, 1);
/* Clean lu_ref for failed resource. */
@@ -1167,7 +1177,8 @@
/* Let's see if we happened to be the very first resource in this
* namespace. If so, and this is a client namespace, we need to move
* the namespace into the active namespaces list to be patrolled by
- * the ldlm_poold. */
+ * the ldlm_poold.
+ */
if (ns_refcount == 1) {
mutex_lock(ldlm_namespace_lock(LDLM_NAMESPACE_CLIENT));
ldlm_namespace_move_to_active_locked(ns, LDLM_NAMESPACE_CLIENT);
diff --git a/drivers/staging/lustre/lustre/libcfs/debug.c b/drivers/staging/lustre/lustre/libcfs/debug.c
index 6274558..c90e510 100644
--- a/drivers/staging/lustre/lustre/libcfs/debug.c
+++ b/drivers/staging/lustre/lustre/libcfs/debug.c
@@ -47,15 +47,15 @@
static char debug_file_name[1024];
unsigned int libcfs_subsystem_debug = ~0;
+EXPORT_SYMBOL(libcfs_subsystem_debug);
module_param(libcfs_subsystem_debug, int, 0644);
MODULE_PARM_DESC(libcfs_subsystem_debug, "Lustre kernel debug subsystem mask");
-EXPORT_SYMBOL(libcfs_subsystem_debug);
unsigned int libcfs_debug = (D_CANTMASK |
D_NETERROR | D_HA | D_CONFIG | D_IOCTL);
+EXPORT_SYMBOL(libcfs_debug);
module_param(libcfs_debug, int, 0644);
MODULE_PARM_DESC(libcfs_debug, "Lustre kernel debug mask");
-EXPORT_SYMBOL(libcfs_debug);
static int libcfs_param_debug_mb_set(const char *val,
const struct kernel_param *kp)
@@ -82,7 +82,8 @@
/* While the debug_mb setting looks like an unsigned int, in fact
* it needs quite a bit of extra processing, so we define a special
- * debugmb parameter type with corresponding methods to handle this case */
+ * debugmb parameter type with corresponding methods to handle this case
+ */
static struct kernel_param_ops param_ops_debugmb = {
.set = libcfs_param_debug_mb_set,
.get = param_get_uint,
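For reference, a custom parameter type like this is hooked up through
module_param_cb(); below is a minimal sketch of the same pattern for a
hypothetical power-of-two parameter (none of these names are in the patch):

/* Hypothetical sketch: a module parameter that only accepts powers
 * of two, wired up the same way as the debugmb type above.
 */
#include <linux/moduleparam.h>
#include <linux/log2.h>
#include <linux/kernel.h>

static unsigned int ring_size = 64;

static int param_pow2_set(const char *val, const struct kernel_param *kp)
{
        unsigned int v;
        int rc = kstrtouint(val, 0, &v);

        if (rc)
                return rc;
        if (!is_power_of_2(v))
                return -EINVAL;
        *(unsigned int *)kp->arg = v;
        return 0;
}

static const struct kernel_param_ops param_ops_pow2 = {
        .set = param_pow2_set,
        .get = param_get_uint,
};

module_param_cb(ring_size, &param_ops_pow2, &ring_size, 0644);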
@@ -227,8 +228,7 @@
int libcfs_panic_in_progress;
-/* libcfs_debug_token2mask() expects the returned
- * string in lower-case */
+/* libcfs_debug_token2mask() expects the returned string in lower-case */
static const char *
libcfs_debug_subsys2str(int subsys)
{
@@ -290,8 +290,7 @@
}
}
-/* libcfs_debug_token2mask() expects the returned
- * string in lower-case */
+/* libcfs_debug_token2mask() expects the returned string in lower-case */
static const char *
libcfs_debug_dbg2str(int debug)
{
@@ -378,7 +377,7 @@
continue;
token = fn(i);
- if (token == NULL) /* unused bit */
+ if (!token) /* unused bit */
continue;
if (len > 0) { /* separator? */
@@ -418,7 +417,7 @@
/* Allow a number for backwards compatibility */
for (n = strlen(str); n > 0; n--)
- if (!isspace(str[n-1]))
+ if (!isspace(str[n - 1]))
break;
matched = n;
t = sscanf(str, "%i%n", &m, &matched);
@@ -448,8 +447,7 @@
snprintf(debug_file_name, sizeof(debug_file_name) - 1,
"%s.%lld.%ld", libcfs_debug_file_path_arr,
(s64)ktime_get_real_seconds(), (long_ptr_t)arg);
- pr_alert("LustreError: dumping log to %s\n",
- debug_file_name);
+ pr_alert("LustreError: dumping log to %s\n", debug_file_name);
cfs_tracefile_dump_all_pages(debug_file_name);
libcfs_run_debug_log_upcall(debug_file_name);
}
@@ -471,7 +469,8 @@
/* we're being careful to ensure that the kernel thread is
* able to set our state to running as it exits before we
- * get to schedule() */
+ * get to schedule()
+ */
init_waitqueue_entry(&wait, current);
set_current_state(TASK_INTERRUPTIBLE);
add_wait_queue(&debug_ctlwq, &wait);
@@ -505,14 +504,15 @@
libcfs_console_min_delay = CDEBUG_DEFAULT_MIN_DELAY;
}
- if (libcfs_debug_file_path != NULL) {
+ if (libcfs_debug_file_path) {
strlcpy(libcfs_debug_file_path_arr,
libcfs_debug_file_path,
sizeof(libcfs_debug_file_path_arr));
}
/* If libcfs_debug_mb is set to an invalid value or uninitialized
- * then just make the total buffers smp_num_cpus * TCD_MAX_PAGES */
+ * then just make the total buffers smp_num_cpus * TCD_MAX_PAGES
+ */
if (max > cfs_trace_max_debug_mb() || max < num_possible_cpus()) {
max = TCD_MAX_PAGES;
} else {
@@ -542,8 +542,7 @@
return 0;
}
-/* Debug markers, although printed by S_LNET
- * should not be be marked as such. */
+/* Debug markers, although printed by S_LNET, should not be marked as such. */
#undef DEBUG_SUBSYSTEM
#define DEBUG_SUBSYSTEM S_UNDEFINED
int libcfs_debug_mark_buffer(const char *text)
diff --git a/drivers/staging/lustre/lustre/libcfs/fail.c b/drivers/staging/lustre/lustre/libcfs/fail.c
index 2783143..dadaf76 100644
--- a/drivers/staging/lustre/lustre/libcfs/fail.c
+++ b/drivers/staging/lustre/lustre/libcfs/fail.c
@@ -97,7 +97,8 @@
/* Lost race to set CFS_FAILED_BIT. */
if (test_and_set_bit(CFS_FAILED_BIT, &cfs_fail_loc)) {
/* If CFS_FAIL_ONCE is valid, only one process can fail,
- * otherwise multi-process can fail at the same time. */
+	 * otherwise multiple processes can fail at the same time.
+ */
if (cfs_fail_loc & CFS_FAIL_ONCE)
return 0;
}
diff --git a/drivers/staging/lustre/lustre/libcfs/hash.c b/drivers/staging/lustre/lustre/libcfs/hash.c
index 4d50510..f60feb3 100644
--- a/drivers/staging/lustre/lustre/libcfs/hash.c
+++ b/drivers/staging/lustre/lustre/libcfs/hash.c
@@ -355,7 +355,7 @@
dh = container_of(cfs_hash_dh_hhead(hs, bd),
struct cfs_hash_dhead, dh_head);
- if (dh->dh_tail != NULL) /* not empty */
+ if (dh->dh_tail) /* not empty */
hlist_add_behind(hnode, dh->dh_tail);
else /* empty list */
hlist_add_head(hnode, &dh->dh_head);
@@ -371,7 +371,7 @@
dh = container_of(cfs_hash_dh_hhead(hs, bd),
struct cfs_hash_dhead, dh_head);
- if (hnd->next == NULL) { /* it's the tail */
+ if (!hnd->next) { /* it's the tail */
dh->dh_tail = (hnd->pprev == &dh->dh_head.first) ? NULL :
container_of(hnd->pprev, struct hlist_node, next);
}
@@ -412,7 +412,7 @@
dh = container_of(cfs_hash_dd_hhead(hs, bd),
struct cfs_hash_dhead_dep, dd_head);
- if (dh->dd_tail != NULL) /* not empty */
+ if (dh->dd_tail) /* not empty */
hlist_add_behind(hnode, dh->dd_tail);
else /* empty list */
hlist_add_head(hnode, &dh->dd_head);
@@ -428,7 +428,7 @@
dh = container_of(cfs_hash_dd_hhead(hs, bd),
struct cfs_hash_dhead_dep, dd_head);
- if (hnd->next == NULL) { /* it's the tail */
+ if (!hnd->next) { /* it's the tail */
dh->dd_tail = (hnd->pprev == &dh->dd_head.first) ? NULL :
container_of(hnd->pprev, struct hlist_node, next);
}
@@ -492,7 +492,7 @@
cfs_hash_bd_get(struct cfs_hash *hs, const void *key, struct cfs_hash_bd *bd)
{
/* NB: caller should hold hs->hs_rwlock if REHASH is set */
- if (likely(hs->hs_rehash_buckets == NULL)) {
+ if (likely(!hs->hs_rehash_buckets)) {
cfs_hash_bd_from_key(hs, hs->hs_buckets,
hs->hs_cur_bits, key, bd);
} else {
@@ -579,7 +579,8 @@
return;
/* use cfs_hash_bd_hnode_add/del, to avoid atomic & refcount ops
- * in cfs_hash_bd_del/add_locked */
+ * in cfs_hash_bd_del/add_locked
+ */
hs->hs_hops->hop_hnode_del(hs, bd_old, hnode);
rc = hs->hs_hops->hop_hnode_add(hs, bd_new, hnode);
cfs_hash_bd_dep_record(hs, bd_new, rc);
@@ -635,13 +636,14 @@
int intent_add = (intent & CFS_HS_LOOKUP_MASK_ADD) != 0;
/* with this function, we can avoid a lot of useless refcount ops,
- * which are expensive atomic operations most time. */
+	 * which are expensive atomic operations most of the time.
+ */
match = intent_add ? NULL : hnode;
hlist_for_each(ehnode, hhead) {
if (!cfs_hash_keycmp(hs, key, ehnode))
continue;
- if (match != NULL && match != ehnode) /* can't match */
+ if (match && match != ehnode) /* can't match */
continue;
/* match and ... */
@@ -659,7 +661,7 @@
if (!intent_add)
return NULL;
- LASSERT(hnode != NULL);
+ LASSERT(hnode);
cfs_hash_bd_add_locked(hs, bd, hnode);
return hnode;
}
@@ -698,8 +700,7 @@
if (prev == bds[i].bd_bucket)
continue;
- LASSERT(prev == NULL ||
- prev->hsb_index < bds[i].bd_bucket->hsb_index);
+ LASSERT(!prev || prev->hsb_index < bds[i].bd_bucket->hsb_index);
cfs_hash_bd_lock(hs, &bds[i], excl);
prev = bds[i].bd_bucket;
}
@@ -730,7 +731,7 @@
cfs_hash_for_each_bd(bds, n, i) {
ehnode = cfs_hash_bd_lookup_intent(hs, &bds[i], key, NULL,
CFS_HS_LOOKUP_IT_FIND);
- if (ehnode != NULL)
+ if (ehnode)
return ehnode;
}
return NULL;
@@ -745,13 +746,13 @@
int intent;
unsigned i;
- LASSERT(hnode != NULL);
+ LASSERT(hnode);
intent = (!noref * CFS_HS_LOOKUP_MASK_REF) | CFS_HS_LOOKUP_IT_PEEK;
cfs_hash_for_each_bd(bds, n, i) {
ehnode = cfs_hash_bd_lookup_intent(hs, &bds[i], key,
NULL, intent);
- if (ehnode != NULL)
+ if (ehnode)
return ehnode;
}
@@ -778,7 +779,7 @@
cfs_hash_for_each_bd(bds, n, i) {
ehnode = cfs_hash_bd_lookup_intent(hs, &bds[i], key, hnode,
CFS_HS_LOOKUP_IT_FINDDEL);
- if (ehnode != NULL)
+ if (ehnode)
return ehnode;
}
return NULL;
@@ -789,26 +790,20 @@
{
int rc;
- if (bd2->bd_bucket == NULL)
+ if (!bd2->bd_bucket)
return;
- if (bd1->bd_bucket == NULL) {
+ if (!bd1->bd_bucket) {
*bd1 = *bd2;
bd2->bd_bucket = NULL;
return;
}
rc = cfs_hash_bd_compare(bd1, bd2);
- if (rc == 0) {
+ if (!rc)
bd2->bd_bucket = NULL;
-
- } else if (rc > 0) { /* swab bd1 and bd2 */
- struct cfs_hash_bd tmp;
-
- tmp = *bd2;
- *bd2 = *bd1;
- *bd1 = tmp;
- }
+ else if (rc > 0)
+ swap(*bd1, *bd2); /* swap bd1 and bd2 */
}
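The swap() helper used here comes from <linux/kernel.h>; from memory it is a
type-generic macro along these lines (treat as a sketch), which is why it
works on the struct cfs_hash_bd values above without an explicit temporary:

#define swap(a, b) \
        do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)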
void
@@ -818,7 +813,7 @@
/* NB: caller should hold hs_lock.rw if REHASH is set */
cfs_hash_bd_from_key(hs, hs->hs_buckets,
hs->hs_cur_bits, key, &bds[0]);
- if (likely(hs->hs_rehash_buckets == NULL)) {
+ if (likely(!hs->hs_rehash_buckets)) {
/* no rehash or not rehashing */
bds[1].bd_bucket = NULL;
return;
@@ -873,7 +868,7 @@
int i;
for (i = prev_size; i < size; i++) {
- if (buckets[i] != NULL)
+ if (buckets[i])
LIBCFS_FREE(buckets[i], bkt_size);
}
@@ -892,16 +887,16 @@
struct cfs_hash_bucket **new_bkts;
int i;
- LASSERT(old_size == 0 || old_bkts != NULL);
+ LASSERT(old_size == 0 || old_bkts);
- if (old_bkts != NULL && old_size == new_size)
+ if (old_bkts && old_size == new_size)
return old_bkts;
LIBCFS_ALLOC(new_bkts, sizeof(new_bkts[0]) * new_size);
- if (new_bkts == NULL)
+ if (!new_bkts)
return NULL;
- if (old_bkts != NULL) {
+ if (old_bkts) {
memcpy(new_bkts, old_bkts,
min(old_size, new_size) * sizeof(*old_bkts));
}
@@ -911,7 +906,7 @@
struct cfs_hash_bd bd;
LIBCFS_ALLOC(new_bkts[i], cfs_hash_bkt_size(hs));
- if (new_bkts[i] == NULL) {
+ if (!new_bkts[i]) {
cfs_hash_buckets_free(new_bkts, cfs_hash_bkt_size(hs),
old_size, new_size);
return NULL;
@@ -1011,14 +1006,13 @@
CLASSERT(CFS_HASH_THETA_BITS < 15);
- LASSERT(name != NULL);
- LASSERT(ops != NULL);
+ LASSERT(name);
LASSERT(ops->hs_key);
LASSERT(ops->hs_hash);
LASSERT(ops->hs_object);
LASSERT(ops->hs_keycmp);
- LASSERT(ops->hs_get != NULL);
- LASSERT(ops->hs_put_locked != NULL);
+ LASSERT(ops->hs_get);
+ LASSERT(ops->hs_put_locked);
if ((flags & CFS_HASH_REHASH) != 0)
flags |= CFS_HASH_COUNTER; /* must have counter */
@@ -1029,13 +1023,12 @@
LASSERT(ergo((flags & CFS_HASH_REHASH) == 0, cur_bits == max_bits));
LASSERT(ergo((flags & CFS_HASH_REHASH) != 0,
(flags & CFS_HASH_NO_LOCK) == 0));
- LASSERT(ergo((flags & CFS_HASH_REHASH_KEY) != 0,
- ops->hs_keycpy != NULL));
+ LASSERT(ergo((flags & CFS_HASH_REHASH_KEY) != 0, ops->hs_keycpy));
len = (flags & CFS_HASH_BIGNAME) == 0 ?
CFS_HASH_NAME_LEN : CFS_HASH_BIGNAME_LEN;
LIBCFS_ALLOC(hs, offsetof(struct cfs_hash, hs_name[len]));
- if (hs == NULL)
+ if (!hs)
return NULL;
strlcpy(hs->hs_name, name, len);
@@ -1063,7 +1056,7 @@
hs->hs_buckets = cfs_hash_buckets_realloc(hs, NULL, 0,
CFS_HASH_NBKT(hs));
- if (hs->hs_buckets != NULL)
+ if (hs->hs_buckets)
return hs;
LIBCFS_FREE(hs, offsetof(struct cfs_hash, hs_name[len]));
@@ -1082,7 +1075,7 @@
struct cfs_hash_bd bd;
int i;
- LASSERT(hs != NULL);
+ LASSERT(hs);
LASSERT(!cfs_hash_is_exiting(hs) &&
!cfs_hash_is_iterating(hs));
@@ -1096,13 +1089,12 @@
cfs_hash_depth_wi_cancel(hs);
/* rehash should be done/canceled */
- LASSERT(hs->hs_buckets != NULL &&
- hs->hs_rehash_buckets == NULL);
+ LASSERT(hs->hs_buckets && !hs->hs_rehash_buckets);
cfs_hash_for_each_bucket(hs, &bd, i) {
struct hlist_head *hhead;
- LASSERT(bd.bd_bucket != NULL);
+ LASSERT(bd.bd_bucket);
/* no need to take this lock, just for consistent code */
cfs_hash_bd_lock(hs, &bd, 1);
@@ -1113,7 +1105,8 @@
hs->hs_name, bd.bd_bucket->hsb_index,
bd.bd_offset, bd.bd_bucket->hsb_count);
/* can't assert key validity, because we
- * can interrupt rehash */
+ * can interrupt rehash
+ */
cfs_hash_bd_del_locked(hs, &bd, hnode);
cfs_hash_exit(hs, hnode);
}
@@ -1164,7 +1157,8 @@
return -EAGAIN;
/* XXX: need to handle case with max_theta != 2.0
- * and the case with min_theta != 0.5 */
+ * and the case with min_theta != 0.5
+ */
if ((hs->hs_cur_bits < hs->hs_max_bits) &&
(__cfs_hash_theta(hs) > hs->hs_max_theta))
return hs->hs_cur_bits + 1;
@@ -1293,8 +1287,8 @@
cfs_hash_dual_bd_get_and_lock(hs, key, bds, 1);
/* NB: do nothing if @hnode is not in hash table */
- if (hnode == NULL || !hlist_unhashed(hnode)) {
- if (bds[1].bd_bucket == NULL && hnode != NULL) {
+ if (!hnode || !hlist_unhashed(hnode)) {
+ if (!bds[1].bd_bucket && hnode) {
cfs_hash_bd_del_locked(hs, &bds[0], hnode);
} else {
hnode = cfs_hash_dual_bd_finddel_locked(hs, bds,
@@ -1302,7 +1296,7 @@
}
}
- if (hnode != NULL) {
+ if (hnode) {
obj = cfs_hash_object(hs, hnode);
bits = cfs_hash_rehash_bits(hs);
}
@@ -1348,7 +1342,7 @@
cfs_hash_dual_bd_get_and_lock(hs, key, bds, 0);
hnode = cfs_hash_dual_bd_lookup_locked(hs, bds, key);
- if (hnode != NULL)
+ if (hnode)
obj = cfs_hash_object(hs, hnode);
cfs_hash_dual_bd_unlock(hs, bds, 0);
@@ -1378,7 +1372,8 @@
/* NB: iteration is mostly called by service thread,
* we tend to cancel pending rehash-request, instead of
* blocking service thread, we will relaunch rehash request
- * after iteration */
+ * after iteration
+ */
if (cfs_hash_is_rehashing(hs))
cfs_hash_rehash_cancel_locked(hs);
cfs_hash_unlock(hs, 1);
@@ -1436,7 +1431,7 @@
struct hlist_head *hhead;
cfs_hash_bd_lock(hs, &bd, excl);
- if (func == NULL) { /* only glimpse size */
+ if (!func) { /* only glimpse size */
count += bd.bd_bucket->hsb_count;
cfs_hash_bd_unlock(hs, &bd, excl);
continue;
@@ -1574,7 +1569,7 @@
stop_on_change = cfs_hash_with_rehash_key(hs) ||
!cfs_hash_with_no_itemref(hs) ||
- hs->hs_ops->hs_put_locked == NULL;
+ !hs->hs_ops->hs_put_locked;
cfs_hash_lock(hs, 0);
LASSERT(!cfs_hash_is_rehashing(hs));
@@ -1585,7 +1580,7 @@
version = cfs_hash_bd_version_get(&bd);
cfs_hash_bd_for_each_hlist(hs, &bd, hhead) {
- for (hnode = hhead->first; hnode != NULL;) {
+ for (hnode = hhead->first; hnode;) {
cfs_hash_bucket_validate(hs, &bd, hnode);
cfs_hash_get(hs, hnode);
cfs_hash_bd_unlock(hs, &bd, 0);
@@ -1634,9 +1629,8 @@
!cfs_hash_with_no_itemref(hs))
return -EOPNOTSUPP;
- if (hs->hs_ops->hs_get == NULL ||
- (hs->hs_ops->hs_put == NULL &&
- hs->hs_ops->hs_put_locked == NULL))
+ if (!hs->hs_ops->hs_get ||
+ (!hs->hs_ops->hs_put && !hs->hs_ops->hs_put_locked))
return -EOPNOTSUPP;
cfs_hash_for_each_enter(hs);
@@ -1667,9 +1661,8 @@
if (cfs_hash_with_no_lock(hs))
return -EOPNOTSUPP;
- if (hs->hs_ops->hs_get == NULL ||
- (hs->hs_ops->hs_put == NULL &&
- hs->hs_ops->hs_put_locked == NULL))
+ if (!hs->hs_ops->hs_get ||
+ (!hs->hs_ops->hs_put && !hs->hs_ops->hs_put_locked))
return -EOPNOTSUPP;
cfs_hash_for_each_enter(hs);
@@ -1708,7 +1701,6 @@
cfs_hash_unlock(hs, 0);
cfs_hash_for_each_exit(hs);
}
-
EXPORT_SYMBOL(cfs_hash_hlist_for_each);
/*
@@ -1837,7 +1829,7 @@
cfs_hash_bd_for_each_hlist(hs, old, hhead) {
hlist_for_each_safe(hnode, pos, hhead) {
key = cfs_hash_key(hs, hnode);
- LASSERT(key != NULL);
+ LASSERT(key);
/* Validate hnode is in the correct bucket. */
cfs_hash_bucket_validate(hs, old, hnode);
/*
@@ -1867,7 +1859,7 @@
int rc = 0;
int i;
- LASSERT(hs != NULL && cfs_hash_with_rehash(hs));
+ LASSERT(hs && cfs_hash_with_rehash(hs));
cfs_hash_lock(hs, 0);
LASSERT(cfs_hash_is_rehashing(hs));
@@ -1884,7 +1876,7 @@
bkts = cfs_hash_buckets_realloc(hs, hs->hs_buckets,
old_size, new_size);
cfs_hash_lock(hs, 1);
- if (bkts == NULL) {
+ if (!bkts) {
rc = -ENOMEM;
goto out;
}
@@ -1903,7 +1895,7 @@
goto out;
}
- LASSERT(hs->hs_rehash_buckets == NULL);
+ LASSERT(!hs->hs_rehash_buckets);
hs->hs_rehash_buckets = bkts;
rc = 0;
@@ -1946,7 +1938,7 @@
bsize = cfs_hash_bkt_size(hs);
cfs_hash_unlock(hs, 1);
/* can't refer to @hs anymore because it could be destroyed */
- if (bkts != NULL)
+ if (bkts)
cfs_hash_buckets_free(bkts, bsize, new_size, old_size);
if (rc != 0)
CDEBUG(D_INFO, "early quit of rehashing: %d\n", rc);
@@ -1987,14 +1979,15 @@
cfs_hash_bd_order(&bds[0], &bds[1]);
cfs_hash_multi_bd_lock(hs, bds, 3, 1);
- if (likely(old_bds[1].bd_bucket == NULL)) {
+ if (likely(!old_bds[1].bd_bucket)) {
cfs_hash_bd_move_locked(hs, &old_bds[0], &new_bd, hnode);
} else {
cfs_hash_dual_bd_finddel_locked(hs, old_bds, old_key, hnode);
cfs_hash_bd_add_locked(hs, &new_bd, hnode);
}
/* overwrite key inside locks, otherwise may screw up with
- * other operations, i.e: rehash */
+	 * other operations, e.g. rehash
+ */
cfs_hash_keycpy(hs, hnode, new_key);
cfs_hash_multi_bd_unlock(hs, bds, 3, 1);
@@ -2013,7 +2006,7 @@
cfs_hash_full_bkts(struct cfs_hash *hs)
{
/* NB: caller should hold hs->hs_rwlock if REHASH is set */
- if (hs->hs_rehash_buckets == NULL)
+ if (!hs->hs_rehash_buckets)
return hs->hs_buckets;
LASSERT(hs->hs_rehash_bits != 0);
@@ -2025,7 +2018,7 @@
cfs_hash_full_nbkt(struct cfs_hash *hs)
{
/* NB: caller should hold hs->hs_rwlock if REHASH is set */
- if (hs->hs_rehash_buckets == NULL)
+ if (!hs->hs_rehash_buckets)
return CFS_HASH_NBKT(hs);
LASSERT(hs->hs_rehash_bits != 0);
@@ -2046,15 +2039,15 @@
theta = __cfs_hash_theta(hs);
seq_printf(m, "%-*s %5d %5d %5d %d.%03d %d.%03d %d.%03d 0x%02x %6d ",
- CFS_HASH_BIGNAME_LEN, hs->hs_name,
- 1 << hs->hs_cur_bits, 1 << hs->hs_min_bits,
- 1 << hs->hs_max_bits,
- __cfs_hash_theta_int(theta), __cfs_hash_theta_frac(theta),
- __cfs_hash_theta_int(hs->hs_min_theta),
- __cfs_hash_theta_frac(hs->hs_min_theta),
- __cfs_hash_theta_int(hs->hs_max_theta),
- __cfs_hash_theta_frac(hs->hs_max_theta),
- hs->hs_flags, hs->hs_rehash_count);
+ CFS_HASH_BIGNAME_LEN, hs->hs_name,
+ 1 << hs->hs_cur_bits, 1 << hs->hs_min_bits,
+ 1 << hs->hs_max_bits,
+ __cfs_hash_theta_int(theta), __cfs_hash_theta_frac(theta),
+ __cfs_hash_theta_int(hs->hs_min_theta),
+ __cfs_hash_theta_frac(hs->hs_min_theta),
+ __cfs_hash_theta_int(hs->hs_max_theta),
+ __cfs_hash_theta_frac(hs->hs_max_theta),
+ hs->hs_flags, hs->hs_rehash_count);
/*
* The distribution is a summary of the chained hash depth in
diff --git a/drivers/staging/lustre/lustre/libcfs/libcfs_cpu.c b/drivers/staging/lustre/lustre/libcfs/libcfs_cpu.c
index ba97c79..33352af 100644
--- a/drivers/staging/lustre/lustre/libcfs/libcfs_cpu.c
+++ b/drivers/staging/lustre/lustre/libcfs/libcfs_cpu.c
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/*
@@ -56,7 +51,7 @@
}
LIBCFS_ALLOC(cptab, sizeof(*cptab));
- if (cptab != NULL) {
+ if (cptab) {
cptab->ctb_version = CFS_CPU_VERSION_MAGIC;
node_set(0, cptab->ctb_nodemask);
cptab->ctb_nparts = ncpt;
@@ -215,7 +210,7 @@
void
cfs_cpu_fini(void)
{
- if (cfs_cpt_table != NULL) {
+ if (cfs_cpt_table) {
cfs_cpt_table_free(cfs_cpt_table);
cfs_cpt_table = NULL;
}
@@ -226,7 +221,7 @@
{
cfs_cpt_table = cfs_cpt_table_alloc(1);
- return cfs_cpt_table != NULL ? 0 : -1;
+ return cfs_cpt_table ? 0 : -1;
}
#endif /* HAVE_LIBCFS_CPT */
diff --git a/drivers/staging/lustre/lustre/libcfs/libcfs_lock.c b/drivers/staging/lustre/lustre/libcfs/libcfs_lock.c
index 32db788..2de9eea 100644
--- a/drivers/staging/lustre/lustre/libcfs/libcfs_lock.c
+++ b/drivers/staging/lustre/lustre/libcfs/libcfs_lock.c
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/* Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
@@ -38,7 +33,7 @@
void
cfs_percpt_lock_free(struct cfs_percpt_lock *pcl)
{
- LASSERT(pcl->pcl_locks != NULL);
+ LASSERT(pcl->pcl_locks);
LASSERT(!pcl->pcl_locked);
cfs_percpt_free(pcl->pcl_locks);
@@ -115,7 +110,8 @@
if (i == 0) {
LASSERT(!pcl->pcl_locked);
/* nobody should take private lock after this
- * so I wouldn't starve for too long time */
+		 * so we won't starve for too long
+ */
pcl->pcl_locked = 1;
}
}
diff --git a/drivers/staging/lustre/lustre/libcfs/libcfs_mem.c b/drivers/staging/lustre/lustre/libcfs/libcfs_mem.c
index 27cf861..c5a6951 100644
--- a/drivers/staging/lustre/lustre/libcfs/libcfs_mem.c
+++ b/drivers/staging/lustre/lustre/libcfs/libcfs_mem.c
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/*
@@ -54,7 +49,7 @@
arr = container_of(vars, struct cfs_var_array, va_ptrs[0]);
for (i = 0; i < arr->va_count; i++) {
- if (arr->va_ptrs[i] != NULL)
+ if (arr->va_ptrs[i])
LIBCFS_FREE(arr->va_ptrs[i], arr->va_size);
}
@@ -87,9 +82,10 @@
if (!arr)
return NULL;
- arr->va_size = size = L1_CACHE_ALIGN(size);
- arr->va_count = count;
- arr->va_cptab = cptab;
+ size = L1_CACHE_ALIGN(size);
+ arr->va_size = size;
+ arr->va_count = count;
+ arr->va_cptab = cptab;
for (i = 0; i < count; i++) {
LIBCFS_CPT_ALLOC(arr->va_ptrs[i], cptab, i, size);
diff --git a/drivers/staging/lustre/lustre/libcfs/libcfs_string.c b/drivers/staging/lustre/lustre/libcfs/libcfs_string.c
index 205a3ed..9dca666 100644
--- a/drivers/staging/lustre/lustre/libcfs/libcfs_string.c
+++ b/drivers/staging/lustre/lustre/libcfs/libcfs_string.c
@@ -54,7 +54,8 @@
* and optionally an operator ('+' or '-'). If an operator
* appears first in <str>, '*oldmask' is used as the starting point
* (relative), otherwise minmask is used (absolute). An operator
- * applies to all following tokens up to the next operator. */
+ * applies to all following tokens up to the next operator.
+ */
while (*str != '\0') {
while (isspace(*str))
str++;
@@ -81,8 +82,7 @@
found = 0;
for (i = 0; i < 32; i++) {
debugstr = bit2str(i);
- if (debugstr != NULL &&
- strlen(debugstr) == len &&
+ if (debugstr && strlen(debugstr) == len &&
strncasecmp(str, debugstr, len) == 0) {
if (op == '-')
newmask &= ~(1 << i);
@@ -175,7 +175,7 @@
{
char *end;
- if (next->ls_str == NULL)
+ if (!next->ls_str)
return 0;
/* skip leading white spaces */
@@ -196,7 +196,7 @@
res->ls_str = next->ls_str;
end = memchr(next->ls_str, delim, next->ls_len);
- if (end == NULL) {
+ if (!end) {
/* there is no delimiter in the string */
end = next->ls_str + next->ls_len;
next->ls_str = NULL;
@@ -229,18 +229,13 @@
cfs_str2num_check(char *str, int nob, unsigned *num,
unsigned min, unsigned max)
{
- char *endp;
+ int rc;
str = cfs_trimwhite(str);
- *num = simple_strtoul(str, &endp, 0);
- if (endp == str)
+ rc = kstrtouint(str, 10, num);
+ if (rc)
return 0;
- for (; endp < str + nob; endp++) {
- if (!isspace(*endp))
- return 0;
- }
-
return (*num >= min && *num <= max);
}
EXPORT_SYMBOL(cfs_str2num_check);
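Note the semantics of kstrtouint(): it parses the entire string (tolerating at
most one trailing newline) and, with base 10 as used here, does not accept the
hex/octal prefixes that simple_strtoul(..., 0) did. A minimal usage sketch
(the helper name is hypothetical):

/* Illustration only: bounded unsigned parse on top of kstrtouint(). */
static int parse_bounded_uint(const char *str, unsigned int min,
                              unsigned int max, unsigned int *out)
{
        int rc = kstrtouint(str, 10, out); /* whole string, base 10 */

        if (rc) /* e.g. "42x" or "0x2a" fail here */
                return rc;
        return (*out >= min && *out <= max) ? 0 : -ERANGE;
}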
@@ -266,7 +261,7 @@
struct cfs_lstr tok;
LIBCFS_ALLOC(re, sizeof(*re));
- if (re == NULL)
+ if (!re)
return -ENOMEM;
if (src->ls_len == 1 && src->ls_str[0] == '*') {
@@ -337,18 +332,19 @@
char s[] = "[";
char e[] = "]";
- if (bracketed)
- s[0] = e[0] = '\0';
+ if (bracketed) {
+ s[0] = '\0';
+ e[0] = '\0';
+ }
if (expr->re_lo == expr->re_hi)
i = scnprintf(buffer, count, "%u", expr->re_lo);
else if (expr->re_stride == 1)
i = scnprintf(buffer, count, "%s%u-%u%s",
- s, expr->re_lo, expr->re_hi, e);
+ s, expr->re_lo, expr->re_hi, e);
else
i = scnprintf(buffer, count, "%s%u-%u/%u%s",
- s, expr->re_lo, expr->re_hi,
- expr->re_stride, e);
+ s, expr->re_lo, expr->re_hi, expr->re_stride, e);
return i;
}
@@ -442,7 +438,7 @@
}
LIBCFS_ALLOC(val, sizeof(val[0]) * count);
- if (val == NULL)
+ if (!val)
return -ENOMEM;
count = 0;
@@ -470,7 +466,7 @@
struct cfs_range_expr *expr;
expr = list_entry(expr_list->el_exprs.next,
- struct cfs_range_expr, re_link);
+ struct cfs_range_expr, re_link);
list_del(&expr->re_link);
LIBCFS_FREE(expr, sizeof(*expr));
}
@@ -495,7 +491,7 @@
int rc;
LIBCFS_ALLOC(expr_list, sizeof(*expr_list));
- if (expr_list == NULL)
+ if (!expr_list)
return -ENOMEM;
src.ls_str = str;
@@ -509,7 +505,7 @@
src.ls_len -= 2;
rc = -EINVAL;
- while (src.ls_str != NULL) {
+ while (src.ls_str) {
struct cfs_lstr tok;
if (!cfs_gettok(&src, ',', &tok)) {
@@ -521,15 +517,12 @@
if (rc != 0)
break;
- list_add_tail(&expr->re_link,
- &expr_list->el_exprs);
+ list_add_tail(&expr->re_link, &expr_list->el_exprs);
}
} else {
rc = cfs_range_expr_parse(&src, min, max, 0, &expr);
- if (rc == 0) {
- list_add_tail(&expr->re_link,
- &expr_list->el_exprs);
- }
+ if (rc == 0)
+ list_add_tail(&expr->re_link, &expr_list->el_exprs);
}
if (rc != 0)
@@ -555,8 +548,7 @@
struct cfs_expr_list *el;
while (!list_empty(list)) {
- el = list_entry(list->next,
- struct cfs_expr_list, el_link);
+ el = list_entry(list->next, struct cfs_expr_list, el_link);
list_del(&el->el_link);
cfs_expr_list_free(el);
}
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
index e52afe3..389fb9e 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-cpu.c
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/*
@@ -84,32 +79,32 @@
{
int i;
- if (cptab->ctb_cpu2cpt != NULL) {
+ if (cptab->ctb_cpu2cpt) {
LIBCFS_FREE(cptab->ctb_cpu2cpt,
num_possible_cpus() *
sizeof(cptab->ctb_cpu2cpt[0]));
}
- for (i = 0; cptab->ctb_parts != NULL && i < cptab->ctb_nparts; i++) {
+ for (i = 0; cptab->ctb_parts && i < cptab->ctb_nparts; i++) {
struct cfs_cpu_partition *part = &cptab->ctb_parts[i];
- if (part->cpt_nodemask != NULL) {
+ if (part->cpt_nodemask) {
LIBCFS_FREE(part->cpt_nodemask,
sizeof(*part->cpt_nodemask));
}
- if (part->cpt_cpumask != NULL)
+ if (part->cpt_cpumask)
LIBCFS_FREE(part->cpt_cpumask, cpumask_size());
}
- if (cptab->ctb_parts != NULL) {
+ if (cptab->ctb_parts) {
LIBCFS_FREE(cptab->ctb_parts,
cptab->ctb_nparts * sizeof(cptab->ctb_parts[0]));
}
- if (cptab->ctb_nodemask != NULL)
+ if (cptab->ctb_nodemask)
LIBCFS_FREE(cptab->ctb_nodemask, sizeof(*cptab->ctb_nodemask));
- if (cptab->ctb_cpumask != NULL)
+ if (cptab->ctb_cpumask)
LIBCFS_FREE(cptab->ctb_cpumask, cpumask_size());
LIBCFS_FREE(cptab, sizeof(*cptab));
@@ -123,7 +118,7 @@
int i;
LIBCFS_ALLOC(cptab, sizeof(*cptab));
- if (cptab == NULL)
+ if (!cptab)
return NULL;
cptab->ctb_nparts = ncpt;
@@ -131,19 +126,19 @@
LIBCFS_ALLOC(cptab->ctb_cpumask, cpumask_size());
LIBCFS_ALLOC(cptab->ctb_nodemask, sizeof(*cptab->ctb_nodemask));
- if (cptab->ctb_cpumask == NULL || cptab->ctb_nodemask == NULL)
+ if (!cptab->ctb_cpumask || !cptab->ctb_nodemask)
goto failed;
LIBCFS_ALLOC(cptab->ctb_cpu2cpt,
num_possible_cpus() * sizeof(cptab->ctb_cpu2cpt[0]));
- if (cptab->ctb_cpu2cpt == NULL)
+ if (!cptab->ctb_cpu2cpt)
goto failed;
memset(cptab->ctb_cpu2cpt, -1,
num_possible_cpus() * sizeof(cptab->ctb_cpu2cpt[0]));
LIBCFS_ALLOC(cptab->ctb_parts, ncpt * sizeof(cptab->ctb_parts[0]));
- if (cptab->ctb_parts == NULL)
+ if (!cptab->ctb_parts)
goto failed;
for (i = 0; i < ncpt; i++) {
@@ -151,7 +146,7 @@
LIBCFS_ALLOC(part->cpt_cpumask, cpumask_size());
LIBCFS_ALLOC(part->cpt_nodemask, sizeof(*part->cpt_nodemask));
- if (part->cpt_cpumask == NULL || part->cpt_nodemask == NULL)
+ if (!part->cpt_cpumask || !part->cpt_nodemask)
goto failed;
}
@@ -359,8 +354,6 @@
if (i >= nr_cpu_ids)
node_clear(node, *cptab->ctb_nodemask);
-
- return;
}
EXPORT_SYMBOL(cfs_cpt_unset_cpu);
@@ -530,7 +523,8 @@
return cpt;
/* don't return negative value for safety of upper layer,
- * instead we shadow the unknown cpu to a valid partition ID */
+	 * instead we map the unknown CPU to a valid partition ID
+ */
cpt = cpu % cptab->ctb_nparts;
}
@@ -618,7 +612,7 @@
/* allocate scratch buffer */
LIBCFS_ALLOC(socket, cpumask_size());
LIBCFS_ALLOC(core, cpumask_size());
- if (socket == NULL || core == NULL) {
+ if (!socket || !core) {
rc = -ENOMEM;
goto out;
}
@@ -659,9 +653,9 @@
}
out:
- if (socket != NULL)
+ if (socket)
LIBCFS_FREE(socket, cpumask_size());
- if (core != NULL)
+ if (core)
LIBCFS_FREE(core, cpumask_size());
return rc;
}
@@ -682,7 +676,8 @@
/* generate reasonable number of CPU partitions based on total number
* of CPUs. Preferred N should be a power of 2 and match this condition:
- * 2 * (N - 1)^2 < NCPUS <= 2 * N^2 */
+ * 2 * (N - 1)^2 < NCPUS <= 2 * N^2
+ */
for (ncpt = 2; ncpu > 2 * ncpt * ncpt; ncpt <<= 1)
;
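Worked example for the loop above (illustrative): with ncpu = 16 it stops at
ncpt = 4 (16 > 2*2*2 = 8, but 16 <= 2*4*4 = 32); with ncpu = 72 it stops at
ncpt = 8 (72 > 32 but 72 <= 128). The while (ncpu % ncpt) adjustment further
below then decrements ncpt until it divides ncpu evenly.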
@@ -700,7 +695,8 @@
out:
#if (BITS_PER_LONG == 32)
/* configuring many CPU partitions on a 32-bit system could consume
- * too much memory */
+ * too much memory
+ */
ncpt = min(2U, ncpt);
#endif
while (ncpu % ncpt != 0)
@@ -735,7 +731,7 @@
}
cptab = cfs_cpt_table_alloc(ncpt);
- if (cptab == NULL) {
+ if (!cptab) {
CERROR("Failed to allocate CPU map(%d)\n", ncpt);
goto failed;
}
@@ -747,7 +743,7 @@
}
LIBCFS_ALLOC(mask, cpumask_size());
- if (mask == NULL) {
+ if (!mask) {
CERROR("Failed to allocate scratch cpumask\n");
goto failed;
}
@@ -793,10 +789,10 @@
CERROR("Failed to setup CPU-partition-table with %d CPU-partitions, online HW nodes: %d, HW cpus: %d.\n",
ncpt, num_online_nodes(), num_online_cpus());
- if (mask != NULL)
+ if (mask)
LIBCFS_FREE(mask, cpumask_size());
- if (cptab != NULL)
+ if (cptab)
cfs_cpt_table_free(cptab);
return NULL;
@@ -814,7 +810,7 @@
for (ncpt = 0;; ncpt++) { /* quick scan bracket */
str = strchr(str, '[');
- if (str == NULL)
+ if (!str)
break;
str++;
}
@@ -836,7 +832,7 @@
high = node ? MAX_NUMNODES - 1 : nr_cpu_ids - 1;
cptab = cfs_cpt_table_alloc(ncpt);
- if (cptab == NULL) {
+ if (!cptab) {
CERROR("Failed to allocate cpu partition table\n");
return NULL;
}
@@ -850,11 +846,12 @@
int i;
int n;
- if (bracket == NULL) {
+ if (!bracket) {
if (*str != 0) {
CERROR("Invalid pattern %s\n", str);
goto failed;
- } else if (c != ncpt) {
+ }
+ if (c != ncpt) {
CERROR("expect %d partitions but found %d\n",
ncpt, c);
goto failed;
@@ -885,7 +882,7 @@
}
bracket = strchr(str, ']');
- if (bracket == NULL) {
+ if (!bracket) {
CERROR("missing right bracket for cpt %d, %s\n",
cpt, str);
goto failed;
@@ -943,6 +940,7 @@
spin_lock(&cpt_data.cpt_lock);
cpt_data.cpt_version++;
spin_unlock(&cpt_data.cpt_lock);
+ /* Fall through */
default:
if (action != CPU_DEAD && action != CPU_DEAD_FROZEN) {
CDEBUG(D_INFO, "CPU changed [cpu %u action %lx]\n",
@@ -975,25 +973,25 @@
void
cfs_cpu_fini(void)
{
- if (cfs_cpt_table != NULL)
+ if (cfs_cpt_table)
cfs_cpt_table_free(cfs_cpt_table);
#ifdef CONFIG_HOTPLUG_CPU
unregister_hotcpu_notifier(&cfs_cpu_notifier);
#endif
- if (cpt_data.cpt_cpumask != NULL)
+ if (cpt_data.cpt_cpumask)
LIBCFS_FREE(cpt_data.cpt_cpumask, cpumask_size());
}
int
cfs_cpu_init(void)
{
- LASSERT(cfs_cpt_table == NULL);
+ LASSERT(!cfs_cpt_table);
memset(&cpt_data, 0, sizeof(cpt_data));
LIBCFS_ALLOC(cpt_data.cpt_cpumask, cpumask_size());
- if (cpt_data.cpt_cpumask == NULL) {
+ if (!cpt_data.cpt_cpumask) {
CERROR("Failed to allocate scratch buffer\n");
return -1;
}
@@ -1007,7 +1005,7 @@
if (*cpu_pattern != 0) {
cfs_cpt_table = cfs_cpt_table_create_pattern(cpu_pattern);
- if (cfs_cpt_table == NULL) {
+ if (!cfs_cpt_table) {
CERROR("Failed to create cptab from pattern %s\n",
cpu_pattern);
goto failed;
@@ -1015,7 +1013,7 @@
} else {
cfs_cpt_table = cfs_cpt_table_create(cpu_npartitions);
- if (cfs_cpt_table == NULL) {
+ if (!cfs_cpt_table) {
CERROR("Failed to create ptable with npartitions %d\n",
cpu_npartitions);
goto failed;
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-crypto.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-crypto.c
index 079d50e..ec75801 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-crypto.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-crypto.c
@@ -45,14 +45,14 @@
*type = cfs_crypto_hash_type(alg_id);
- if (*type == NULL) {
+ if (!*type) {
CWARN("Unsupported hash algorithm id = %d, max id is %d\n",
alg_id, CFS_HASH_ALG_MAX);
return -EINVAL;
}
desc->tfm = crypto_alloc_hash((*type)->cht_name, 0, 0);
- if (desc->tfm == NULL)
+ if (!desc->tfm)
return -EINVAL;
if (IS_ERR(desc->tfm)) {
@@ -69,7 +69,7 @@
* Skip this function for digest, because we use shash logic at
* cfs_crypto_hash_alloc.
*/
- if (key != NULL)
+ if (key)
err = crypto_hash_setkey(desc->tfm, key, key_len);
else if ((*type)->cht_key != 0)
err = crypto_hash_setkey(desc->tfm,
@@ -99,14 +99,14 @@
int err;
const struct cfs_crypto_hash_type *type;
- if (buf == NULL || buf_len == 0 || hash_len == NULL)
+ if (!buf || buf_len == 0 || !hash_len)
return -EINVAL;
err = cfs_crypto_hash_alloc(alg_id, &type, &hdesc, key, key_len);
if (err != 0)
return err;
- if (hash == NULL || *hash_len < type->cht_size) {
+ if (!hash || *hash_len < type->cht_size) {
*hash_len = type->cht_size;
crypto_free_hash(hdesc.tfm);
return -ENOSPC;
@@ -125,13 +125,12 @@
cfs_crypto_hash_init(unsigned char alg_id,
unsigned char *key, unsigned int key_len)
{
-
struct hash_desc *hdesc;
int err;
const struct cfs_crypto_hash_type *type;
hdesc = kmalloc(sizeof(*hdesc), 0);
- if (hdesc == NULL)
+ if (!hdesc)
return ERR_PTR(-ENOMEM);
err = cfs_crypto_hash_alloc(alg_id, &type, hdesc, key, key_len);
@@ -175,16 +174,16 @@
int err;
int size = crypto_hash_digestsize(((struct hash_desc *)hdesc)->tfm);
- if (hash_len == NULL) {
+ if (!hash_len) {
crypto_free_hash(((struct hash_desc *)hdesc)->tfm);
kfree(hdesc);
return 0;
}
- if (hash == NULL || *hash_len < size) {
+ if (!hash || *hash_len < size) {
*hash_len = size;
return -ENOSPC;
}
- err = crypto_hash_final((struct hash_desc *) hdesc, hash);
+ err = crypto_hash_final((struct hash_desc *)hdesc, hash);
if (err < 0) {
/* Maybe the caller can fix the error */
@@ -212,7 +211,6 @@
hash, &hash_len);
if (err)
break;
-
}
end = jiffies;
@@ -235,8 +233,7 @@
{
if (hash_alg < CFS_HASH_ALG_MAX)
return cfs_crypto_hash_speeds[hash_alg];
- else
- return -1;
+ return -1;
}
EXPORT_SYMBOL(cfs_crypto_hash_speed);
@@ -249,11 +246,12 @@
unsigned char *data;
unsigned int j;
/* Data block size for testing hash. Maximum
- * kmalloc size for 2.6.18 kernel is 128K */
+ * kmalloc size for 2.6.18 kernel is 128K
+ */
unsigned int data_len = 1 * 128 * 1024;
data = kmalloc(data_len, 0);
- if (data == NULL) {
+ if (!data) {
CERROR("Failed to allocate mem\n");
return -ENOMEM;
}
@@ -285,6 +283,4 @@
{
if (adler32 == 0)
cfs_crypto_adler32_unregister();
-
- return;
}
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-curproc.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-curproc.c
index 68515d9..13d31e8 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-curproc.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-curproc.c
@@ -65,6 +65,7 @@
commit_creds(cred);
}
}
+EXPORT_SYMBOL(cfs_cap_raise);
void cfs_cap_lower(cfs_cap_t cap)
{
@@ -76,11 +77,13 @@
commit_creds(cred);
}
}
+EXPORT_SYMBOL(cfs_cap_lower);
int cfs_cap_raised(cfs_cap_t cap)
{
return cap_raised(current_cap(), cap);
}
+EXPORT_SYMBOL(cfs_cap_raised);
static void cfs_kernel_cap_pack(kernel_cap_t kcap, cfs_cap_t *cap)
{
@@ -95,10 +98,6 @@
cfs_kernel_cap_pack(current_cap(), &cap);
return cap;
}
-
-EXPORT_SYMBOL(cfs_cap_raise);
-EXPORT_SYMBOL(cfs_cap_lower);
-EXPORT_SYMBOL(cfs_cap_raised);
EXPORT_SYMBOL(cfs_curproc_cap_pack);
/*
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-debug.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-debug.c
index 59c7bf3..638e4b3 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-debug.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-debug.c
@@ -80,14 +80,14 @@
argv[0] = lnet_debug_log_upcall;
- LASSERTF(file != NULL, "called on a null filename\n");
+ LASSERTF(file, "called on a null filename\n");
argv[1] = file; /* only need to pass the path of the file */
argv[2] = NULL;
rc = call_usermodehelper(argv[0], argv, envp, 1);
if (rc < 0 && rc != -ENOENT) {
- CERROR("Error %d invoking LNET debug log upcall %s %s; check /proc/sys/lnet/debug_log_upcall\n",
+ CERROR("Error %d invoking LNET debug log upcall %s %s; check /sys/kernel/debug/lnet/debug_log_upcall\n",
rc, argv[0], argv[1]);
} else {
CDEBUG(D_HA, "Invoked LNET debug log upcall %s %s\n",
@@ -106,14 +106,14 @@
argv[0] = lnet_upcall;
argc = 1;
- while (argv[argc] != NULL)
+ while (argv[argc])
argc++;
LASSERT(argc >= 2);
rc = call_usermodehelper(argv[0], argv, envp, 1);
if (rc < 0 && rc != -ENOENT) {
- CERROR("Error %d invoking LNET upcall %s %s%s%s%s%s%s%s%s; check /proc/sys/lnet/upcall\n",
+ CERROR("Error %d invoking LNET upcall %s %s%s%s%s%s%s%s%s; check /sys/kernel/debug/lnet/upcall\n",
rc, argv[0], argv[1],
argc < 3 ? "" : ",", argc < 3 ? "" : argv[2],
argc < 4 ? "" : ",", argc < 4 ? "" : argv[3],
@@ -142,8 +142,9 @@
argv[4] = buf;
argv[5] = NULL;
- libcfs_run_upcall (argv);
+ libcfs_run_upcall(argv);
}
+EXPORT_SYMBOL(libcfs_run_lbug_upcall);
/* coverity[+kill] */
void __noreturn lbug_with_loc(struct libcfs_debug_msg_data *msgdata)
@@ -166,9 +167,10 @@
while (1)
schedule();
}
+EXPORT_SYMBOL(lbug_with_loc);
static int panic_notifier(struct notifier_block *self, unsigned long unused1,
- void *unused2)
+ void *unused2)
{
if (libcfs_panic_in_progress)
return 0;
@@ -187,13 +189,12 @@
void libcfs_register_panic_notifier(void)
{
- atomic_notifier_chain_register(&panic_notifier_list, &libcfs_panic_notifier);
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &libcfs_panic_notifier);
}
void libcfs_unregister_panic_notifier(void)
{
- atomic_notifier_chain_unregister(&panic_notifier_list, &libcfs_panic_notifier);
+ atomic_notifier_chain_unregister(&panic_notifier_list,
+ &libcfs_panic_notifier);
}
-
-EXPORT_SYMBOL(libcfs_run_lbug_upcall);
-EXPORT_SYMBOL(lbug_with_loc);
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-mem.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-mem.c
index 025e2f0..86f32ff 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-mem.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-mem.c
@@ -50,7 +50,7 @@
ret = kzalloc_node(size, flags | __GFP_NOWARN,
cfs_cpt_spread_node(cptab, cpt));
if (!ret) {
- WARN_ON(!(flags & (__GFP_FS|__GFP_HIGH)));
+ WARN_ON(!(flags & (__GFP_FS | __GFP_HIGH)));
ret = vmalloc_node(size, cfs_cpt_spread_node(cptab, cpt));
}
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-module.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-module.c
index e5bc3d3..ebc60ac 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-module.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-module.c
@@ -40,41 +40,10 @@
#define LNET_MINOR 240
-int libcfs_ioctl_getdata(char *buf, char *end, void __user *arg)
+int libcfs_ioctl_data_adjust(struct libcfs_ioctl_data *data)
{
- struct libcfs_ioctl_hdr *hdr;
- struct libcfs_ioctl_data *data;
- int orig_len;
-
- hdr = (struct libcfs_ioctl_hdr *)buf;
- data = (struct libcfs_ioctl_data *)buf;
-
- if (copy_from_user(buf, arg, sizeof(*hdr)))
- return -EFAULT;
-
- if (hdr->ioc_version != LIBCFS_IOCTL_VERSION) {
- CERROR("PORTALS: version mismatch kernel vs application\n");
- return -EINVAL;
- }
-
- if (hdr->ioc_len >= end - buf) {
- CERROR("PORTALS: user buffer exceeds kernel buffer\n");
- return -EINVAL;
- }
-
- if (hdr->ioc_len < sizeof(struct libcfs_ioctl_data)) {
- CERROR("PORTALS: user buffer too small for ioctl\n");
- return -EINVAL;
- }
-
- orig_len = hdr->ioc_len;
- if (copy_from_user(buf, arg, hdr->ioc_len))
- return -EFAULT;
- if (orig_len != data->ioc_len)
- return -EINVAL;
-
if (libcfs_ioctl_is_invalid(data)) {
- CERROR("PORTALS: ioctl not correctly formatted\n");
+ CERROR("LNET: ioctl not correctly formatted\n");
return -EINVAL;
}
@@ -88,6 +57,26 @@
return 0;
}
+int libcfs_ioctl_getdata_len(const struct libcfs_ioctl_hdr __user *arg,
+ __u32 *len)
+{
+ struct libcfs_ioctl_hdr hdr;
+
+ if (copy_from_user(&hdr, arg, sizeof(hdr)))
+ return -EFAULT;
+
+ if (hdr.ioc_version != LIBCFS_IOCTL_VERSION &&
+ hdr.ioc_version != LIBCFS_IOCTL_VERSION2) {
+ CERROR("LNET: version mismatch expected %#x, got %#x\n",
+ LIBCFS_IOCTL_VERSION, hdr.ioc_version);
+ return -EINVAL;
+ }
+
+ *len = hdr.ioc_len;
+
+ return 0;
+}
+
int libcfs_ioctl_popdata(void __user *arg, void *data, int size)
{
if (copy_to_user(arg, data, size))
@@ -102,7 +91,7 @@
if (!inode)
return -EINVAL;
- if (libcfs_psdev_ops.p_open != NULL)
+ if (libcfs_psdev_ops.p_open)
rc = libcfs_psdev_ops.p_open(0, NULL);
else
return -EPERM;
@@ -117,7 +106,7 @@
if (!inode)
return -EINVAL;
- if (libcfs_psdev_ops.p_close != NULL)
+ if (libcfs_psdev_ops.p_close)
rc = libcfs_psdev_ops.p_close(0, NULL);
else
rc = -EPERM;
@@ -134,8 +123,8 @@
return -EACCES;
if (_IOC_TYPE(cmd) != IOC_LIBCFS_TYPE ||
- _IOC_NR(cmd) < IOC_LIBCFS_MIN_NR ||
- _IOC_NR(cmd) > IOC_LIBCFS_MAX_NR) {
+ _IOC_NR(cmd) < IOC_LIBCFS_MIN_NR ||
+ _IOC_NR(cmd) > IOC_LIBCFS_MAX_NR) {
CDEBUG(D_IOCTL, "invalid ioctl ( type %d, nr %d, size %d )\n",
_IOC_TYPE(cmd), _IOC_NR(cmd), _IOC_SIZE(cmd));
return -EINVAL;
@@ -150,7 +139,7 @@
return 0;
}
- if (libcfs_psdev_ops.p_ioctl != NULL)
+ if (libcfs_psdev_ops.p_ioctl)
rc = libcfs_psdev_ops.p_ioctl(&pfile, cmd, (void __user *)arg);
else
rc = -EPERM;
diff --git a/drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c b/drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c
index 64a136c..91c2ae8 100644
--- a/drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c
+++ b/drivers/staging/lustre/lustre/libcfs/linux/linux-tracefile.c
@@ -63,9 +63,8 @@
cfs_trace_data[i] =
kmalloc(sizeof(union cfs_trace_data_union) *
num_possible_cpus(), GFP_KERNEL);
- if (cfs_trace_data[i] == NULL)
+ if (!cfs_trace_data[i])
goto out;
-
}
/* arch related info initialized */
@@ -82,7 +81,7 @@
kmalloc(CFS_TRACE_CONSOLE_BUFFER_SIZE,
GFP_KERNEL);
- if (cfs_trace_console_buffers[i][j] == NULL)
+ if (!cfs_trace_console_buffers[i][j])
goto out;
}
@@ -105,7 +104,7 @@
cfs_trace_console_buffers[i][j] = NULL;
}
- for (i = 0; cfs_trace_data[i] != NULL; i++) {
+ for (i = 0; cfs_trace_data[i]; i++) {
kfree(cfs_trace_data[i]);
cfs_trace_data[i] = NULL;
}
@@ -131,14 +130,13 @@
up_write(&cfs_tracefile_sem);
}
-cfs_trace_buf_type_t cfs_trace_buf_idx_get(void)
+enum cfs_trace_buf_type cfs_trace_buf_idx_get(void)
{
if (in_irq())
return CFS_TCD_TYPE_IRQ;
- else if (in_softirq())
+ if (in_softirq())
return CFS_TCD_TYPE_SOFTIRQ;
- else
- return CFS_TCD_TYPE_PROC;
+ return CFS_TCD_TYPE_PROC;
}
/*
@@ -176,16 +174,6 @@
spin_unlock(&tcd->tcd_lock);
}
-int cfs_tcd_owns_tage(struct cfs_trace_cpu_data *tcd,
- struct cfs_trace_page *tage)
-{
- /*
- * XXX nikita: do NOT call portals_debug_msg() (CDEBUG/ENTRY/EXIT)
- * from here: this will lead to infinite recursion.
- */
- return tcd->tcd_cpu == tage->cpu;
-}
-
void
cfs_set_ptldebug_header(struct ptldebug_header *header,
struct libcfs_debug_msg_data *msgdata,
@@ -200,14 +188,14 @@
header->ph_cpu_id = smp_processor_id();
header->ph_type = cfs_trace_buf_idx_get();
/* y2038 safe since all user space treats this as unsigned, but
- * will overflow in 2106 */
+ * will overflow in 2106
+ */
header->ph_sec = (u32)ts.tv_sec;
header->ph_usec = ts.tv_nsec / NSEC_PER_USEC;
header->ph_stack = stack;
header->ph_pid = current->pid;
header->ph_line_num = msgdata->msg_line;
header->ph_extern_pid = 0;
- return;
}
static char *
@@ -261,12 +249,11 @@
hdr->ph_pid, hdr->ph_extern_pid, file, hdr->ph_line_num,
fn, len, buf);
}
- return;
}
int cfs_trace_max_debug_mb(void)
{
int total_mb = (totalram_pages >> (20 - PAGE_SHIFT));
- return max(512, (total_mb * 80)/100);
+ return max(512, (total_mb * 80) / 100);
}
diff --git a/drivers/staging/lustre/lustre/libcfs/module.c b/drivers/staging/lustre/lustre/libcfs/module.c
index 611607a..05e2c56 100644
--- a/drivers/staging/lustre/lustre/libcfs/module.c
+++ b/drivers/staging/lustre/lustre/libcfs/module.c
@@ -54,11 +54,15 @@
# define DEBUG_SUBSYSTEM S_LNET
+#define LNET_MAX_IOCTL_BUF_LEN (sizeof(struct lnet_ioctl_net_config) + \
+ sizeof(struct lnet_ioctl_config_data))
+
#include "../../include/linux/libcfs/libcfs.h"
#include <asm/div64.h>
#include "../../include/linux/libcfs/libcfs_crypto.h"
#include "../../include/linux/lnet/lib-lnet.h"
+#include "../../include/linux/lnet/lib-dlc.h"
#include "../../include/linux/lnet/lnet.h"
#include "tracefile.h"
@@ -115,11 +119,25 @@
}
EXPORT_SYMBOL(libcfs_deregister_ioctl);
-static int libcfs_ioctl_int(struct cfs_psdev_file *pfile, unsigned long cmd,
- void __user *arg, struct libcfs_ioctl_data *data)
+static int libcfs_ioctl_handle(struct cfs_psdev_file *pfile, unsigned long cmd,
+ void *arg, struct libcfs_ioctl_hdr *hdr)
{
+ struct libcfs_ioctl_data *data = NULL;
int err = -EINVAL;
+ /*
+ * The libcfs_ioctl_data_adjust() function performs adjustment
+ * operations on the libcfs_ioctl_data structure to make
+ * it usable by the code. This doesn't need to be called
+	 * for newly added data structures.
+ */
+ if (hdr->ioc_version == LIBCFS_IOCTL_VERSION) {
+ data = container_of(hdr, struct libcfs_ioctl_data, ioc_hdr);
+ err = libcfs_ioctl_data_adjust(data);
+ if (err)
+ return err;
+ }
+
switch (cmd) {
case IOC_LIBCFS_CLEAR_DEBUG:
libcfs_debug_clear_buffer();
@@ -129,7 +147,7 @@
* Handled in arch/cfs_module.c
*/
case IOC_LIBCFS_MARK_DEBUG:
- if (data->ioc_inlbuf1 == NULL ||
+ if (!data->ioc_inlbuf1 ||
data->ioc_inlbuf1[data->ioc_inllen1 - 1] != '\0')
return -EINVAL;
libcfs_debug_mark_buffer(data->ioc_inlbuf1);
@@ -141,11 +159,11 @@
err = -EINVAL;
down_read(&ioctl_list_sem);
list_for_each_entry(hand, &ioctl_list, item) {
- err = hand->handle_ioctl(cmd, data);
+ err = hand->handle_ioctl(cmd, hdr);
if (err != -EINVAL) {
if (err == 0)
err = libcfs_ioctl_popdata(arg,
- data, sizeof(*data));
+ hdr, hdr->ioc_len);
break;
}
}
@@ -160,26 +178,38 @@
static int libcfs_ioctl(struct cfs_psdev_file *pfile, unsigned long cmd,
void __user *arg)
{
- char *buf;
- struct libcfs_ioctl_data *data;
+ struct libcfs_ioctl_hdr *hdr;
int err = 0;
+ __u32 buf_len;
- LIBCFS_ALLOC_GFP(buf, 1024, GFP_KERNEL);
- if (buf == NULL)
+ err = libcfs_ioctl_getdata_len(arg, &buf_len);
+ if (err)
+ return err;
+
+ /*
+ * do a check here to restrict the size of the memory
+ * to allocate to guard against DoS attacks.
+ */
+ if (buf_len > LNET_MAX_IOCTL_BUF_LEN) {
+ CERROR("LNET: user buffer exceeds kernel buffer\n");
+ return -EINVAL;
+ }
+
+ LIBCFS_ALLOC_GFP(hdr, buf_len, GFP_KERNEL);
+ if (!hdr)
return -ENOMEM;
/* 'cmd' and permissions get checked in our arch-specific caller */
- if (libcfs_ioctl_getdata(buf, buf + 800, arg)) {
- CERROR("PORTALS ioctl: data error\n");
- err = -EINVAL;
+ if (copy_from_user(hdr, arg, buf_len)) {
+ CERROR("LNET ioctl: data error\n");
+ err = -EFAULT;
goto out;
}
- data = (struct libcfs_ioctl_data *)buf;
- err = libcfs_ioctl_int(pfile, cmd, arg, data);
+ err = libcfs_ioctl_handle(pfile, cmd, arg, hdr);
out:
- LIBCFS_FREE(buf, 1024);
+ LIBCFS_FREE(hdr, buf_len);
return err;
}
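The reworked flow is the common two-stage ioctl copy pattern: fetch a
fixed-size header to learn the payload length, bound that length, then copy
the whole buffer. A self-contained sketch of the pattern with hypothetical
names (this is not the patch's API):

/* Illustration only: header-first copy with a length bound. */
struct my_ioc_hdr {
        __u32 version;
        __u32 len;
};

static void *fetch_ioctl_buf(void __user *arg, __u32 max_len)
{
        struct my_ioc_hdr hdr;
        void *buf;

        if (copy_from_user(&hdr, arg, sizeof(hdr)))
                return ERR_PTR(-EFAULT);
        if (hdr.len < sizeof(hdr) || hdr.len > max_len)
                return ERR_PTR(-EINVAL); /* bound before allocating */
        buf = kmalloc(hdr.len, GFP_KERNEL);
        if (!buf)
                return ERR_PTR(-ENOMEM);
        if (copy_from_user(buf, arg, hdr.len)) { /* now the full buffer */
                kfree(buf);
                return ERR_PTR(-EFAULT);
        }
        return buf;
}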
@@ -192,9 +222,9 @@
};
static int proc_call_handler(void *data, int write, loff_t *ppos,
- void __user *buffer, size_t *lenp,
- int (*handler)(void *data, int write,
- loff_t pos, void __user *buffer, int len))
+ void __user *buffer, size_t *lenp,
+ int (*handler)(void *data, int write, loff_t pos,
+ void __user *buffer, int len))
{
int rc = handler(data, write, *ppos, buffer, *lenp);
@@ -329,11 +359,11 @@
if (write)
return -EPERM;
- LASSERT(cfs_cpt_table != NULL);
+ LASSERT(cfs_cpt_table);
while (1) {
LIBCFS_ALLOC(buf, len);
- if (buf == NULL)
+ if (!buf)
return -ENOMEM;
rc = cfs_cpt_table_print(cfs_cpt_table, buf, len);
@@ -355,23 +385,19 @@
rc = cfs_trace_copyout_string(buffer, nob, buf + pos, NULL);
out:
- if (buf != NULL)
+ if (buf)
LIBCFS_FREE(buf, len);
return rc;
}
static int proc_cpt_table(struct ctl_table *table, int write,
- void __user *buffer, size_t *lenp, loff_t *ppos)
+ void __user *buffer, size_t *lenp, loff_t *ppos)
{
return proc_call_handler(table->data, write, ppos, buffer, lenp,
__proc_cpt_table);
}
static struct ctl_table lnet_table[] = {
- /*
- * NB No .strategy entries have been provided since sysctl(8) prefers
- * to go via /proc for portability.
- */
{
.procname = "debug",
.data = &libcfs_debug,
@@ -535,7 +561,7 @@
void lustre_insert_debugfs(struct ctl_table *table,
const struct lnet_debugfs_symlink_def *symlinks)
{
- if (lnet_debugfs_root == NULL)
+ if (!lnet_debugfs_root)
lnet_debugfs_root = debugfs_create_dir("lnet", NULL);
/* Even if we cannot create it, just ignore it altogether */
@@ -553,14 +579,12 @@
for (; symlinks && symlinks->name; symlinks++)
debugfs_create_symlink(symlinks->name, lnet_debugfs_root,
symlinks->target);
-
}
EXPORT_SYMBOL_GPL(lustre_insert_debugfs);
static void lustre_remove_debugfs(void)
{
- if (lnet_debugfs_root != NULL)
- debugfs_remove_recursive(lnet_debugfs_root);
+ debugfs_remove_recursive(lnet_debugfs_root);
lnet_debugfs_root = NULL;
}
diff --git a/drivers/staging/lustre/lustre/libcfs/prng.c b/drivers/staging/lustre/lustre/libcfs/prng.c
index 4147664..c75ae9a 100644
--- a/drivers/staging/lustre/lustre/libcfs/prng.c
+++ b/drivers/staging/lustre/lustre/libcfs/prng.c
@@ -42,11 +42,11 @@
#include "../../include/linux/libcfs/libcfs.h"
/*
-From: George Marsaglia <geo@stat.fsu.edu>
-Newsgroups: sci.math
-Subject: Re: A RANDOM NUMBER GENERATOR FOR C
-Date: Tue, 30 Sep 1997 05:29:35 -0700
-
+ * From: George Marsaglia <geo@stat.fsu.edu>
+ * Newsgroups: sci.math
+ * Subject: Re: A RANDOM NUMBER GENERATOR FOR C
+ * Date: Tue, 30 Sep 1997 05:29:35 -0700
+ *
* You may replace the two constants 36969 and 18000 by any
* pair of distinct constants from this list:
* 18000 18030 18273 18513 18879 19074 19098 19164 19215 19584
@@ -58,7 +58,8 @@
* 27960 28320 28380 28689 28710 28794 28854 28959 28980 29013
* 29379 29889 30135 30345 30459 30714 30903 30963 31059 31083
* (or any other 16-bit constants k for which both k*2^16-1
- * and k*2^15-1 are prime) */
+ * and k*2^15-1 are prime)
+ */
#define RANDOM_CONST_A 18030
#define RANDOM_CONST_B 29013
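These constants feed Marsaglia's multiply-with-carry recurrence; a sketch of
how two 16-bit MWC streams are typically combined into one 32-bit output
(illustrative only; the seed values are arbitrary):

/* Each step: seed = k * low16(seed) + high16(seed). */
static unsigned int seed_x = 521288629;
static unsigned int seed_y = 362436069;

static unsigned int mwc_rand(void)
{
        seed_x = RANDOM_CONST_A * (seed_x & 0xffff) + (seed_x >> 16);
        seed_y = RANDOM_CONST_B * (seed_y & 0xffff) + (seed_y >> 16);
        return (seed_x << 16) + (seed_y & 0xffff);
}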
diff --git a/drivers/staging/lustre/lustre/libcfs/tracefile.c b/drivers/staging/lustre/lustre/libcfs/tracefile.c
index 65c4f1a..ec3bc04 100644
--- a/drivers/staging/lustre/lustre/libcfs/tracefile.c
+++ b/drivers/staging/lustre/lustre/libcfs/tracefile.c
@@ -56,6 +56,51 @@
static atomic_t cfs_tage_allocated = ATOMIC_INIT(0);
+struct page_collection {
+ struct list_head pc_pages;
+ /*
+ * if this flag is set, collect_pages() will spill both
+ * ->tcd_daemon_pages and ->tcd_pages to the ->pc_pages. Otherwise,
+ * only ->tcd_pages are spilled.
+ */
+ int pc_want_daemon_pages;
+};
+
+struct tracefiled_ctl {
+ struct completion tctl_start;
+ struct completion tctl_stop;
+ wait_queue_head_t tctl_waitq;
+ pid_t tctl_pid;
+ atomic_t tctl_shutdown;
+};
+
+/*
+ * small data-structure for each page owned by tracefiled.
+ */
+struct cfs_trace_page {
+ /*
+ * page itself
+ */
+ struct page *page;
+ /*
+ * linkage into one of the lists in trace_data_union or
+ * page_collection
+ */
+ struct list_head linkage;
+ /*
+ * number of bytes used within this page
+ */
+ unsigned int used;
+ /*
+ * cpu that owns this page
+ */
+ unsigned short cpu;
+ /*
+ * type(context) of this page
+ */
+ unsigned short type;
+};
+
static void put_pages_on_tcd_daemon_list(struct page_collection *pc,
struct cfs_trace_cpu_data *tcd);
@@ -80,11 +125,11 @@
*/
gfp |= __GFP_NOWARN;
page = alloc_page(gfp);
- if (page == NULL)
+ if (!page)
return NULL;
tage = kmalloc(sizeof(*tage), gfp);
- if (tage == NULL) {
+ if (!tage) {
__free_page(page);
return NULL;
}
@@ -96,9 +141,6 @@
static void cfs_tage_free(struct cfs_trace_page *tage)
{
- __LASSERT(tage != NULL);
- __LASSERT(tage->page != NULL);
-
__free_page(tage->page);
kfree(tage);
atomic_dec(&cfs_tage_allocated);
@@ -107,9 +149,6 @@
static void cfs_tage_to_tail(struct cfs_trace_page *tage,
struct list_head *queue)
{
- __LASSERT(tage != NULL);
- __LASSERT(queue != NULL);
-
list_move_tail(&tage->linkage, queue);
}
@@ -127,7 +166,7 @@
struct cfs_trace_page *tage;
tage = cfs_tage_alloc(gfp);
- if (tage == NULL)
+ if (!tage)
break;
list_add_tail(&tage->linkage, stock);
}
@@ -154,7 +193,7 @@
list_del_init(&tage->linkage);
} else {
tage = cfs_tage_alloc(GFP_ATOMIC);
- if (unlikely(tage == NULL)) {
+ if (unlikely(!tage)) {
if ((!memory_pressure_get() ||
in_interrupt()) && printk_ratelimit())
printk(KERN_WARNING
@@ -227,7 +266,7 @@
}
tage = cfs_trace_get_tage_try(tcd, len);
- if (tage != NULL)
+ if (tage)
return tage;
if (thread_running)
cfs_tcd_shrink(tcd);
@@ -278,10 +317,11 @@
/* cfs_trace_get_tcd() grabs a lock, which disables preemption and
* pins us to a particular CPU. This avoids an smp_processor_id()
- * warning on Linux when debugging is enabled. */
+ * warning on Linux when debugging is enabled.
+ */
cfs_set_ptldebug_header(&header, msgdata, CDEBUG_STACK());
- if (tcd == NULL) /* arch may not log in IRQ context */
+ if (!tcd) /* arch may not log in IRQ context */
goto console;
if (tcd->tcd_cur_pages == 0)
@@ -301,14 +341,14 @@
if (libcfs_debug_binary)
known_size += sizeof(header);
- /*/
+ /*
* '2' is used because vsnprintf returns the real size required for the
* output _without_ the terminating NUL, in case 'needed' was too small
* for this format.
*/
for (i = 0; i < 2; i++) {
tage = cfs_trace_get_tage(tcd, needed + known_size + 1);
- if (tage == NULL) {
+ if (!tage) {
if (needed + known_size > PAGE_CACHE_SIZE)
mask |= D_ERROR;
@@ -352,7 +392,7 @@
break;
}
- if (*(string_buf+needed-1) != '\n')
+ if (*(string_buf + needed - 1) != '\n')
printk(KERN_INFO "format at %s:%d:%s doesn't end in newline\n",
file, msgdata->msg_line, msgdata->msg_fn);
@@ -384,30 +424,30 @@
__LASSERT(debug_buf == string_buf);
tage->used += needed;
- __LASSERT (tage->used <= PAGE_CACHE_SIZE);
+ __LASSERT(tage->used <= PAGE_CACHE_SIZE);
console:
if ((mask & libcfs_printk) == 0) {
/* no console output requested */
- if (tcd != NULL)
+ if (tcd)
cfs_trace_put_tcd(tcd);
return 1;
}
- if (cdls != NULL) {
+ if (cdls) {
if (libcfs_console_ratelimit &&
cdls->cdls_next != 0 && /* not first time ever */
!cfs_time_after(cfs_time_current(), cdls->cdls_next)) {
/* skipping a console message */
cdls->cdls_count++;
- if (tcd != NULL)
+ if (tcd)
cfs_trace_put_tcd(tcd);
return 1;
}
- if (cfs_time_after(cfs_time_current(), cdls->cdls_next +
- libcfs_console_max_delay
- + cfs_time_seconds(10))) {
+ if (cfs_time_after(cfs_time_current(),
+ cdls->cdls_next + libcfs_console_max_delay +
+ cfs_time_seconds(10))) {
/* last timeout was a long time ago */
cdls->cdls_delay /= libcfs_console_backoff * 4;
} else {
@@ -423,7 +463,7 @@
cdls->cdls_next = (cfs_time_current() + cdls->cdls_delay) | 1;
}
- if (tcd != NULL) {
+ if (tcd) {
cfs_print_to_console(&header, mask, string_buf, needed, file,
msgdata->msg_fn);
cfs_trace_put_tcd(tcd);
@@ -431,18 +471,18 @@
string_buf = cfs_trace_get_console_buffer();
needed = 0;
- if (format1 != NULL) {
+ if (format1) {
va_copy(ap, args);
needed = vsnprintf(string_buf,
CFS_TRACE_CONSOLE_BUFFER_SIZE,
format1, ap);
va_end(ap);
}
- if (format2 != NULL) {
+ if (format2) {
remain = CFS_TRACE_CONSOLE_BUFFER_SIZE - needed;
if (remain > 0) {
va_start(ap, format2);
- needed += vsnprintf(string_buf+needed, remain,
+ needed += vsnprintf(string_buf + needed, remain,
format2, ap);
va_end(ap);
}
@@ -453,7 +493,7 @@
put_cpu();
}
- if (cdls != NULL && cdls->cdls_count != 0) {
+ if (cdls && cdls->cdls_count != 0) {
string_buf = cfs_trace_get_console_buffer();
needed = snprintf(string_buf, CFS_TRACE_CONSOLE_BUFFER_SIZE,
@@ -497,7 +537,8 @@
{
/* Do the collect_pages job on a single CPU: assumes that all other
* CPUs have been stopped during a panic. If this isn't true for some
- * arch, this will have to be implemented separately in each arch. */
+ * arch, this will have to be implemented separately in each arch.
+ */
int i;
int j;
struct cfs_trace_cpu_data *tcd;
@@ -509,8 +550,7 @@
tcd->tcd_cur_pages = 0;
if (pc->pc_want_daemon_pages) {
- list_splice_init(&tcd->tcd_daemon_pages,
- &pc->pc_pages);
+ list_splice_init(&tcd->tcd_daemon_pages, &pc->pc_pages);
tcd->tcd_cur_daemon_pages = 0;
}
}
@@ -527,7 +567,7 @@
tcd->tcd_cur_pages = 0;
if (pc->pc_want_daemon_pages) {
list_splice_init(&tcd->tcd_daemon_pages,
- &pc->pc_pages);
+ &pc->pc_pages);
tcd->tcd_cur_daemon_pages = 0;
}
}
@@ -558,7 +598,6 @@
list_for_each_entry_safe(tage, tmp, &pc->pc_pages,
linkage) {
-
__LASSERT_TAGE_INVARIANT(tage);
if (tage->cpu != cpu || tage->type != i)
@@ -580,7 +619,8 @@
/* Add pages to a per-cpu debug daemon ringbuffer. This buffer makes sure that
* we have a good amount of data at all times for dumping during an LBUG, even
* if we have been steadily writing (and otherwise discarding) pages via the
- * debug daemon. */
+ * debug daemon.
+ */
static void put_pages_on_tcd_daemon_list(struct page_collection *pc,
struct cfs_trace_cpu_data *tcd)
{
@@ -588,7 +628,6 @@
struct cfs_trace_page *tmp;
list_for_each_entry_safe(tage, tmp, &pc->pc_pages, linkage) {
-
__LASSERT_TAGE_INVARIANT(tage);
if (tage->cpu != tcd->tcd_cpu || tage->type != tcd->tcd_type)
@@ -674,12 +713,13 @@
cfs_tracefile_write_lock();
- filp = filp_open(filename, O_CREAT|O_EXCL|O_WRONLY|O_LARGEFILE, 0600);
+ filp = filp_open(filename, O_CREAT | O_EXCL | O_WRONLY | O_LARGEFILE,
+ 0600);
if (IS_ERR(filp)) {
rc = PTR_ERR(filp);
filp = NULL;
pr_err("LustreError: can't open %s for dump: rc %d\n",
- filename, rc);
+ filename, rc);
goto out;
}
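
filp_open() returns an ERR_PTR-encoded pointer, never NULL, so the
error path converts with PTR_ERR() and clears the local before the
unwind. A hedged sketch of the same pattern (the function name here is
illustrative):

    #include <linux/fs.h>
    #include <linux/err.h>

    static int dump_open(const char *filename, struct file **filpp)
    {
            struct file *filp;

            filp = filp_open(filename,
                             O_CREAT | O_EXCL | O_WRONLY | O_LARGEFILE,
                             0600);
            if (IS_ERR(filp))
                    return PTR_ERR(filp);   /* never mix NULL and ERR_PTR */

            *filpp = filp;
            return 0;
    }

The hunk also adds the spaces around | that checkpatch asks for and
wraps the argument list at the open parenthesis.
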
@@ -691,10 +731,10 @@
}
/* ok, for now, just write the pages. in the future we'll be building
- * iobufs with the pages and calling generic_direct_IO */
+ * iobufs with the pages and calling generic_direct_IO
+ */
MMSPACE_OPEN;
list_for_each_entry_safe(tage, tmp, &pc.pc_pages, linkage) {
-
__LASSERT_TAGE_INVARIANT(tage);
buf = kmap(tage->page);
@@ -732,7 +772,6 @@
pc.pc_want_daemon_pages = 1;
collect_pages(&pc);
list_for_each_entry_safe(tage, tmp, &pc.pc_pages, linkage) {
-
__LASSERT_TAGE_INVARIANT(tage);
list_del(&tage->linkage);
@@ -771,9 +810,10 @@
int cfs_trace_copyout_string(char __user *usr_buffer, int usr_buffer_nob,
const char *knl_buffer, char *append)
{
- /* NB if 'append' != NULL, it's a single character to append to the
- * copied out string - usually "\n", for /proc entries and "" (i.e. a
- * terminating zero byte) for sysctl entries */
+ /*
+ * NB if 'append' != NULL, it's a single character to append to the
+ * copied out string - usually "\n" or "" (i.e. a terminating zero byte)
+ */
int nob = strlen(knl_buffer);
if (nob > usr_buffer_nob)
@@ -782,7 +822,7 @@
if (copy_to_user(usr_buffer, knl_buffer, nob))
return -EFAULT;
- if (append != NULL && nob < usr_buffer_nob) {
+ if (append && nob < usr_buffer_nob) {
if (copy_to_user(usr_buffer + nob, append, 1))
return -EFAULT;
@@ -799,7 +839,7 @@
return -EINVAL;
*str = kmalloc(nob, GFP_KERNEL | __GFP_ZERO);
- if (*str == NULL)
+ if (!*str)
return -ENOMEM;
return 0;
@@ -842,12 +882,15 @@
memset(cfs_tracefile, 0, sizeof(cfs_tracefile));
} else if (strncmp(str, "size=", 5) == 0) {
- cfs_tracefile_size = simple_strtoul(str + 5, NULL, 0);
- if (cfs_tracefile_size < 10 || cfs_tracefile_size > 20480)
- cfs_tracefile_size = CFS_TRACEFILE_SIZE;
- else
- cfs_tracefile_size <<= 20;
+ unsigned long tmp;
+ rc = kstrtoul(str + 5, 10, &tmp);
+ if (!rc) {
+ if (tmp < 10 || tmp > 20480)
+ cfs_tracefile_size = CFS_TRACEFILE_SIZE;
+ else
+ cfs_tracefile_size = tmp << 20;
+ }
} else if (strlen(str) >= sizeof(cfs_tracefile)) {
rc = -ENAMETOOLONG;
} else if (str[0] != '/') {
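
simple_strtoul() is deprecated because it silently ignores trailing
garbage, while kstrtoul() rejects malformed input outright. Note the
patch also pins the base to 10, so hexadecimal sizes are no longer
accepted. The conversion pattern, sketched with illustrative names:

    #include <linux/kernel.h>

    #define TRACEFILE_SIZE_DEFAULT  (500 << 20)

    static unsigned long parse_size_mb(const char *str)
    {
            unsigned long tmp;

            /* kstrtoul() returns 0 on success, -EINVAL/-ERANGE on error */
            if (kstrtoul(str, 10, &tmp) || tmp < 10 || tmp > 20480)
                    return TRACEFILE_SIZE_DEFAULT;
            return tmp << 20;       /* megabytes to bytes */
    }
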
@@ -877,7 +920,7 @@
return rc;
rc = cfs_trace_copyin_string(str, usr_str_nob + 1,
- usr_str, usr_str_nob);
+ usr_str, usr_str_nob);
if (rc == 0)
rc = cfs_trace_daemon_command(str);
@@ -977,7 +1020,7 @@
}
}
cfs_tracefile_read_unlock();
- if (filp == NULL) {
+ if (!filp) {
put_pages_on_daemon_list(&pc);
__LASSERT(list_empty(&pc.pc_pages));
goto end_loop;
@@ -985,8 +1028,7 @@
MMSPACE_OPEN;
- list_for_each_entry_safe(tage, tmp, &pc.pc_pages,
- linkage) {
+ list_for_each_entry_safe(tage, tmp, &pc.pc_pages, linkage) {
static loff_t f_pos;
__LASSERT_TAGE_INVARIANT(tage);
@@ -1017,8 +1059,7 @@
int i;
printk(KERN_ALERT "Lustre: trace pages aren't empty\n");
- pr_err("total cpus(%d): ",
- num_possible_cpus());
+ pr_err("total cpus(%d): ", num_possible_cpus());
for (i = 0; i < num_possible_cpus(); i++)
if (cpu_online(i))
pr_cont("%d(on) ", i);
@@ -1028,9 +1069,9 @@
i = 0;
list_for_each_entry_safe(tage, tmp, &pc.pc_pages,
- linkage)
+ linkage)
pr_err("page %d belongs to cpu %d\n",
- ++i, tage->cpu);
+ ++i, tage->cpu);
pr_err("There are %d pages unwritten\n", i);
}
__LASSERT(list_empty(&pc.pc_pages));
@@ -1056,6 +1097,7 @@
int cfs_trace_start_thread(void)
{
struct tracefiled_ctl *tctl = &trace_tctl;
+ struct task_struct *task;
int rc = 0;
mutex_lock(&cfs_trace_thread_mutex);
@@ -1067,8 +1109,9 @@
init_waitqueue_head(&tctl->tctl_waitq);
atomic_set(&tctl->tctl_shutdown, 0);
- if (IS_ERR(kthread_run(tracefiled, tctl, "ktracefiled"))) {
- rc = -ECHILD;
+ task = kthread_run(tracefiled, tctl, "ktracefiled");
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
goto out;
}
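
kthread_run() also returns an ERR_PTR() on failure; capturing the task
pointer lets the real errno (typically -ENOMEM, or -EINTR if the
creator was killed) propagate instead of the hard-coded -ECHILD. In
isolation:

    #include <linux/kthread.h>
    #include <linux/err.h>

    static int start_worker(int (*fn)(void *), void *data)
    {
            struct task_struct *task;

            task = kthread_run(fn, data, "kworker_sketch");
            if (IS_ERR(task))
                    return PTR_ERR(task);   /* propagate the real error */
            return 0;
    }
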
@@ -1135,7 +1178,7 @@
tcd->tcd_shutting_down = 1;
list_for_each_entry_safe(tage, tmp, &tcd->tcd_pages,
- linkage) {
+ linkage) {
__LASSERT_TAGE_INVARIANT(tage);
list_del(&tage->linkage);
diff --git a/drivers/staging/lustre/lustre/libcfs/tracefile.h b/drivers/staging/lustre/lustre/libcfs/tracefile.h
index 7bf1471..4c77f90 100644
--- a/drivers/staging/lustre/lustre/libcfs/tracefile.h
+++ b/drivers/staging/lustre/lustre/libcfs/tracefile.h
@@ -39,12 +39,12 @@
#include "../../include/linux/libcfs/libcfs.h"
-typedef enum {
+enum cfs_trace_buf_type {
CFS_TCD_TYPE_PROC = 0,
CFS_TCD_TYPE_SOFTIRQ,
CFS_TCD_TYPE_IRQ,
CFS_TCD_TYPE_MAX
-} cfs_trace_buf_type_t;
+};
/* trace file lock routines */
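
Dropping cfs_trace_buf_type_t follows the CodingStyle rule against
typedefs for structs and enums: the tag is information, not noise.
Before and after, with illustrative names:

    /* old style, removed by this patch */
    typedef enum {
            SAMPLE_OLD_A,
            SAMPLE_OLD_B,
    } sample_type_t;

    /* new style: every use site spells out "enum sample_type" */
    enum sample_type {
            SAMPLE_A,
            SAMPLE_B,
    };

    enum sample_type sample_next(enum sample_type cur);

The cfs_trace_buf_idx_get() declaration change further down is the
matching use-site update.
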
@@ -101,8 +101,10 @@
#define CFS_TRACEFILE_SIZE (500 << 20)
-/* Size of a buffer for sprinting console messages if we can't get a page
- * from system */
+/*
+ * Size of a buffer for printing console messages if we can't get a page
+ * from the system
+ */
#define CFS_TRACE_CONSOLE_BUFFER_SIZE 1024
union cfs_trace_data_union {
@@ -185,66 +187,15 @@
extern union cfs_trace_data_union (*cfs_trace_data[TCD_MAX_TYPES])[NR_CPUS];
#define cfs_tcd_for_each(tcd, i, j) \
- for (i = 0; cfs_trace_data[i] != NULL; i++) \
- for (j = 0, ((tcd) = &(*cfs_trace_data[i])[j].tcd); \
- j < num_possible_cpus(); \
- j++, (tcd) = &(*cfs_trace_data[i])[j].tcd)
+ for (i = 0; cfs_trace_data[i]; i++) \
+ for (j = 0, ((tcd) = &(*cfs_trace_data[i])[j].tcd); \
+ j < num_possible_cpus(); \
+ j++, (tcd) = &(*cfs_trace_data[i])[j].tcd)
#define cfs_tcd_for_each_type_lock(tcd, i, cpu) \
- for (i = 0; cfs_trace_data[i] && \
- (tcd = &(*cfs_trace_data[i])[cpu].tcd) && \
- cfs_trace_lock_tcd(tcd, 1); cfs_trace_unlock_tcd(tcd, 1), i++)
-
-/* XXX nikita: this declaration is internal to tracefile.c and should probably
- * be moved there */
-struct page_collection {
- struct list_head pc_pages;
- /*
- * if this flag is set, collect_pages() will spill both
- * ->tcd_daemon_pages and ->tcd_pages to the ->pc_pages. Otherwise,
- * only ->tcd_pages are spilled.
- */
- int pc_want_daemon_pages;
-};
-
-/* XXX nikita: this declaration is internal to tracefile.c and should probably
- * be moved there */
-struct tracefiled_ctl {
- struct completion tctl_start;
- struct completion tctl_stop;
- wait_queue_head_t tctl_waitq;
- pid_t tctl_pid;
- atomic_t tctl_shutdown;
-};
-
-/*
- * small data-structure for each page owned by tracefiled.
- */
-/* XXX nikita: this declaration is internal to tracefile.c and should probably
- * be moved there */
-struct cfs_trace_page {
- /*
- * page itself
- */
- struct page *page;
- /*
- * linkage into one of the lists in trace_data_union or
- * page_collection
- */
- struct list_head linkage;
- /*
- * number of bytes used within this page
- */
- unsigned int used;
- /*
- * cpu that owns this page
- */
- unsigned short cpu;
- /*
- * type(context) of this page
- */
- unsigned short type;
-};
+ for (i = 0; cfs_trace_data[i] && \
+ (tcd = &(*cfs_trace_data[i])[cpu].tcd) && \
+ cfs_trace_lock_tcd(tcd, 1); cfs_trace_unlock_tcd(tcd, 1), i++)
void cfs_set_ptldebug_header(struct ptldebug_header *header,
struct libcfs_debug_msg_data *m,
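
The two iteration macros above were only re-indented, but their shape
is worth noting: a NULL-terminated outer table drives a per-CPU inner
walk. A generic sketch of the same double-loop macro, illustrative and
not the Lustre definition:

    /* rows: NULL-terminated array of row pointers; ncols: columns per row */
    #define for_each_cell(cell, i, j, rows, ncols)                  \
            for ((i) = 0; (rows)[i]; (i)++)                         \
                    for ((j) = 0, (cell) = &(rows)[i][(j)];         \
                         (j) < (ncols);                             \
                         (j)++, (cell) = &(rows)[i][(j)])
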
@@ -257,7 +208,7 @@
void cfs_trace_unlock_tcd(struct cfs_trace_cpu_data *tcd, int walking);
extern char *cfs_trace_console_buffers[NR_CPUS][CFS_TCD_TYPE_MAX];
-cfs_trace_buf_type_t cfs_trace_buf_idx_get(void);
+enum cfs_trace_buf_type cfs_trace_buf_idx_get(void);
static inline char *
cfs_trace_get_console_buffer(void)
@@ -279,8 +230,7 @@
return tcd;
}
-static inline void
-cfs_trace_put_tcd (struct cfs_trace_cpu_data *tcd)
+static inline void cfs_trace_put_tcd(struct cfs_trace_cpu_data *tcd)
{
cfs_trace_unlock_tcd(tcd, 0);
@@ -290,9 +240,6 @@
int cfs_trace_refill_stock(struct cfs_trace_cpu_data *tcd, gfp_t gfp,
struct list_head *stock);
-int cfs_tcd_owns_tage(struct cfs_trace_cpu_data *tcd,
- struct cfs_trace_page *tage);
-
void cfs_trace_assertion_failed(const char *str,
struct libcfs_debug_msg_data *m);
@@ -308,8 +255,8 @@
#define __LASSERT_TAGE_INVARIANT(tage) \
do { \
- __LASSERT(tage != NULL); \
- __LASSERT(tage->page != NULL); \
+ __LASSERT(tage); \
+ __LASSERT(tage->page); \
__LASSERT(tage->used <= PAGE_CACHE_SIZE); \
__LASSERT(page_count(tage->page) > 0); \
} while (0)
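
__LASSERT_TAGE_INVARIANT keeps its do { } while (0) wrapper, which is
what lets a multi-statement macro behave as a single statement after an
unbraced if; only the redundant "!= NULL" comparisons were dropped. The
skeleton, using BUG_ON() as a stand-in for __LASSERT():

    #include <linux/bug.h>

    #define CHECK_TAGE(tage)                                        \
    do {                                                            \
            BUG_ON(!(tage));        /* pointer tested as boolean */ \
            BUG_ON(!(tage)->page);                                  \
    } while (0)
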
diff --git a/drivers/staging/lustre/lustre/libcfs/workitem.c b/drivers/staging/lustre/lustre/libcfs/workitem.c
index 60bb88a..136bc13 100644
--- a/drivers/staging/lustre/lustre/libcfs/workitem.c
+++ b/drivers/staging/lustre/lustre/libcfs/workitem.c
@@ -46,18 +46,21 @@
#define CFS_WS_NAME_LEN 16
struct cfs_wi_sched {
- struct list_head ws_list; /* chain on global list */
+ /* chain on global list */
+ struct list_head ws_list;
/** serialised workitems */
spinlock_t ws_lock;
/** where schedulers sleep */
wait_queue_head_t ws_waitq;
/** concurrent workitems */
struct list_head ws_runq;
- /** rescheduled running-workitems, a workitem can be rescheduled
+ /**
+ * rescheduled running-workitems, a workitem can be rescheduled
* while running in wi_action(), but we don't want to execute it again
* unless it returns from wi_action(), so we put it on ws_rerunq
* while rescheduling, and move it to runq after it returns
- * from wi_action() */
+ * from wi_action()
+ */
struct list_head ws_rerunq;
/** CPT-table for this scheduler */
struct cfs_cpt_table *ws_cptab;
@@ -128,8 +131,6 @@
wi->wi_scheduled = 1; /* LBUG future schedule attempts */
spin_unlock(&sched->ws_lock);
-
- return;
}
EXPORT_SYMBOL(cfs_wi_exit);
@@ -163,7 +164,7 @@
wi->wi_scheduled = 0;
}
- LASSERT (list_empty(&wi->wi_list));
+ LASSERT(list_empty(&wi->wi_list));
spin_unlock(&sched->ws_lock);
return rc;
@@ -186,7 +187,7 @@
spin_lock(&sched->ws_lock);
if (!wi->wi_scheduled) {
- LASSERT (list_empty(&wi->wi_list));
+ LASSERT(list_empty(&wi->wi_list));
wi->wi_scheduled = 1;
sched->ws_nscheduled++;
@@ -198,21 +199,19 @@
}
}
- LASSERT (!list_empty(&wi->wi_list));
+ LASSERT(!list_empty(&wi->wi_list));
spin_unlock(&sched->ws_lock);
- return;
}
EXPORT_SYMBOL(cfs_wi_schedule);
-static int
-cfs_wi_scheduler (void *arg)
+static int cfs_wi_scheduler(void *arg)
{
struct cfs_wi_sched *sched = (struct cfs_wi_sched *)arg;
cfs_block_allsigs();
/* CPT affinity scheduler? */
- if (sched->ws_cptab != NULL)
+ if (sched->ws_cptab)
if (cfs_cpt_bind(sched->ws_cptab, sched->ws_cpt) != 0)
CWARN("Failed to bind %s on CPT %d\n",
sched->ws_name, sched->ws_cpt);
@@ -234,8 +233,8 @@
while (!list_empty(&sched->ws_runq) &&
nloops < CFS_WI_RESCHED) {
- wi = list_entry(sched->ws_runq.next,
- cfs_workitem_t, wi_list);
+ wi = list_entry(sched->ws_runq.next, cfs_workitem_t,
+ wi_list);
LASSERT(wi->wi_scheduled && !wi->wi_running);
list_del_init(&wi->wi_list);
@@ -261,14 +260,16 @@
LASSERT(wi->wi_scheduled);
/* wi is rescheduled, should be on rerunq now, we
- * move it to runq so it can run action now */
+ * move it to runq so it can run action now
+ */
list_move_tail(&wi->wi_list, &sched->ws_runq);
}
if (!list_empty(&sched->ws_runq)) {
spin_unlock(&sched->ws_lock);
/* don't sleep because some workitems still
- * expect me to come back soon */
+ * expect me to come back soon
+ */
cond_resched();
spin_lock(&sched->ws_lock);
continue;
@@ -343,11 +344,11 @@
LASSERT(cfs_wi_data.wi_init);
LASSERT(!cfs_wi_data.wi_stopping);
- LASSERT(cptab == NULL || cpt == CFS_CPT_ANY ||
+ LASSERT(!cptab || cpt == CFS_CPT_ANY ||
(cpt >= 0 && cpt < cfs_cpt_number(cptab)));
LIBCFS_ALLOC(sched, sizeof(*sched));
- if (sched == NULL)
+ if (!sched)
return -ENOMEM;
strlcpy(sched->ws_name, name, CFS_WS_NAME_LEN);
@@ -376,7 +377,7 @@
sched->ws_starting++;
spin_unlock(&cfs_wi_data.wi_glock);
- if (sched->ws_cptab != NULL && sched->ws_cpt >= 0) {
+ if (sched->ws_cptab && sched->ws_cpt >= 0) {
snprintf(name, sizeof(name), "%s_%02d_%02u",
sched->ws_name, sched->ws_cpt,
sched->ws_nthreads);
@@ -455,7 +456,7 @@
}
while (!list_empty(&cfs_wi_data.wi_scheds)) {
sched = list_entry(cfs_wi_data.wi_scheds.next,
- struct cfs_wi_sched, ws_list);
+ struct cfs_wi_sched, ws_list);
list_del(&sched->ws_list);
LIBCFS_FREE(sched, sizeof(*sched));
}
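
The teardown loop above pops the list head-first because each iteration
frees the entry it removes. The same idiom can be written with the
_safe iterator; a self-contained sketch with illustrative names:

    #include <linux/list.h>
    #include <linux/slab.h>

    struct item {
            struct list_head it_link;
    };

    static void drain(struct list_head *head)
    {
            struct item *it, *tmp;

            /* _safe variant: "tmp" keeps the walk valid across kfree() */
            list_for_each_entry_safe(it, tmp, head, it_link) {
                    list_del(&it->it_link);
                    kfree(it);
            }
    }
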
diff --git a/drivers/staging/lustre/lustre/llite/dcache.c b/drivers/staging/lustre/lustre/llite/dcache.c
index bc179e5..0bc0fb9f 100644
--- a/drivers/staging/lustre/lustre/llite/dcache.c
+++ b/drivers/staging/lustre/lustre/llite/dcache.c
@@ -60,7 +60,7 @@
{
struct ll_dentry_data *lld;
- LASSERT(de != NULL);
+ LASSERT(de);
lld = ll_d2d(de);
if (!lld) /* NFS copies the de->d_op methods (bug 4655) */
return;
@@ -80,7 +80,8 @@
* This avoids a race where ll_lookup_it() instantiates a dentry, but we get
* an AST before calling d_revalidate_it(). The dentry still exists (marked
* INVALID) so d_lookup() matches it, but we have no lock on it (so
- * lock_match() fails) and we spin around real_lookup(). */
+ * lock_match() fails) and we spin around real_lookup().
+ */
static int ll_dcompare(const struct dentry *parent, const struct dentry *dentry,
unsigned int len, const char *str,
const struct qstr *name)
@@ -117,7 +118,8 @@
/* find any ldlm lock of the inode in mdc and lov
* return 0 not find
* 1 find one
- * < 0 error */
+ * < 0 error
+ */
static int find_cbdata(struct inode *inode)
{
struct ll_sb_info *sbi = ll_i2sbi(inode);
@@ -163,10 +165,12 @@
/* Disable this piece of code temporarily because this is called
* inside dcache_lock so it's not appropriate to do lots of work
* here. ATTENTION: Before this piece of code enabling, LU-2487 must be
- * resolved. */
+ * resolved.
+ */
#if 0
/* if not ldlm lock for this inode, set i_nlink to 0 so that
- * this inode can be recycled later b=20433 */
+ * this inode can be recycled later b=20433
+ */
if (d_really_is_positive(de) && !find_cbdata(d_inode(de)))
clear_nlink(d_inode(de));
#endif
@@ -178,8 +182,6 @@
int ll_d_init(struct dentry *de)
{
- LASSERT(de != NULL);
-
CDEBUG(D_DENTRY, "ldd on dentry %pd (%p) parent %p inode %p refc %d\n",
de, de, de->d_parent, d_inode(de),
d_count(de));
@@ -218,7 +220,8 @@
ldlm_lock_decref(&handle, it->d.lustre.it_lock_mode);
/* bug 494: intent_release may be called multiple times, from
- * this thread and we don't want to double-decref this lock */
+ * this thread and we don't want to double-decref this lock
+ */
it->d.lustre.it_lock_mode = 0;
if (it->d.lustre.it_remote_lock_mode != 0) {
handle.cookie = it->d.lustre.it_remote_lock_handle;
@@ -251,8 +254,6 @@
{
struct dentry *dentry;
- LASSERT(inode != NULL);
-
CDEBUG(D_INODE, "marking dentries for ino %lu/%u(%p) invalid\n",
inode->i_ino, inode->i_generation, inode);
@@ -286,9 +287,7 @@
void ll_lookup_finish_locks(struct lookup_intent *it, struct inode *inode)
{
- LASSERT(it != NULL);
-
- if (it->d.lustre.it_lock_mode && inode != NULL) {
+ if (it->d.lustre.it_lock_mode && inode) {
struct ll_sb_info *sbi = ll_i2sbi(inode);
CDEBUG(D_DLMTRACE, "setting l_data to inode %p (%lu/%u)\n",
@@ -300,7 +299,8 @@
if (it->it_op == IT_LOOKUP || it->it_op == IT_GETATTR) {
/* on 2.6 there are situation when several lookups and
* revalidations may be requested during single operation.
- * therefore, we don't release intent here -bzzz */
+ * therefore, we don't release intent here -bzzz
+ */
ll_intent_drop_lock(it);
}
}
diff --git a/drivers/staging/lustre/lustre/llite/dir.c b/drivers/staging/lustre/lustre/llite/dir.c
index 36b1055..bd88a3b 100644
--- a/drivers/staging/lustre/lustre/llite/dir.c
+++ b/drivers/staging/lustre/lustre/llite/dir.c
@@ -190,8 +190,6 @@
} else if (rc == 0) {
body = req_capsule_server_get(&request->rq_pill, &RMF_MDT_BODY);
/* Checked by mdc_readpage() */
- LASSERT(body != NULL);
-
if (body->valid & OBD_MD_FLSIZE)
cl_isize_write(inode, body->size);
@@ -245,7 +243,7 @@
kunmap(page);
if (remove) {
lock_page(page);
- if (likely(page->mapping != NULL))
+ if (likely(page->mapping))
truncate_complete_page(page->mapping, page);
unlock_page(page);
}
@@ -334,7 +332,7 @@
struct lustre_handle lockh;
struct lu_dirpage *dp;
struct page *page;
- ldlm_mode_t mode;
+ enum ldlm_mode mode;
int rc;
__u64 start = 0;
__u64 end = 0;
@@ -381,7 +379,8 @@
&it.d.lustre.it_lock_handle, dir, NULL);
} else {
/* for cross-ref object, l_ast_data of the lock may not be set,
- * we reset it here */
+ * we reset it here
+ */
md_set_lock_data(ll_i2sbi(dir)->ll_md_exp, &lockh.cookie,
dir, NULL);
}
@@ -393,7 +392,7 @@
CERROR("dir page locate: "DFID" at %llu: rc %ld\n",
PFID(ll_inode2fid(dir)), lhash, PTR_ERR(page));
goto out_unlock;
- } else if (page != NULL) {
+ } else if (page) {
/*
* XXX nikita: not entirely correct handling of a corner case:
* suppose hash chain of entries with hash value HASH crosses
@@ -499,7 +498,7 @@
__u64 next;
dp = page_address(page);
- for (ent = lu_dirent_start(dp); ent != NULL && !done;
+ for (ent = lu_dirent_start(dp); ent && !done;
ent = lu_dirent_next(ent)) {
__u16 type;
int namelen;
@@ -689,7 +688,7 @@
struct obd_device *mgc = lsi->lsi_mgc;
int lum_size;
- if (lump != NULL) {
+ if (lump) {
/*
* This is coming from userspace, so should be in
* local endian. But the MDS would like it in little
@@ -725,7 +724,7 @@
if (IS_ERR(op_data))
return PTR_ERR(op_data);
- if (lump != NULL && lump->lmm_magic == cpu_to_le32(LMV_USER_MAGIC))
+ if (lump && lump->lmm_magic == cpu_to_le32(LMV_USER_MAGIC))
op_data->op_cli_flags |= CLI_SET_MEA;
/* swabbing is done in lov_setstripe() on server side */
@@ -739,8 +738,9 @@
}
/* In the following we use the fact that LOV_USER_MAGIC_V1 and
- LOV_USER_MAGIC_V3 have the same initial fields so we do not
- need to make the distinction between the 2 versions */
+ * LOV_USER_MAGIC_V3 have the same initial fields so we do not
+ * need to make the distinction between the 2 versions
+ */
if (set_default && mgc->u.cli.cl_mgc_mgsexp) {
char *param = NULL;
char *buf;
@@ -812,7 +812,6 @@
}
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- LASSERT(body != NULL);
lmmsize = body->eadatasize;
@@ -824,7 +823,6 @@
lmm = req_capsule_server_sized_get(&req->rq_pill,
&RMF_MDT_MD, lmmsize);
- LASSERT(lmm != NULL);
/*
* This is coming from the MDS, so is probably in
@@ -933,7 +931,8 @@
}
/* Store it the hsm_copy for later copytool use.
- * Always modified even if no lsm. */
+ * Always modified even if no lsm.
+ */
copy->hc_data_version = data_version;
}
@@ -1010,11 +1009,13 @@
}
/* Store it the hsm_copy for later copytool use.
- * Always modified even if no lsm. */
+ * Always modified even if no lsm.
+ */
hpk.hpk_data_version = data_version;
/* File could have been stripped during archiving, so we need
- * to check anyway. */
+ * to check anyway.
+ */
if ((copy->hc_hai.hai_action == HSMA_ARCHIVE) &&
(copy->hc_data_version != data_version)) {
CDEBUG(D_HSM, "File data version mismatched. File content was changed during archiving. "
@@ -1026,7 +1027,8 @@
* the cdt will loop on retried archive requests.
* The policy engine will ask for a new archive later
* when the file will not be modified for some tunable
- * time */
+ * time
+ */
/* we do not notify caller */
hpk.hpk_flags &= ~HP_FLAG_RETRY;
/* hpk_errval must be >= 0 */
@@ -1154,7 +1156,8 @@
return rc;
}
/* If QIF_SPACE is not set, client should collect the
- * space usage from OSSs by itself */
+ * space usage from OSSs by itself
+ */
if (cmd == Q_GETQUOTA &&
!(oqctl->qc_dqblk.dqb_valid & QIF_SPACE) &&
!oqctl->qc_dqblk.dqb_curspace) {
@@ -1205,7 +1208,8 @@
/* This function tries to get a single name component,
* to send to the server. No actual path traversal involved,
- * so we limit to NAME_MAX */
+ * so we limit to NAME_MAX
+ */
static char *ll_getname(const char __user *filename)
{
int ret = 0, len;
@@ -1326,7 +1330,7 @@
return rc;
data = (void *)buf;
- if (data->ioc_inlbuf1 == NULL || data->ioc_inlbuf2 == NULL ||
+ if (!data->ioc_inlbuf1 || !data->ioc_inlbuf2 ||
data->ioc_inllen1 == 0 || data->ioc_inllen2 == 0) {
rc = -EINVAL;
goto lmv_out_free;
@@ -1461,7 +1465,7 @@
if (request) {
body = req_capsule_server_get(&request->rq_pill,
&RMF_MDT_BODY);
- LASSERT(body != NULL);
+ LASSERT(body);
} else {
goto out_req;
}
@@ -1539,7 +1543,7 @@
return rc;
lmm = libcfs_kvzalloc(lmmsize, GFP_NOFS);
- if (lmm == NULL)
+ if (!lmm)
return -ENOMEM;
if (copy_from_user(lmm, lum, lmmsize)) {
rc = -EFAULT;
@@ -1688,7 +1692,6 @@
if (sbi->ll_flags & LL_SBI_RMT_CLIENT && is_root_inode(inode)) {
struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
- LASSERT(fd != NULL);
rc = rct_add(&sbi->ll_rct, current_pid(), arg);
if (!rc)
fd->fd_flags |= LL_FILE_RMTACL;
@@ -1757,7 +1760,7 @@
return -E2BIG;
hur = libcfs_kvzalloc(totalsize, GFP_NOFS);
- if (hur == NULL)
+ if (!hur)
return -ENOMEM;
/* Copy the whole struct */
@@ -1808,7 +1811,8 @@
hpk.hpk_data_version = 0;
/* File may not exist in Lustre; all progress
- * reported to Lustre root */
+ * reported to Lustre root
+ */
rc = obd_iocontrol(cmd, sbi->ll_md_exp, sizeof(hpk), &hpk,
NULL);
return rc;
diff --git a/drivers/staging/lustre/lustre/llite/file.c b/drivers/staging/lustre/lustre/llite/file.c
index 132d19b..4c25cf2 100644
--- a/drivers/staging/lustre/lustre/llite/file.c
+++ b/drivers/staging/lustre/lustre/llite/file.c
@@ -65,7 +65,7 @@
struct ll_file_data *fd;
fd = kmem_cache_alloc(ll_file_data_slab, GFP_NOFS | __GFP_ZERO);
- if (fd == NULL)
+ if (!fd)
return NULL;
fd->fd_write_failed = false;
return fd;
@@ -73,7 +73,7 @@
static void ll_file_data_put(struct ll_file_data *fd)
{
- if (fd != NULL)
+ if (fd)
kmem_cache_free(ll_file_data_slab, fd);
}
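
ll_file_data objects come from a dedicated slab, and the put side keeps
its NULL guard because kmem_cache_free(), unlike kfree(), is not
guaranteed to accept NULL. The alloc/free pair in miniature, with
illustrative names:

    #include <linux/slab.h>

    struct sample_fd {
            bool sf_failed;
    };

    static struct kmem_cache *sample_slab;

    static int sample_cache_init(void)
    {
            sample_slab = kmem_cache_create("sample_fd",
                                            sizeof(struct sample_fd),
                                            0, 0, NULL);
            return sample_slab ? 0 : -ENOMEM;
    }

    static struct sample_fd *sample_get(void)
    {
            /* __GFP_ZERO spares an explicit memset of the new object */
            return kmem_cache_alloc(sample_slab, GFP_NOFS | __GFP_ZERO);
    }

    static void sample_put(struct sample_fd *fd)
    {
            if (fd)
                    kmem_cache_free(sample_slab, fd);
    }
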
@@ -134,7 +134,7 @@
int epoch_close = 1;
int rc;
- if (obd == NULL) {
+ if (!obd) {
/*
* XXX: in case of LMV, is this correct to access
* ->exp_handle?
@@ -153,7 +153,7 @@
}
ll_prepare_close(inode, op_data, och);
- if (data_version != NULL) {
+ if (data_version) {
/* Pass in data_version implies release. */
op_data->op_bias |= MDS_HSM_RELEASE;
op_data->op_data_version = *data_version;
@@ -166,7 +166,8 @@
/* This close must have the epoch closed. */
LASSERT(epoch_close);
/* MDS has instructed us to obtain Size-on-MDS attribute from
- * OSTs and send setattr to back to MDS. */
+ * OSTs and send setattr to back to MDS.
+ */
rc = ll_som_update(inode, op_data);
if (rc) {
CERROR("inode %lu mdc Size-on-MDS update failed: rc = %d\n",
@@ -179,7 +180,8 @@
}
/* DATA_MODIFIED flag was successfully sent on close, cancel data
- * modification flag. */
+ * modification flag.
+ */
if (rc == 0 && (op_data->op_bias & MDS_DATA_MODIFIED)) {
struct ll_inode_info *lli = ll_i2info(inode);
@@ -242,7 +244,8 @@
mutex_lock(&lli->lli_och_mutex);
if (*och_usecount > 0) {
/* There are still users of this handle, so skip
- * freeing it. */
+ * freeing it.
+ */
mutex_unlock(&lli->lli_och_mutex);
return 0;
}
@@ -251,9 +254,10 @@
*och_p = NULL;
mutex_unlock(&lli->lli_och_mutex);
- if (och != NULL) {
+ if (och) {
/* There might be a race and this handle may already
- be closed. */
+ * be closed.
+ */
rc = ll_close_inode_openhandle(ll_i2sbi(inode)->ll_md_exp,
inode, och, NULL);
}
@@ -276,11 +280,12 @@
if (unlikely(fd->fd_flags & LL_FILE_GROUP_LOCKED))
ll_put_grouplock(inode, file, fd->fd_grouplock.cg_gid);
- if (fd->fd_lease_och != NULL) {
+ if (fd->fd_lease_och) {
bool lease_broken;
/* Usually the lease is not released when the
- * application crashed, we need to release here. */
+ * application crashed, we need to release here.
+ */
rc = ll_lease_close(fd->fd_lease_och, inode, &lease_broken);
CDEBUG(rc ? D_ERROR : D_INODE, "Clean up lease "DFID" %d/%d\n",
PFID(&lli->lli_fid), rc, lease_broken);
@@ -288,14 +293,15 @@
fd->fd_lease_och = NULL;
}
- if (fd->fd_och != NULL) {
+ if (fd->fd_och) {
rc = ll_close_inode_openhandle(md_exp, inode, fd->fd_och, NULL);
fd->fd_och = NULL;
goto out;
}
/* Let's see if we have good enough OPEN lock on the file and if
- we can skip talking to MDS */
+ * we can skip talking to MDS
+ */
mutex_lock(&lli->lli_och_mutex);
if (fd->fd_omode & FMODE_WRITE) {
@@ -343,7 +349,6 @@
if (sbi->ll_flags & LL_SBI_RMT_CLIENT && is_root_inode(inode)) {
struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
- LASSERT(fd != NULL);
if (unlikely(fd->fd_flags & LL_FILE_RMTACL)) {
fd->fd_flags &= ~LL_FILE_RMTACL;
rct_del(&sbi->ll_rct, current_pid());
@@ -355,11 +360,12 @@
if (!is_root_inode(inode))
ll_stats_ops_tally(sbi, LPROC_LL_RELEASE, 1);
fd = LUSTRE_FPRIVATE(file);
- LASSERT(fd != NULL);
+ LASSERT(fd);
- /* The last ref on @file, maybe not the owner pid of statahead.
+ /* The last ref on @file, may not be the owner pid of statahead.
* Different processes can open the same dir, "ll_opendir_key" means:
- * it is me that should stop the statahead thread. */
+ * it is me that should stop the statahead thread.
+ */
if (S_ISDIR(inode->i_mode) && lli->lli_opendir_key == fd &&
lli->lli_opendir_pid != 0)
ll_stop_statahead(inode, lli->lli_opendir_key);
@@ -396,16 +402,16 @@
__u32 opc = LUSTRE_OPC_ANY;
int rc;
- /* Usually we come here only for NFSD, and we want open lock.
- But we can also get here with pre 2.6.15 patchless kernels, and in
- that case that lock is also ok */
+ /* Usually we come here only for NFSD, and we want open lock. */
/* We can also get here if there was cached open handle in revalidate_it
* but it disappeared while we were getting from there to ll_file_open.
* But this means this file was closed and immediately opened which
- * makes a good candidate for using OPEN lock */
+ * makes a good candidate for using OPEN lock
+ */
/* If lmmsize & lmm are not 0, we are just setting stripe info
- * parameters. No need for the open lock */
- if (lmm == NULL && lmmsize == 0) {
+ * parameters. No need for the open lock
+ */
+ if (!lmm && lmmsize == 0) {
itp->it_flags |= MDS_OPEN_LOCK;
if (itp->it_flags & FMODE_WRITE)
opc = LUSTRE_OPC_CREATE;
@@ -492,7 +498,7 @@
LASSERT(!LUSTRE_FPRIVATE(file));
- LASSERT(fd != NULL);
+ LASSERT(fd);
if (och) {
struct ptlrpc_request *req = it->d.lustre.it_data;
@@ -543,7 +549,7 @@
file->private_data = NULL; /* prevent ll_local_open assertion */
fd = ll_file_data_get();
- if (fd == NULL) {
+ if (!fd) {
rc = -ENOMEM;
goto out_openerr;
}
@@ -551,7 +557,7 @@
fd->fd_file = file;
if (S_ISDIR(inode->i_mode)) {
spin_lock(&lli->lli_sa_lock);
- if (lli->lli_opendir_key == NULL && lli->lli_sai == NULL &&
+ if (!lli->lli_opendir_key && !lli->lli_sai &&
lli->lli_opendir_pid == 0) {
lli->lli_opendir_key = fd;
lli->lli_opendir_pid = current_pid();
@@ -568,7 +574,8 @@
if (!it || !it->d.lustre.it_disposition) {
/* Convert f_flags into access mode. We cannot use file->f_mode,
* because everything but O_ACCMODE mask was stripped from
- * there */
+ * there
+ */
if ((oit.it_flags + 1) & O_ACCMODE)
oit.it_flags++;
if (file->f_flags & O_TRUNC)
@@ -577,17 +584,20 @@
/* kernel only call f_op->open in dentry_open. filp_open calls
* dentry_open after call to open_namei that checks permissions.
* Only nfsd_open call dentry_open directly without checking
- * permissions and because of that this code below is safe. */
+ * permissions and because of that this code below is safe.
+ */
if (oit.it_flags & (FMODE_WRITE | FMODE_READ))
oit.it_flags |= MDS_OPEN_OWNEROVERRIDE;
/* We do not want O_EXCL here, presumably we opened the file
- * already? XXX - NFS implications? */
+ * already? XXX - NFS implications?
+ */
oit.it_flags &= ~O_EXCL;
/* bug20584, if "it_flags" contains O_CREAT, the file will be
* created if necessary, then "IT_CREAT" should be set to keep
- * consistent with it */
+ * consistent with it
+ */
if (oit.it_flags & O_CREAT)
oit.it_op |= IT_CREAT;
@@ -611,7 +621,8 @@
if (*och_p) { /* Open handle is present */
if (it_disposition(it, DISP_OPEN_OPEN)) {
/* Well, there's extra open request that we do not need,
- let's close it somehow. This will decref request. */
+ * let's close it somehow. This will decref request.
+ */
rc = it_open_error(DISP_OPEN_OPEN, it);
if (rc) {
mutex_unlock(&lli->lli_och_mutex);
@@ -632,10 +643,11 @@
LASSERT(*och_usecount == 0);
if (!it->d.lustre.it_disposition) {
/* We cannot just request lock handle now, new ELC code
- means that one of other OPEN locks for this file
- could be cancelled, and since blocking ast handler
- would attempt to grab och_mutex as well, that would
- result in a deadlock */
+ * means that one of other OPEN locks for this file
+ * could be cancelled, and since blocking ast handler
+ * would attempt to grab och_mutex as well, that would
+ * result in a deadlock
+ */
mutex_unlock(&lli->lli_och_mutex);
it->it_create_mode |= M_CHECK_STALE;
rc = ll_intent_file_open(file->f_path.dentry, NULL, 0, it);
@@ -655,9 +667,11 @@
/* md_intent_lock() didn't get a request ref if there was an
* open error, so don't do cleanup on the request here
- * (bug 3430) */
+ * (bug 3430)
+ */
/* XXX (green): Should not we bail out on any error here, not
- * just open error? */
+ * just open error?
+ */
rc = it_open_error(DISP_OPEN_OPEN, it);
if (rc)
goto out_och_free;
@@ -672,8 +686,9 @@
fd = NULL;
/* Must do this outside lli_och_mutex lock to prevent deadlock where
- different kind of OPEN lock for this same inode gets cancelled
- by ldlm_cancel_lru */
+ * different kind of OPEN lock for this same inode gets cancelled
+ * by ldlm_cancel_lru
+ */
if (!S_ISREG(inode->i_mode))
goto out_och_free;
@@ -752,7 +767,7 @@
if (fmode != FMODE_WRITE && fmode != FMODE_READ)
return ERR_PTR(-EINVAL);
- if (file != NULL) {
+ if (file) {
struct ll_inode_info *lli = ll_i2info(inode);
struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
struct obd_client_handle **och_p;
@@ -764,18 +779,18 @@
/* Get the openhandle of the file */
rc = -EBUSY;
mutex_lock(&lli->lli_och_mutex);
- if (fd->fd_lease_och != NULL) {
+ if (fd->fd_lease_och) {
mutex_unlock(&lli->lli_och_mutex);
return ERR_PTR(rc);
}
- if (fd->fd_och == NULL) {
+ if (!fd->fd_och) {
if (file->f_mode & FMODE_WRITE) {
- LASSERT(lli->lli_mds_write_och != NULL);
+ LASSERT(lli->lli_mds_write_och);
och_p = &lli->lli_mds_write_och;
och_usecount = &lli->lli_open_fd_write_count;
} else {
- LASSERT(lli->lli_mds_read_och != NULL);
+ LASSERT(lli->lli_mds_read_och);
och_p = &lli->lli_mds_read_och;
och_usecount = &lli->lli_open_fd_read_count;
}
@@ -790,7 +805,7 @@
if (rc < 0) /* more than 1 opener */
return ERR_PTR(rc);
- LASSERT(fd->fd_och != NULL);
+ LASSERT(fd->fd_och);
old_handle = fd->fd_och->och_fh;
}
@@ -817,7 +832,8 @@
* broken;
* LDLM_FL_EXCL: Set this flag so that it won't be matched by normal
* open in ll_md_blocking_ast(). Otherwise as ll_md_blocking_lease_ast
- * doesn't deal with openhandle, so normal openhandle will be leaked. */
+ * doesn't deal with openhandle, so normal openhandle will be leaked.
+ */
LDLM_FL_NO_LRU | LDLM_FL_EXCL);
ll_finish_md_op_data(op_data);
ptlrpc_req_finished(req);
@@ -886,7 +902,7 @@
int rc;
lock = ldlm_handle2lock(&och->och_lease_handle);
- if (lock != NULL) {
+ if (lock) {
lock_res_and_lock(lock);
cancelled = ldlm_is_cancel(lock);
unlock_res_and_lock(lock);
@@ -898,7 +914,7 @@
if (!cancelled)
ldlm_cli_cancel(&och->och_lease_handle, 0);
- if (lease_broken != NULL)
+ if (lease_broken)
*lease_broken = cancelled;
rc = ll_close_inode_openhandle(ll_i2sbi(inode)->ll_md_exp, inode, och,
@@ -914,7 +930,7 @@
struct obd_info oinfo = { };
int rc;
- LASSERT(lsm != NULL);
+ LASSERT(lsm);
oinfo.oi_md = lsm;
oinfo.oi_oa = obdo;
@@ -933,7 +949,7 @@
}
set = ptlrpc_prep_set();
- if (set == NULL) {
+ if (!set) {
CERROR("can't allocate ptlrpc set\n");
rc = -ENOMEM;
} else {
@@ -986,7 +1002,8 @@
ll_inode_size_lock(inode);
/* merge timestamps the most recently obtained from mds with
- timestamps obtained from osts */
+ * timestamps obtained from osts
+ */
LTIME_S(inode->i_atime) = lli->lli_lvb.lvb_atime;
LTIME_S(inode->i_mtime) = lli->lli_lvb.lvb_mtime;
LTIME_S(inode->i_ctime) = lli->lli_lvb.lvb_ctime;
@@ -1155,7 +1172,8 @@
out:
cl_io_fini(env, io);
/* If any bit been read/written (result != 0), we just return
- * short read/write instead of restart io. */
+ * short read/write instead of restart io.
+ */
if ((result == 0 || result == -ENODATA) && io->ci_need_restart) {
CDEBUG(D_VFSTRACE, "Restart %s on %pD from %lld, count:%zd\n",
iot == CIT_READ ? "read" : "write",
@@ -1261,7 +1279,7 @@
struct lov_stripe_md *lsm = NULL, *lsm2;
oa = kmem_cache_alloc(obdo_cachep, GFP_NOFS | __GFP_ZERO);
- if (oa == NULL)
+ if (!oa)
return -ENOMEM;
lsm = ccc_inode_lsm_get(inode);
@@ -1274,7 +1292,7 @@
(lsm->lsm_stripe_count));
lsm2 = libcfs_kvzalloc(lsm_size, GFP_NOFS);
- if (lsm2 == NULL) {
+ if (!lsm2) {
rc = -ENOMEM;
goto out;
}
@@ -1341,7 +1359,7 @@
int rc = 0;
lsm = ccc_inode_lsm_get(inode);
- if (lsm != NULL) {
+ if (lsm) {
ccc_inode_lsm_put(inode, lsm);
CDEBUG(D_IOCTL, "stripe already exists for ino %lu\n",
inode->i_ino);
@@ -1401,7 +1419,6 @@
}
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- LASSERT(body != NULL); /* checked by mdc_getattr_name */
lmmsize = body->eadatasize;
@@ -1412,7 +1429,6 @@
}
lmm = req_capsule_server_sized_get(&req->rq_pill, &RMF_MDT_MD, lmmsize);
- LASSERT(lmm != NULL);
if ((lmm->lmm_magic != cpu_to_le32(LOV_MAGIC_V1)) &&
(lmm->lmm_magic != cpu_to_le32(LOV_MAGIC_V3))) {
@@ -1433,7 +1449,8 @@
stripe_count = 0;
/* if function called for directory - we should
- * avoid swab not existent lsm objects */
+ * avoid swabbing non-existent lsm objects
+ */
if (lmm->lmm_magic == cpu_to_le32(LOV_MAGIC_V1)) {
lustre_swab_lov_user_md_v1((struct lov_user_md_v1 *)lmm);
if (S_ISREG(body->mode))
@@ -1469,7 +1486,7 @@
return -EPERM;
lump = libcfs_kvzalloc(lum_size, GFP_NOFS);
- if (lump == NULL)
+ if (!lump)
return -ENOMEM;
if (copy_from_user(lump, (struct lov_user_md __user *)arg, lum_size)) {
@@ -1530,7 +1547,7 @@
int rc = -ENODATA;
lsm = ccc_inode_lsm_get(inode);
- if (lsm != NULL)
+ if (lsm)
rc = obd_iocontrol(LL_IOC_LOV_GETSTRIPE, ll_i2dtexp(inode), 0,
lsm, (void __user *)arg);
ccc_inode_lsm_put(inode, lsm);
@@ -1560,7 +1577,7 @@
spin_unlock(&lli->lli_lock);
return -EINVAL;
}
- LASSERT(fd->fd_grouplock.cg_lock == NULL);
+ LASSERT(!fd->fd_grouplock.cg_lock);
spin_unlock(&lli->lli_lock);
rc = cl_get_grouplock(cl_i2info(inode)->lli_clob,
@@ -1597,7 +1614,7 @@
CWARN("no group lock held\n");
return -EINVAL;
}
- LASSERT(fd->fd_grouplock.cg_lock != NULL);
+ LASSERT(fd->fd_grouplock.cg_lock);
if (fd->fd_grouplock.cg_gid != arg) {
CWARN("group lock %lu doesn't match current id %lu\n",
@@ -1688,7 +1705,7 @@
}
lsm = ccc_inode_lsm_get(inode);
- if (lsm == NULL)
+ if (!lsm)
return -ENOENT;
/* If the stripe_count > 1 and the application does not understand
@@ -1782,7 +1799,8 @@
int rc = 0;
/* Get the extent count so we can calculate the size of
- * required fiemap buffer */
+ * required fiemap buffer
+ */
if (get_user(extent_count,
&((struct ll_user_fiemap __user *)arg)->fm_extent_count))
return -EFAULT;
@@ -1794,7 +1812,7 @@
sizeof(struct ll_fiemap_extent));
fiemap_s = libcfs_kvzalloc(num_bytes, GFP_NOFS);
- if (fiemap_s == NULL)
+ if (!fiemap_s)
return -ENOMEM;
/* get the fiemap value */
@@ -1806,7 +1824,8 @@
/* If fm_extent_count is non-zero, read the first extent since
* it is used to calculate end_offset and device from previous
- * fiemap call. */
+ * fiemap call.
+ */
if (extent_count) {
if (copy_from_user(&fiemap_s->fm_extents[0],
(char __user *)arg + sizeof(*fiemap_s),
@@ -1917,13 +1936,14 @@
/* Release the file.
* NB: lease lock handle is released in mdc_hsm_release_pack() because
- * we still need it to pack l_remote_handle to MDT. */
+ * we still need it to pack l_remote_handle to MDT.
+ */
rc = ll_close_inode_openhandle(ll_i2sbi(inode)->ll_md_exp, inode, och,
&data_version);
och = NULL;
out:
- if (och != NULL && !IS_ERR(och)) /* close the file */
+ if (och && !IS_ERR(och)) /* close the file */
ll_lease_close(och, inode, NULL);
return rc;
@@ -2007,7 +2027,8 @@
}
/* to be able to restore mtime and atime after swap
- * we need to first save them */
+ * we need to first save them
+ */
if (lsl->sl_flags &
(SWAP_LAYOUTS_KEEP_MTIME | SWAP_LAYOUTS_KEEP_ATIME)) {
llss->ia1.ia_mtime = llss->inode1->i_mtime;
@@ -2019,7 +2040,8 @@
}
/* ultimate check, before swapping the layouts we check if
- * dataversion has changed (if requested) */
+ * dataversion has changed (if requested)
+ */
if (llss->check_dv1) {
rc = ll_data_version(llss->inode1, &dv, 0);
if (rc)
@@ -2042,9 +2064,11 @@
/* struct md_op_data is used to send the swap args to the mdt
* only flags is missing, so we use struct mdc_swap_layouts
- * through the md_op_data->op_data */
+ * through the md_op_data->op_data
+ */
/* flags from user space have to be converted before they are send to
- * server, no flag is sent today, they are only used on the client */
+ * server, no flag is sent today, they are only used on the client
+ */
msl.msl_flags = 0;
rc = -ENOMEM;
op_data = ll_prep_md_op_data(NULL, llss->inode1, llss->inode2, NULL, 0,
@@ -2113,7 +2137,8 @@
return -EINVAL;
/* Non-root users are forbidden to set or clear flags which are
- * NOT defined in HSM_USER_MASK. */
+ * NOT defined in HSM_USER_MASK.
+ */
if (((hss->hss_setmask | hss->hss_clearmask) & ~HSM_USER_MASK) &&
!capable(CFS_CAP_SYS_ADMIN))
return -EPERM;
@@ -2250,7 +2275,7 @@
return -EPERM;
file2 = fget(lsl.sl_fd);
- if (file2 == NULL)
+ if (!file2)
return -EBADF;
rc = -EPERM;
@@ -2413,13 +2438,13 @@
break;
case F_UNLCK:
mutex_lock(&lli->lli_och_mutex);
- if (fd->fd_lease_och != NULL) {
+ if (fd->fd_lease_och) {
och = fd->fd_lease_och;
fd->fd_lease_och = NULL;
}
mutex_unlock(&lli->lli_och_mutex);
- if (och != NULL) {
+ if (och) {
mode = och->och_flags &
(FMODE_READ|FMODE_WRITE);
rc = ll_lease_close(och, inode, &lease_broken);
@@ -2444,12 +2469,12 @@
rc = 0;
mutex_lock(&lli->lli_och_mutex);
- if (fd->fd_lease_och == NULL) {
+ if (!fd->fd_lease_och) {
fd->fd_lease_och = och;
och = NULL;
}
mutex_unlock(&lli->lli_och_mutex);
- if (och != NULL) {
+ if (och) {
/* impossible now that only excl is supported for now */
ll_lease_close(och, inode, &lease_broken);
rc = -EBUSY;
@@ -2462,11 +2487,11 @@
rc = 0;
mutex_lock(&lli->lli_och_mutex);
- if (fd->fd_lease_och != NULL) {
+ if (fd->fd_lease_och) {
struct obd_client_handle *och = fd->fd_lease_och;
lock = ldlm_handle2lock(&och->och_lease_handle);
- if (lock != NULL) {
+ if (lock) {
lock_res_and_lock(lock);
if (!ldlm_is_cancel(lock))
rc = och->och_flags &
@@ -2537,15 +2562,17 @@
LASSERT(!S_ISDIR(inode->i_mode));
/* catch async errors that were recorded back when async writeback
- * failed for pages in this mapping. */
+ * failed for pages in this mapping.
+ */
rc = lli->lli_async_rc;
lli->lli_async_rc = 0;
err = lov_read_and_clear_async_rc(lli->lli_clob);
if (rc == 0)
rc = err;
- /* The application has been told write failure already.
- * Do not report failure again. */
+ /* The application has been told about write failure already.
+ * Do not report failure again.
+ */
if (fd->fd_write_failed)
return 0;
return rc ? -EIO : 0;
@@ -2613,7 +2640,8 @@
inode_lock(inode);
/* catch async errors that were recorded back when async writeback
- * failed for pages in this mapping. */
+ * failed for pages in this mapping.
+ */
if (!S_ISDIR(inode->i_mode)) {
err = lli->lli_async_rc;
lli->lli_async_rc = 0;
@@ -2684,7 +2712,8 @@
* I guess between lockd processes) and then compares pid.
* As such we assign pid to the owner field to make it all work,
* conflict with normal locks is unlikely since pid space and
- * pointer space for current->files are not intersecting */
+ * pointer space for current->files are not intersecting
+ */
if (file_lock->fl_lmops && file_lock->fl_lmops->lm_compare_owner)
flock.l_flock.owner = (unsigned long)file_lock->fl_pid;
@@ -2700,7 +2729,8 @@
* order to process an unlock request we need all of the same
* information that is given with a normal read or write record
* lock request. To avoid creating another ldlm unlock (cancel)
- * message we'll treat a LCK_NL flock request as an unlock. */
+ * message we'll treat a LCK_NL flock request as an unlock.
+ */
einfo.ei_mode = LCK_NL;
break;
case F_WRLCK:
@@ -2731,7 +2761,8 @@
#endif
flags = LDLM_FL_TEST_LOCK;
/* Save the old mode so that if the mode in the lock changes we
- * can decrement the appropriate reader or writer refcount. */
+ * can decrement the appropriate reader or writer refcount.
+ */
file_lock->fl_type = einfo.ei_mode;
break;
default:
@@ -2783,11 +2814,12 @@
* \param l_req_mode [IN] searched lock mode
* \retval boolean, true iff all bits are found
*/
-int ll_have_md_lock(struct inode *inode, __u64 *bits, ldlm_mode_t l_req_mode)
+int ll_have_md_lock(struct inode *inode, __u64 *bits,
+ enum ldlm_mode l_req_mode)
{
struct lustre_handle lockh;
ldlm_policy_data_t policy;
- ldlm_mode_t mode = (l_req_mode == LCK_MINMODE) ?
+ enum ldlm_mode mode = (l_req_mode == LCK_MINMODE) ?
(LCK_CR|LCK_CW|LCK_PR|LCK_PW) : l_req_mode;
struct lu_fid *fid;
__u64 flags;
@@ -2823,13 +2855,13 @@
return *bits == 0;
}
-ldlm_mode_t ll_take_md_lock(struct inode *inode, __u64 bits,
- struct lustre_handle *lockh, __u64 flags,
- ldlm_mode_t mode)
+enum ldlm_mode ll_take_md_lock(struct inode *inode, __u64 bits,
+ struct lustre_handle *lockh, __u64 flags,
+ enum ldlm_mode mode)
{
ldlm_policy_data_t policy = { .l_inodebits = {bits} };
struct lu_fid *fid;
- ldlm_mode_t rc;
+ enum ldlm_mode rc;
fid = &ll_i2info(inode)->lli_fid;
CDEBUG(D_INFO, "trying to match res "DFID"\n", PFID(fid));
@@ -2867,8 +2899,6 @@
struct obd_export *exp;
int rc = 0;
- LASSERT(inode != NULL);
-
CDEBUG(D_VFSTRACE, "VFS Op:inode=%lu/%u(%p),name=%pd\n",
inode->i_ino, inode->i_generation, inode, dentry);
@@ -2876,7 +2906,8 @@
/* XXX: Enable OBD_CONNECT_ATTRFID to reduce unnecessary getattr RPC.
* But under CMD case, it caused some lock issues, should be fixed
- * with new CMD ibits lock. See bug 12718 */
+ * with new CMD ibits lock. See bug 12718
+ */
if (exp_connect_flags(exp) & OBD_CONNECT_ATTRFID) {
struct lookup_intent oit = { .it_op = IT_GETATTR };
struct md_op_data *op_data;
@@ -2894,7 +2925,8 @@
oit.it_create_mode |= M_CHECK_STALE;
rc = md_intent_lock(exp, op_data, NULL, 0,
/* we are not interested in name
- based lookup */
+ * based lookup
+ */
&oit, 0, &req,
ll_md_blocking_ast, 0);
ll_finish_md_op_data(op_data);
@@ -2911,9 +2943,10 @@
}
/* Unlinked? Unhash dentry, so it is not picked up later by
- do_lookup() -> ll_revalidate_it(). We cannot use d_drop
- here to preserve get_cwd functionality on 2.6.
- Bug 10503 */
+ * do_lookup() -> ll_revalidate_it(). We cannot use d_drop
+ * here to preserve get_cwd functionality on 2.6.
+ * Bug 10503
+ */
if (!d_inode(dentry)->i_nlink)
d_lustre_invalidate(dentry, 0);
@@ -3027,7 +3060,7 @@
sizeof(struct ll_fiemap_extent));
fiemap = libcfs_kvzalloc(num_bytes, GFP_NOFS);
- if (fiemap == NULL)
+ if (!fiemap)
return -ENOMEM;
fiemap->fm_flags = fieinfo->fi_flags;
@@ -3075,13 +3108,12 @@
{
int rc = 0;
-#ifdef MAY_NOT_BLOCK
if (mask & MAY_NOT_BLOCK)
return -ECHILD;
-#endif
/* as root inode are NOT getting validated in lookup operation,
- * need to do it before permission check. */
+ * need to do it before permission check.
+ */
if (is_root_inode(inode)) {
rc = __ll_inode_revalidate(inode->i_sb->s_root,
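
MAY_NOT_BLOCK has been unconditionally defined in <linux/fs.h> for a
long time, so the #ifdef was dead code; returning -ECHILD tells the VFS
to retry the RCU-walk lookup in ref-walk mode, where the handler is
allowed to sleep. Sketched as a generic ->permission hook with
illustrative names:

    #include <linux/fs.h>

    static int sample_permission(struct inode *inode, int mask)
    {
            if (mask & MAY_NOT_BLOCK)
                    return -ECHILD; /* retry in ref-walk mode */

            /* blocking revalidation would go here */
            return generic_permission(inode, mask);
    }
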
@@ -3181,8 +3213,7 @@
unsigned int size;
struct llioc_data *in_data = NULL;
- if (cb == NULL || cmd == NULL ||
- count > LLIOC_MAX_CMD || count < 0)
+ if (!cb || !cmd || count > LLIOC_MAX_CMD || count < 0)
return NULL;
size = sizeof(*in_data) + count * sizeof(unsigned int);
@@ -3208,7 +3239,7 @@
{
struct llioc_data *tmp;
- if (magic == NULL)
+ if (!magic)
return;
down_write(&llioc.ioc_sem);
@@ -3262,7 +3293,7 @@
struct lu_env *env;
int result;
- if (lli->lli_clob == NULL)
+ if (!lli->lli_clob)
return 0;
env = cl_env_nested_get(&nest);
@@ -3275,13 +3306,14 @@
if (conf->coc_opc == OBJECT_CONF_SET) {
struct ldlm_lock *lock = conf->coc_lock;
- LASSERT(lock != NULL);
+ LASSERT(lock);
LASSERT(ldlm_has_layout(lock));
if (result == 0) {
/* it can only be allowed to match after layout is
* applied to inode otherwise false layout would be
* seen. Applying layout should happen before dropping
- * the intent lock. */
+ * the intent lock.
+ */
ldlm_lock_allow_match(lock);
}
}
@@ -3304,14 +3336,15 @@
PFID(ll_inode2fid(inode)), !!(lock->l_flags & LDLM_FL_LVB_READY),
lock->l_lvb_data, lock->l_lvb_len);
- if ((lock->l_lvb_data != NULL) && (lock->l_flags & LDLM_FL_LVB_READY))
+ if (lock->l_lvb_data && (lock->l_flags & LDLM_FL_LVB_READY))
return 0;
/* if layout lock was granted right away, the layout is returned
* within DLM_LVB of dlm reply; otherwise if the lock was ever
* blocked and then granted via completion ast, we have to fetch
* layout here. Please note that we can't use the LVB buffer in
- * completion AST because it doesn't have a large enough buffer */
+ * completion AST because it doesn't have a large enough buffer
+ */
rc = ll_get_default_mdsize(sbi, &lmmsize);
if (rc == 0)
rc = md_getxattr(sbi->ll_md_exp, ll_inode2fid(inode),
@@ -3321,7 +3354,7 @@
return rc;
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EPROTO;
goto out;
}
@@ -3333,20 +3366,20 @@
}
lmm = req_capsule_server_sized_get(&req->rq_pill, &RMF_EADATA, lmmsize);
- if (lmm == NULL) {
+ if (!lmm) {
rc = -EFAULT;
goto out;
}
lvbdata = libcfs_kvzalloc(lmmsize, GFP_NOFS);
- if (lvbdata == NULL) {
+ if (!lvbdata) {
rc = -ENOMEM;
goto out;
}
memcpy(lvbdata, lmm, lmmsize);
lock_res_and_lock(lock);
- if (lock->l_lvb_data != NULL)
+ if (lock->l_lvb_data)
kvfree(lock->l_lvb_data);
lock->l_lvb_data = lvbdata;
@@ -3362,8 +3395,8 @@
* Apply the layout to the inode. Layout lock is held and will be released
* in this function.
*/
-static int ll_layout_lock_set(struct lustre_handle *lockh, ldlm_mode_t mode,
- struct inode *inode, __u32 *gen, bool reconf)
+static int ll_layout_lock_set(struct lustre_handle *lockh, enum ldlm_mode mode,
+ struct inode *inode, __u32 *gen, bool reconf)
{
struct ll_inode_info *lli = ll_i2info(inode);
struct ll_sb_info *sbi = ll_i2sbi(inode);
@@ -3377,7 +3410,7 @@
LASSERT(lustre_handle_is_used(lockh));
lock = ldlm_handle2lock(lockh);
- LASSERT(lock != NULL);
+ LASSERT(lock);
LASSERT(ldlm_has_layout(lock));
LDLM_DEBUG(lock, "File %p/"DFID" being reconfigured: %d.\n",
@@ -3390,12 +3423,14 @@
lvb_ready = !!(lock->l_flags & LDLM_FL_LVB_READY);
unlock_res_and_lock(lock);
/* checking lvb_ready is racy but this is okay. The worst case is
- * that multi processes may configure the file on the same time. */
+ * that multiple processes may configure the file at the same time.
+ */
if (lvb_ready || !reconf) {
rc = -ENODATA;
if (lvb_ready) {
/* layout_gen must be valid if layout lock is not
- * cancelled and stripe has already set */
+ * cancelled and the stripe has already been set
+ */
*gen = ll_layout_version_get(lli);
rc = 0;
}
@@ -3409,13 +3444,14 @@
/* for layout lock, lmm is returned in lock's lvb.
* lvb_data is immutable if the lock is held so it's safe to access it
* without res lock. See the description in ldlm_lock_decref_internal()
- * for the condition to free lvb_data of layout lock */
- if (lock->l_lvb_data != NULL) {
+ * for the condition to free lvb_data of layout lock
+ */
+ if (lock->l_lvb_data) {
rc = obd_unpackmd(sbi->ll_dt_exp, &md.lsm,
lock->l_lvb_data, lock->l_lvb_len);
if (rc >= 0) {
*gen = LL_LAYOUT_GEN_EMPTY;
- if (md.lsm != NULL)
+ if (md.lsm)
*gen = md.lsm->lsm_layout_gen;
rc = 0;
} else {
@@ -3428,7 +3464,8 @@
goto out;
/* set layout to file. Unlikely this will fail as old layout was
- * surely eliminated */
+ * surely eliminated
+ */
memset(&conf, 0, sizeof(conf));
conf.coc_opc = OBJECT_CONF_SET;
conf.coc_inode = inode;
@@ -3436,7 +3473,7 @@
conf.u.coc_md = &md;
rc = ll_layout_conf(inode, &conf);
- if (md.lsm != NULL)
+ if (md.lsm)
obd_free_memmd(sbi->ll_dt_exp, &md.lsm);
/* refresh layout failed, need to wait */
@@ -3485,7 +3522,7 @@
struct md_op_data *op_data;
struct lookup_intent it;
struct lustre_handle lockh;
- ldlm_mode_t mode;
+ enum ldlm_mode mode;
struct ldlm_enqueue_info einfo = {
.ei_type = LDLM_IBITS,
.ei_mode = LCK_CR,
@@ -3507,7 +3544,8 @@
again:
/* mostly layout lock is caching on the local side, so try to match
- * it before grabbing layout lock mutex. */
+ * it before grabbing layout lock mutex.
+ */
mode = ll_take_md_lock(inode, MDS_INODELOCK_LAYOUT, &lockh, 0,
LCK_CR | LCK_CW | LCK_PR | LCK_PW);
if (mode != 0) { /* hit cached lock */
@@ -3537,8 +3575,7 @@
rc = md_enqueue(sbi->ll_md_exp, &einfo, &it, op_data, &lockh,
NULL, 0, NULL, 0);
- if (it.d.lustre.it_data != NULL)
- ptlrpc_req_finished(it.d.lustre.it_data);
+ ptlrpc_req_finished(it.d.lustre.it_data);
it.d.lustre.it_data = NULL;
ll_finish_md_op_data(op_data);
diff --git a/drivers/staging/lustre/lustre/llite/llite_close.c b/drivers/staging/lustre/lustre/llite/llite_close.c
index 3f348a3..fcb6657 100644
--- a/drivers/staging/lustre/lustre/llite/llite_close.c
+++ b/drivers/staging/lustre/lustre/llite/llite_close.c
@@ -52,7 +52,7 @@
spin_lock(&lli->lli_lock);
lli->lli_flags |= LLIF_SOM_DIRTY;
- if (page != NULL && list_empty(&page->cpg_pending_linkage))
+ if (page && list_empty(&page->cpg_pending_linkage))
list_add(&page->cpg_pending_linkage,
&club->cob_pending_list);
spin_unlock(&lli->lli_lock);
@@ -65,7 +65,7 @@
int rc = 0;
spin_lock(&lli->lli_lock);
- if (page != NULL && !list_empty(&page->cpg_pending_linkage)) {
+ if (page && !list_empty(&page->cpg_pending_linkage)) {
list_del_init(&page->cpg_pending_linkage);
rc = 1;
}
@@ -76,7 +76,8 @@
/** Queues DONE_WRITING if
* - done writing is allowed;
- * - inode has no no dirty pages; */
+ * - inode has no dirty pages;
+ */
void ll_queue_done_writing(struct inode *inode, unsigned long flags)
{
struct ll_inode_info *lli = ll_i2info(inode);
@@ -106,7 +107,8 @@
* close() happen, epoch is closed as the inode is marked as
* LLIF_EPOCH_PENDING. When pages are written inode should not
* be inserted into the queue again, clear this flag to avoid
- * it. */
+ * it.
+ */
lli->lli_flags &= ~LLIF_DONE_WRITING;
wake_up(&lcq->lcq_waitq);
@@ -144,10 +146,11 @@
spin_lock(&lli->lli_lock);
if (!(list_empty(&club->cob_pending_list))) {
if (!(lli->lli_flags & LLIF_EPOCH_PENDING)) {
- LASSERT(*och != NULL);
- LASSERT(lli->lli_pending_och == NULL);
+ LASSERT(*och);
+ LASSERT(!lli->lli_pending_och);
/* Inode is dirty and there is no pending write done
- * request yet, DONE_WRITE is to be sent later. */
+ * request yet, DONE_WRITE is to be sent later.
+ */
lli->lli_flags |= LLIF_EPOCH_PENDING;
lli->lli_pending_och = *och;
spin_unlock(&lli->lli_lock);
@@ -159,7 +162,8 @@
if (flags & LLIF_DONE_WRITING) {
/* Some pages are still dirty, it is early to send
* DONE_WRITE. Wait until all pages will be flushed
- * and try DONE_WRITE again later. */
+ * and try DONE_WRITE again later.
+ */
LASSERT(!(lli->lli_flags & LLIF_DONE_WRITING));
lli->lli_flags |= LLIF_DONE_WRITING;
spin_unlock(&lli->lli_lock);
@@ -187,7 +191,8 @@
}
/* There is a pending DONE_WRITE -- close epoch with no
- * attribute change. */
+ * attribute change.
+ */
if (lli->lli_flags & LLIF_EPOCH_PENDING) {
spin_unlock(&lli->lli_lock);
goto out;
@@ -215,7 +220,7 @@
struct obdo *oa;
int rc;
- LASSERT(op_data != NULL);
+ LASSERT(op_data);
if (lli->lli_flags & LLIF_MDS_SIZE_LOCK)
CERROR("ino %lu/%u(flags %u) som valid it just after recovery\n",
inode->i_ino, inode->i_generation,
@@ -266,7 +271,7 @@
{
ll_ioepoch_close(inode, op_data, och, LLIF_DONE_WRITING);
/* If there is no @och, we do not do D_W yet. */
- if (*och == NULL)
+ if (!*och)
return;
ll_pack_inode2opdata(inode, op_data, &(*och)->och_fh);
@@ -289,13 +294,14 @@
ll_prepare_done_writing(inode, op_data, &och);
/* If there is no @och, we do not do D_W yet. */
- if (och == NULL)
+ if (!och)
goto out;
rc = md_done_writing(ll_i2sbi(inode)->ll_md_exp, op_data, NULL);
if (rc == -EAGAIN)
/* MDS has instructed us to obtain Size-on-MDS attribute from
- * OSTs and send setattr to back to MDS. */
+ * OSTs and send setattr to back to MDS.
+ */
rc = ll_som_update(inode, op_data);
else if (rc)
CERROR("inode %lu mdc done_writing failed: rc = %d\n",
diff --git a/drivers/staging/lustre/lustre/llite/llite_internal.h b/drivers/staging/lustre/lustre/llite/llite_internal.h
index 29c325d..f6921ff 100644
--- a/drivers/staging/lustre/lustre/llite/llite_internal.h
+++ b/drivers/staging/lustre/lustre/llite/llite_internal.h
@@ -93,9 +93,10 @@
gid_t lrp_gid;
uid_t lrp_fsuid;
gid_t lrp_fsgid;
- int lrp_access_perm; /* MAY_READ/WRITE/EXEC, this
- is access permission with
- lrp_fsuid/lrp_fsgid. */
+ int lrp_access_perm; /* MAY_READ/WRITE/EXEC, this
+ * is access permission with
+ * lrp_fsuid/lrp_fsgid.
+ */
};
enum lli_flags {
@@ -106,7 +107,8 @@
/* DONE WRITING is allowed. */
LLIF_DONE_WRITING = (1 << 2),
/* Sizeon-on-MDS attributes are changed. An attribute update needs to
- * be sent to MDS. */
+ * be sent to MDS.
+ */
LLIF_SOM_DIRTY = (1 << 3),
/* File data is modified. */
LLIF_DATA_MODIFIED = (1 << 4),
@@ -130,22 +132,23 @@
/* identifying fields for both metadata and data stacks. */
struct lu_fid lli_fid;
/* Parent fid for accessing default stripe data on parent directory
- * for allocating OST objects after a mknod() and later open-by-FID. */
+ * for allocating OST objects after a mknod() and later open-by-FID.
+ */
struct lu_fid lli_pfid;
- struct list_head lli_close_list;
- /* open count currently used by capability only, indicate whether
- * capability needs renewal */
- atomic_t lli_open_count;
+ struct list_head lli_close_list;
+
unsigned long lli_rmtperm_time;
/* handle is to be sent to MDS later on done_writing and setattr.
* Open handle data are needed for the recovery to reconstruct
- * the inode state on the MDS. XXX: recovery is not ready yet. */
+ * the inode state on the MDS. XXX: recovery is not ready yet.
+ */
struct obd_client_handle *lli_pending_och;
/* We need all three because every inode may be opened in different
- * modes */
+ * modes
+ */
struct obd_client_handle *lli_mds_read_och;
struct obd_client_handle *lli_mds_write_och;
struct obd_client_handle *lli_mds_exec_och;
@@ -162,7 +165,8 @@
spinlock_t lli_agl_lock;
/* Try to make the d::member and f::member are aligned. Before using
- * these members, make clear whether it is directory or not. */
+ * these members, make clear whether it is directory or not.
+ */
union {
/* for directory */
struct {
@@ -173,13 +177,15 @@
/* since parent-child threads can share the same @file
* struct, "opendir_key" is the token when dir close for
* case of parent exit before child -- it is me should
- * cleanup the dir readahead. */
+ * cleanup the dir readahead.
+ */
void *d_opendir_key;
struct ll_statahead_info *d_sai;
/* protect statahead stuff. */
spinlock_t d_sa_lock;
- /* "opendir_pid" is the token when lookup/revalid
- * -- I am the owner of dir statahead. */
+ /* "opendir_pid" is the token when lookup/revalidate
+ * -- I am the owner of dir statahead.
+ */
pid_t d_opendir_pid;
} d;
@@ -305,7 +311,8 @@
}
/* default to about 40meg of readahead on a given system. That much tied
- * up in 512k readahead requests serviced at 40ms each is about 1GB/s. */
+ * up in 512k readahead requests serviced at 40ms each is about 1GB/s.
+ */
#define SBI_DEFAULT_READAHEAD_MAX (40UL << (20 - PAGE_CACHE_SHIFT))
/* default to read-ahead full files smaller than 2MB on the second read */
@@ -344,11 +351,13 @@
unsigned long ria_end; /* end offset of read-ahead*/
/* If stride read pattern is detected, ria_stoff means where
* stride read is started. Note: for normal read-ahead, the
- * value here is meaningless, and also it will not be accessed*/
+ * value here is meaningless, and also it will not be accessed
+ */
pgoff_t ria_stoff;
/* ria_length and ria_pages are the length and pages length in the
* stride I/O mode. And they will also be used to check whether
- * it is stride I/O read-ahead in the read-ahead pages*/
+ * it is stride I/O read-ahead in the read-ahead pages
+ */
unsigned long ria_length;
unsigned long ria_pages;
};
@@ -455,7 +464,8 @@
struct ll_sb_info {
/* this protects pglist and ra_info. It isn't safe to
- * grab from interrupt contexts */
+ * grab from interrupt contexts
+ */
spinlock_t ll_lock;
spinlock_t ll_pp_extent_lock; /* pp_extent entry*/
spinlock_t ll_process_lock; /* ll_rw_process_info */
@@ -468,10 +478,8 @@
int ll_flags;
unsigned int ll_umounting:1,
ll_xattr_cache_enabled:1;
- struct list_head ll_conn_chain; /* per-conn chain of SBs */
struct lustre_client_ocd ll_lco;
- struct list_head ll_orphan_dentry_list; /*please don't ask -p*/
struct ll_close_queue *ll_lcq;
struct lprocfs_stats *ll_stats; /* lprocfs stats counter */
@@ -502,13 +510,16 @@
/* metadata stat-ahead */
unsigned int ll_sa_max; /* max statahead RPCs */
atomic_t ll_sa_total; /* statahead thread started
- * count */
+ * count
+ */
atomic_t ll_sa_wrong; /* statahead thread stopped for
- * low hit ratio */
+ * low hit ratio
+ */
atomic_t ll_agl_total; /* AGL thread started count */
- dev_t ll_sdev_orig; /* save s_dev before assign for
- * clustered nfs */
+ dev_t ll_sdev_orig; /* save s_dev before assign for
+ * clustered nfs
+ */
struct rmtacl_ctl_table ll_rct;
struct eacl_table ll_et;
__kernel_fsid_t ll_fsid;
@@ -619,13 +630,15 @@
__u32 fd_flags;
fmode_t fd_omode;
/* openhandle if lease exists for this file.
- * Borrow lli->lli_och_mutex to protect assignment */
+ * Borrow lli->lli_och_mutex to protect assignment
+ */
struct obd_client_handle *fd_lease_och;
struct obd_client_handle *fd_och;
struct file *fd_file;
/* Indicate whether need to report failure when close.
* true: failure is known, not report again.
- * false: unknown failure, should report. */
+ * false: unknown failure, should report.
+ */
bool fd_write_failed;
};
@@ -705,10 +718,10 @@
extern struct file_operations ll_file_operations_noflock;
extern const struct inode_operations ll_file_inode_operations;
int ll_have_md_lock(struct inode *inode, __u64 *bits,
- ldlm_mode_t l_req_mode);
-ldlm_mode_t ll_take_md_lock(struct inode *inode, __u64 bits,
- struct lustre_handle *lockh, __u64 flags,
- ldlm_mode_t mode);
+ enum ldlm_mode l_req_mode);
+enum ldlm_mode ll_take_md_lock(struct inode *inode, __u64 bits,
+ struct lustre_handle *lockh, __u64 flags,
+ enum ldlm_mode mode);
int ll_file_open(struct inode *inode, struct file *file);
int ll_file_release(struct inode *inode, struct file *file);
int ll_glimpse_ioctl(struct ll_sb_info *sbi,
@@ -913,7 +926,7 @@
struct vvp_thread_info *info;
info = lu_context_key_get(&env->le_ctx, &vvp_key);
- LASSERT(info != NULL);
+ LASSERT(info);
return info;
}
@@ -937,7 +950,7 @@
struct vvp_session *ses;
ses = lu_context_key_get(env->le_ses, &vvp_session_key);
- LASSERT(ses != NULL);
+ LASSERT(ses);
return ses;
}
@@ -968,7 +981,7 @@
loff_t offset = vmpage->index << PAGE_CACHE_SHIFT;
LASSERT(PageLocked(vmpage));
- if (mapping == NULL)
+ if (!mapping)
return;
ll_teardown_mmaps(mapping, offset, offset + PAGE_CACHE_SIZE);
@@ -993,7 +1006,7 @@
{
struct obd_device *obd = sbi->ll_md_exp->exp_obd;
- if (obd == NULL)
+ if (!obd)
LBUG();
return &obd->u.cli;
}
@@ -1018,7 +1031,7 @@
{
struct lu_fid *fid;
- LASSERT(inode != NULL);
+ LASSERT(inode);
fid = &ll_i2info(inode)->lli_fid;
return fid;
@@ -1107,39 +1120,44 @@
struct ll_statahead_info {
struct inode *sai_inode;
atomic_t sai_refcount; /* when access this struct, hold
- * refcount */
+ * refcount
+ */
unsigned int sai_generation; /* generation for statahead */
unsigned int sai_max; /* max ahead of lookup */
__u64 sai_sent; /* stat requests sent count */
__u64 sai_replied; /* stat requests which received
- * reply */
+ * reply
+ */
__u64 sai_index; /* index of statahead entry */
__u64 sai_index_wait; /* index of entry which is the
- * caller is waiting for */
+ * caller is waiting for
+ */
__u64 sai_hit; /* hit count */
__u64 sai_miss; /* miss count:
- * for "ls -al" case, it includes
- * hidden dentry miss;
- * for "ls -l" case, it does not
- * include hidden dentry miss.
- * "sai_miss_hidden" is used for
- * the later case.
- */
+ * for "ls -al" case, it includes
+ * hidden dentry miss;
+ * for "ls -l" case, it does not
+ * include hidden dentry miss.
+ * "sai_miss_hidden" is used for
+ * the latter case.
+ */
unsigned int sai_consecutive_miss; /* consecutive miss */
unsigned int sai_miss_hidden;/* "ls -al", but first dentry
- * is not a hidden one */
+ * is not a hidden one
+ */
unsigned int sai_skip_hidden;/* skipped hidden dentry count */
unsigned int sai_ls_all:1, /* "ls -al", do stat-ahead for
- * hidden entries */
+ * hidden entries
+ */
sai_agl_valid:1;/* AGL is valid for the dir */
- wait_queue_head_t sai_waitq; /* stat-ahead wait queue */
+ wait_queue_head_t sai_waitq; /* stat-ahead wait queue */
struct ptlrpc_thread sai_thread; /* stat-ahead thread */
struct ptlrpc_thread sai_agl_thread; /* AGL thread */
- struct list_head sai_entries; /* entry list */
- struct list_head sai_entries_received; /* entries returned */
- struct list_head sai_entries_stated; /* entries stated */
- struct list_head sai_entries_agl; /* AGL entries to be sent */
- struct list_head sai_cache[LL_SA_CACHE_SIZE];
+ struct list_head sai_entries; /* entry list */
+ struct list_head sai_entries_received; /* entries returned */
+ struct list_head sai_entries_stated; /* entries stated */
+ struct list_head sai_entries_agl; /* AGL entries to be sent */
+ struct list_head sai_cache[LL_SA_CACHE_SIZE];
spinlock_t sai_cache_lock[LL_SA_CACHE_SIZE];
atomic_t sai_cache_count; /* entry count in cache */
};
@@ -1171,8 +1189,8 @@
if (lli->lli_opendir_pid != current_pid())
return;
- LASSERT(ldd != NULL);
- if (sai != NULL)
+ LASSERT(ldd);
+ if (sai)
ldd->lld_sa_generation = sai->sai_generation;
}
@@ -1191,7 +1209,7 @@
return -EAGAIN;
/* statahead has been stopped */
- if (lli->lli_opendir_key == NULL)
+ if (!lli->lli_opendir_key)
return -EAGAIN;
ldd = ll_d2d(dentryp);
@@ -1313,13 +1331,15 @@
/** direct write pages */
struct ll_dio_pages {
/** page array to be written. we don't support
- * partial pages except the last one. */
+ * partial pages except the last one.
+ */
struct page **ldp_pages;
/* offset of each page */
loff_t *ldp_offsets;
/** if ldp_offsets is NULL, it means a sequential
* pages to be written, then this is the file offset
- * of the * first page. */
+ * of the first page.
+ */
loff_t ldp_start_offset;
/** how many bytes are to be written. */
size_t ldp_size;
@@ -1345,7 +1365,6 @@
struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
struct inode *inode = file_inode(file);
- LASSERT(fd != NULL);
return ((fd->fd_flags & LL_FILE_IGNORE_LOCK) ||
(ll_i2sbi(inode)->ll_flags & LL_SBI_NOLCK));
}
@@ -1362,7 +1381,8 @@
* remote MDT, where the object is, will grant
* UPDATE|PERM lock. The inode will be attached to both
* LOOKUP and PERM locks, so revoking either lock will
- * case the dcache being cleared */
+ * cause the dcache to be cleared
+ */
if (it->d.lustre.it_remote_lock_mode) {
handle.cookie = it->d.lustre.it_remote_lock_handle;
CDEBUG(D_DLMTRACE, "setting l_data to inode %p(%lu/%u) for remote lock %#llx\n",
@@ -1383,7 +1403,7 @@
it->d.lustre.it_lock_set = 1;
}
- if (bits != NULL)
+ if (bits)
*bits = it->d.lustre.it_lock_bits;
}
@@ -1401,14 +1421,14 @@
{
struct ll_dentry_data *lld = ll_d2d(dentry);
- return (lld == NULL) || lld->lld_invalid;
+ return !lld || lld->lld_invalid;
}
static inline void __d_lustre_invalidate(struct dentry *dentry)
{
struct ll_dentry_data *lld = ll_d2d(dentry);
- if (lld != NULL)
+ if (lld)
lld->lld_invalid = 1;
}
@@ -1442,7 +1462,7 @@
static inline void d_lustre_revalidate(struct dentry *dentry)
{
spin_lock(&dentry->d_lock);
- LASSERT(ll_d2d(dentry) != NULL);
+ LASSERT(ll_d2d(dentry));
ll_d2d(dentry)->lld_invalid = 0;
spin_unlock(&dentry->d_lock);
}
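A note on the lli_flags pattern running through this header: the LLIF_* values are single-bit masks that are set and tested under lli_lock, as later hunks in this series do with LLIF_MDS_SIZE_LOCK. A small userspace sketch of the same pattern, with a pthread spinlock standing in for the kernel spinlock:

#include <pthread.h>
#include <stdio.h>

#define LLIF_DONE_WRITING       (1 << 2)        /* values mirror the enum above */
#define LLIF_SOM_DIRTY          (1 << 3)

static pthread_spinlock_t lli_lock;
static unsigned int lli_flags;

int main(void)
{
        pthread_spin_init(&lli_lock, PTHREAD_PROCESS_PRIVATE);

        pthread_spin_lock(&lli_lock);
        lli_flags |= LLIF_SOM_DIRTY;    /* attribute update owed to the MDS */
        pthread_spin_unlock(&lli_lock);

        printf("SOM dirty: %d\n", !!(lli_flags & LLIF_SOM_DIRTY));
        return 0;
}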
diff --git a/drivers/staging/lustre/lustre/llite/llite_lib.c b/drivers/staging/lustre/lustre/llite/llite_lib.c
index 446e4b8..1f3e8e8 100644
--- a/drivers/staging/lustre/lustre/llite/llite_lib.c
+++ b/drivers/staging/lustre/lustre/llite/llite_lib.c
@@ -102,8 +102,6 @@
sbi->ll_ra_info.ra_max_pages = sbi->ll_ra_info.ra_max_pages_per_file;
sbi->ll_ra_info.ra_max_read_ahead_whole_pages =
SBI_DEFAULT_READAHEAD_WHOLE_MAX;
- INIT_LIST_HEAD(&sbi->ll_conn_chain);
- INIT_LIST_HEAD(&sbi->ll_orphan_dentry_list);
ll_generate_random_uuid(uuid);
class_uuid_unparse(uuid, &sbi->ll_sb_uuid);
@@ -171,7 +169,7 @@
return -ENOMEM;
}
- if (llite_root != NULL) {
+ if (llite_root) {
err = ldebugfs_register_mountpoint(llite_root, sb, dt, md);
if (err < 0)
CERROR("could not register mount in <debugfs>/lustre/llite\n");
@@ -204,7 +202,8 @@
if (OBD_FAIL_CHECK(OBD_FAIL_MDC_LIGHTWEIGHT))
/* flag mdc connection as lightweight, only used for test
- * purpose, use with care */
+ * purpose, use with care
+ */
data->ocd_connect_flags |= OBD_CONNECT_LIGHTWEIGHT;
data->ocd_ibits_known = MDS_INODELOCK_FULL;
@@ -252,7 +251,8 @@
/* For mount, we only need fs info from MDT0, and also in DNE, it
* can make sure the client can be mounted as long as MDT0 is
- * available */
+ * available
+ */
err = obd_statfs(NULL, sbi->ll_md_exp, osfs,
cfs_time_shift_64(-OBD_STATFS_CACHE_SECONDS),
OBD_STATFS_FOR_MDT0);
@@ -265,7 +265,8 @@
* we can access the MDC export directly and exp_connect_flags will
* be non-zero, but if accessing an upgraded 2.1 server it will
* have the correct flags filled in.
- * XXX: fill in the LMV exp_connect_flags from MDC(s). */
+ * XXX: fill in the LMV exp_connect_flags from MDC(s).
+ */
valid = exp_connect_flags(sbi->ll_md_exp) & CLIENT_CONNECT_MDT_REQD;
if (exp_connect_flags(sbi->ll_md_exp) != 0 &&
valid != CLIENT_CONNECT_MDT_REQD) {
@@ -382,7 +383,8 @@
/* OBD_CONNECT_CKSUM should always be set, even if checksums are
* disabled by default, because it can still be enabled on the
* fly via /sys. As a consequence, we still need to come to an
- * agreement on the supported algorithms at connect time */
+ * agreement on the supported algorithms at connect time
+ */
data->ocd_connect_flags |= OBD_CONNECT_CKSUM;
if (OBD_FAIL_CHECK(OBD_FAIL_OSC_CKSUM_ADLER_ONLY))
@@ -453,7 +455,8 @@
#endif
/* make root inode
- * XXX: move this to after cbd setup? */
+ * XXX: move this to after cbd setup?
+ */
valid = OBD_MD_FLGETATTR | OBD_MD_FLBLOCKS;
if (sbi->ll_flags & LL_SBI_RMT_CLIENT)
valid |= OBD_MD_FLRMTPERM;
@@ -493,7 +496,7 @@
md_free_lustre_md(sbi->ll_md_exp, &lmd);
ptlrpc_req_finished(request);
- if (root == NULL || IS_ERR(root)) {
+ if (!root) {
if (lmd.lsm)
obd_free_memmd(sbi->ll_dt_exp, &lmd.lsm);
#ifdef CONFIG_FS_POSIX_ACL
@@ -502,8 +505,7 @@
lmd.posix_acl = NULL;
}
#endif
- err = IS_ERR(root) ? PTR_ERR(root) : -EBADF;
- root = NULL;
+ err = -EBADF;
CERROR("lustre_lite: bad iget4 for root\n");
goto out_root;
}
@@ -532,7 +534,7 @@
&sbi->ll_cache, NULL);
sb->s_root = d_make_root(root);
- if (sb->s_root == NULL) {
+ if (!sb->s_root) {
CERROR("%s: can't make root dentry\n",
ll_get_fsname(sb, NULL, 0));
err = -ENOMEM;
@@ -543,11 +545,13 @@
/* We set sb->s_dev equal on all lustre clients in order to support
* NFS export clustering. NFSD requires that the FSID be the same
- * on all clients. */
+ * on all clients.
+ */
/* s_dev is also used in lt_compare() to compare two fs, but that is
- * only a node-local comparison. */
+ * only a node-local comparison.
+ */
uuid = obd_get_uuid(sbi->ll_md_exp);
- if (uuid != NULL) {
+ if (uuid) {
sb->s_dev = get_uuid2int(uuid->uuid, strlen(uuid->uuid));
get_uuid2fsid(uuid->uuid, strlen(uuid->uuid), &sbi->ll_fsid);
}
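The hunk above keeps every client's sb->s_dev (and hence the NFS FSID) identical by hashing the MDC export UUID string. A userspace sketch of the idea; the hash below is an arbitrary stand-in, not the real get_uuid2int():

#include <stdio.h>
#include <string.h>

static unsigned int uuid2int_stub(const char *uuid, size_t len)
{
        unsigned int h = 5381;  /* djb2, purely illustrative */
        size_t i;

        for (i = 0; i < len; i++)
                h = h * 33 + (unsigned char)uuid[i];
        return h;
}

int main(void)
{
        const char *uuid = "lustre-MDT0000_UUID";

        /* same UUID on every client => same s_dev => one NFS FSID */
        printf("s_dev = %#x\n", uuid2int_stub(uuid, strlen(uuid)));
        return 0;
}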
@@ -619,13 +623,12 @@
cl_sb_fini(sb);
- list_del(&sbi->ll_conn_chain);
-
obd_fid_fini(sbi->ll_dt_exp->exp_obd);
obd_disconnect(sbi->ll_dt_exp);
sbi->ll_dt_exp = NULL;
/* wait till all OSCs are gone, since cl_cache is accessing sbi.
- * see LU-2543. */
+ * see LU-2543.
+ */
obd_zombie_barrier();
ldebugfs_unregister_mountpoint(sbi);
@@ -646,7 +649,8 @@
sbi = ll_s2sbi(sb);
/* we need to restore the s_dev changed for clustered NFS before
* put_super because new kernels have cached s_dev and change sb->s_dev
- * in put_super not affected real removing devices */
+ * in put_super, where changing it does not affect the real device removal
+ */
if (sbi) {
sb->s_dev = sbi->ll_sdev_orig;
sbi->ll_umounting = 1;
@@ -777,7 +781,7 @@
next:
/* Find next opt */
s2 = strchr(s1, ',');
- if (s2 == NULL)
+ if (!s2)
break;
s1 = s2 + 1;
}
@@ -797,7 +801,6 @@
/* Do not set lli_fid, it has been initialized already. */
fid_zero(&lli->lli_pfid);
INIT_LIST_HEAD(&lli->lli_close_list);
- atomic_set(&lli->lli_open_count, 0);
lli->lli_rmtperm_time = 0;
lli->lli_pending_och = NULL;
lli->lli_mds_read_och = NULL;
@@ -890,8 +893,9 @@
sb->s_d_op = &ll_d_ops;
/* Generate a string unique to this super, in case some joker tries
- to mount the same fs at two mount points.
- Use the address of the super itself.*/
+ * to mount the same fs at two mount points.
+ * Use the address of the super itself.
+ */
cfg->cfg_instance = sb;
cfg->cfg_uuid = lsi->lsi_llsbi->ll_sb_uuid;
cfg->cfg_callback = class_config_llog_handler;
@@ -904,7 +908,7 @@
/* Profile set with LCFG_MOUNTOPT so we can find our mdc and osc obds */
lprof = class_get_profile(profilenm);
- if (lprof == NULL) {
+ if (!lprof) {
LCONSOLE_ERROR_MSG(0x156, "The client profile '%s' could not be read from the MGS. Does that filesystem exist?\n",
profilenm);
err = -EINVAL;
@@ -964,7 +968,8 @@
}
/* We need to set force before the lov_disconnect in
- lustre_common_put_super, since l_d cleans up osc's as well. */
+ * lustre_common_put_super, since l_d cleans up osc's as well.
+ */
if (force) {
next = 0;
while ((obd = class_devices_in_group(&sbi->ll_sb_uuid,
@@ -1036,8 +1041,8 @@
if (S_ISDIR(inode->i_mode)) {
/* these should have been cleared in ll_file_release */
- LASSERT(lli->lli_opendir_key == NULL);
- LASSERT(lli->lli_sai == NULL);
+ LASSERT(!lli->lli_opendir_key);
+ LASSERT(!lli->lli_sai);
LASSERT(lli->lli_opendir_pid == 0);
}
@@ -1065,7 +1070,7 @@
ll_xattr_cache_destroy(inode);
if (sbi->ll_flags & LL_SBI_RMT_CLIENT) {
- LASSERT(lli->lli_posix_acl == NULL);
+ LASSERT(!lli->lli_posix_acl);
if (lli->lli_remote_perms) {
free_rmtperm_hash(lli->lli_remote_perms);
lli->lli_remote_perms = NULL;
@@ -1074,7 +1079,7 @@
#ifdef CONFIG_FS_POSIX_ACL
else if (lli->lli_posix_acl) {
LASSERT(atomic_read(&lli->lli_posix_acl->a_refcount) == 1);
- LASSERT(lli->lli_remote_perms == NULL);
+ LASSERT(!lli->lli_remote_perms);
posix_acl_release(lli->lli_posix_acl);
lli->lli_posix_acl = NULL;
}
@@ -1115,7 +1120,8 @@
if (rc == -ENOENT) {
clear_nlink(inode);
/* Unlinked special device node? Or just a race?
- * Pretend we done everything. */
+ * Pretend we did everything.
+ */
if (!S_ISREG(inode->i_mode) &&
!S_ISDIR(inode->i_mode)) {
ia_valid = op_data->op_attr.ia_valid;
@@ -1138,7 +1144,8 @@
ia_valid = op_data->op_attr.ia_valid;
/* inode size will be in cl_setattr_ost, can't do it now since dirty
- * cache is not cleared yet. */
+ * cache is not cleared yet.
+ */
op_data->op_attr.ia_valid &= ~(TIMES_SET_FLAGS | ATTR_SIZE);
rc = simple_setattr(dentry, &op_data->op_attr);
op_data->op_attr.ia_valid = ia_valid;
@@ -1161,7 +1168,6 @@
struct ll_inode_info *lli = ll_i2info(inode);
int rc = 0;
- LASSERT(op_data != NULL);
if (!S_ISREG(inode->i_mode))
return 0;
@@ -1175,7 +1181,8 @@
rc = md_done_writing(ll_i2sbi(inode)->ll_md_exp, op_data, mod);
if (rc == -EAGAIN)
/* MDS has instructed us to obtain Size-on-MDS attribute
- * from OSTs and send setattr to back to MDS. */
+ * from OSTs and send setattr to back to MDS.
+ */
rc = ll_som_update(inode, op_data);
else if (rc)
CERROR("inode %lu mdc truncate failed: rc = %d\n",
@@ -1222,7 +1229,8 @@
/* The maximum Lustre file size is variable, based on the
* OST maximum object size and number of stripes. This
- * needs another check in addition to the VFS check above. */
+ * needs another check in addition to the VFS check above.
+ */
if (attr->ia_size > ll_file_maxbytes(inode)) {
CDEBUG(D_INODE, "file "DFID" too large %llu > %llu\n",
PFID(&lli->lli_fid), attr->ia_size,
@@ -1270,7 +1278,8 @@
}
/* We always do an MDS RPC, even if we're only changing the size;
- * only the MDS knows whether truncate() should fail with -ETXTBUSY */
+ * only the MDS knows whether truncate() should fail with -ETXTBUSY
+ */
op_data = kzalloc(sizeof(*op_data), GFP_NOFS);
if (!op_data)
@@ -1304,7 +1313,8 @@
/* if not in HSM import mode, clear size attr for released file
* we clear the attribute send to MDT in op_data, not the original
* received from caller in attr which is used later to
- * decide return code */
+ * decide return code
+ */
if (file_is_released && (attr->ia_valid & ATTR_SIZE) && !hsm_import)
op_data->op_attr.ia_valid &= ~ATTR_SIZE;
@@ -1342,7 +1352,8 @@
* extent lock (new_size:EOF for truncate). It may seem
* excessive to send mtime/atime updates to OSTs when not
* setting times to past, but it is necessary due to possible
- * time de-synchronization between MDT inode and OST objects */
+ * time de-synchronization between MDT inode and OST objects
+ */
if (attr->ia_valid & ATTR_SIZE)
down_write(&lli->lli_trunc_sem);
rc = cl_setattr_ost(inode, attr);
@@ -1470,7 +1481,8 @@
/* We need to downshift for all 32-bit kernels, because we can't
* tell if the kernel is being called via sys_statfs64() or not.
* Stop before overflowing f_bsize - in which case it is better
- * to just risk EOVERFLOW if caller is using old sys_statfs(). */
+ * to just risk EOVERFLOW if caller is using old sys_statfs().
+ */
if (sizeof(long) < 8) {
while (osfs.os_blocks > ~0UL && sfs->f_bsize < 0x40000000) {
sfs->f_bsize <<= 1;
@@ -1514,7 +1526,7 @@
struct ll_sb_info *sbi = ll_i2sbi(inode);
LASSERT((lsm != NULL) == ((body->valid & OBD_MD_FLEASIZE) != 0));
- if (lsm != NULL) {
+ if (lsm) {
if (!lli->lli_has_smd &&
!(sbi->ll_flags & LL_SBI_LAYOUT_LOCK))
cl_file_inode_init(inode, md);
@@ -1599,12 +1611,13 @@
if (exp_connect_som(ll_i2mdexp(inode)) &&
S_ISREG(inode->i_mode)) {
struct lustre_handle lockh;
- ldlm_mode_t mode;
+ enum ldlm_mode mode;
/* As it is possible a blocking ast has been processed
* by this time, we need to check there is an UPDATE
* lock on the client and set LLIF_MDS_SIZE_LOCK holding
- * it. */
+ * it.
+ */
mode = ll_take_md_lock(inode, MDS_INODELOCK_UPDATE,
&lockh, LDLM_FL_CBPENDING,
LCK_CR | LCK_CW |
@@ -1617,7 +1630,8 @@
inode->i_ino, lli->lli_flags);
} else {
/* Use old size assignment to avoid
- * deadlock bz14138 & bz14326 */
+ * deadlock bz14138 & bz14326
+ */
i_size_write(inode, body->size);
spin_lock(&lli->lli_lock);
lli->lli_flags |= LLIF_MDS_SIZE_LOCK;
@@ -1627,7 +1641,8 @@
}
} else {
/* Use old size assignment to avoid
- * deadlock bz14138 & bz14326 */
+ * deadlock bz14138 & bz14326
+ */
i_size_write(inode, body->size);
CDEBUG(D_VFSTRACE, "inode=%lu, updating i_size %llu\n",
@@ -1657,7 +1672,8 @@
/* Core attributes from the MDS first. This is a new inode, and
* the VFS doesn't zero times in the core inode so we have to do
* it ourselves. They will be overwritten by either MDS or OST
- * attributes - we just need to make sure they aren't newer. */
+ * attributes - we just need to make sure they aren't newer.
+ */
LTIME_S(inode->i_mtime) = 0;
LTIME_S(inode->i_atime) = 0;
LTIME_S(inode->i_ctime) = 0;
@@ -1689,9 +1705,10 @@
{
struct cl_inode_info *lli = cl_i2info(inode);
- if (S_ISREG(inode->i_mode) && lli->lli_clob != NULL)
+ if (S_ISREG(inode->i_mode) && lli->lli_clob)
/* discard all dirty pages before truncating them, required by
- * osc_extent implementation at LU-1030. */
+ * osc_extent implementation at LU-1030.
+ */
cl_sync_file_range(inode, 0, OBD_OBJECT_EOF,
CL_FSYNC_DISCARD, 1);
@@ -1831,7 +1848,7 @@
sb->s_count, atomic_read(&sb->s_active));
obd = class_exp2obd(sbi->ll_md_exp);
- if (obd == NULL) {
+ if (!obd) {
CERROR("Invalid MDC connection handle %#llx\n",
sbi->ll_md_exp->exp_handle.h_cookie);
return;
@@ -1839,7 +1856,7 @@
obd->obd_force = 1;
obd = class_exp2obd(sbi->ll_dt_exp);
- if (obd == NULL) {
+ if (!obd) {
CERROR("Invalid LOV connection handle %#llx\n",
sbi->ll_dt_exp->exp_handle.h_cookie);
return;
@@ -1941,7 +1958,7 @@
struct super_block *sb, struct lookup_intent *it)
{
struct ll_sb_info *sbi = NULL;
- struct lustre_md md;
+ struct lustre_md md = { NULL };
int rc;
LASSERT(*inode || sb);
@@ -1954,7 +1971,7 @@
if (*inode) {
ll_update_inode(*inode, &md);
} else {
- LASSERT(sb != NULL);
+ LASSERT(sb);
/*
* At this point server returns to client's same fid as client
@@ -1965,15 +1982,14 @@
*inode = ll_iget(sb, cl_fid_build_ino(&md.body->fid1,
sbi->ll_flags & LL_SBI_32BIT_API),
&md);
- if (*inode == NULL || IS_ERR(*inode)) {
+ if (!*inode) {
#ifdef CONFIG_FS_POSIX_ACL
if (md.posix_acl) {
posix_acl_release(md.posix_acl);
md.posix_acl = NULL;
}
#endif
- rc = IS_ERR(*inode) ? PTR_ERR(*inode) : -ENOMEM;
- *inode = NULL;
+ rc = -ENOMEM;
CERROR("new_inode -fatal: rc %d\n", rc);
goto out;
}
@@ -1986,14 +2002,15 @@
* 1. proc1: mdt returns a lsm but not granting layout
* 2. layout was changed by another client
* 3. proc2: refresh layout and layout lock granted
- * 4. proc1: to apply a stale layout */
- if (it != NULL && it->d.lustre.it_lock_mode != 0) {
+ * 4. proc1: to apply a stale layout
+ */
+ if (it && it->d.lustre.it_lock_mode != 0) {
struct lustre_handle lockh;
struct ldlm_lock *lock;
lockh.cookie = it->d.lustre.it_lock_handle;
lock = ldlm_handle2lock(&lockh);
- LASSERT(lock != NULL);
+ LASSERT(lock);
if (ldlm_has_layout(lock)) {
struct cl_object_conf conf;
@@ -2008,7 +2025,7 @@
}
out:
- if (md.lsm != NULL)
+ if (md.lsm)
obd_free_memmd(sbi->ll_dt_exp, &md.lsm);
md_free_lustre_md(sbi->ll_md_exp, &md);
@@ -2099,7 +2116,8 @@
LASSERT(s2lsi((struct super_block *)sb)->lsi_lmd->lmd_magic == LMD_MAGIC);
/* Note we have not called client_common_fill_super yet, so
- proc fns must be able to handle that! */
+ * proc fns must be able to handle that!
+ */
rc = class_process_proc_param(PARAM_LLITE, lvars.obd_vars,
lcfg, sb);
if (rc > 0)
@@ -2113,15 +2131,13 @@
const char *name, int namelen,
int mode, __u32 opc, void *data)
{
- LASSERT(i1 != NULL);
-
if (namelen > ll_i2sbi(i1)->ll_namelen)
return ERR_PTR(-ENAMETOOLONG);
- if (op_data == NULL)
+ if (!op_data)
op_data = kzalloc(sizeof(*op_data), GFP_NOFS);
- if (op_data == NULL)
+ if (!op_data)
return ERR_PTR(-ENOMEM);
ll_i2gids(op_data->op_suppgids, i1, i2);
@@ -2141,8 +2157,8 @@
op_data->op_cap = cfs_curproc_cap_pack();
op_data->op_bias = 0;
op_data->op_cli_flags = 0;
- if ((opc == LUSTRE_OPC_CREATE) && (name != NULL) &&
- filename_is_volatile(name, namelen, NULL))
+ if ((opc == LUSTRE_OPC_CREATE) && name &&
+ filename_is_volatile(name, namelen, NULL))
op_data->op_bias |= MDS_CREATE_VOLATILE;
op_data->op_opc = opc;
op_data->op_mds = 0;
@@ -2150,7 +2166,8 @@
/* If the file is being opened after mknod() (normally due to NFS)
* try to use the default stripe data from parent directory for
- * allocating OST objects. Try to pass the parent FID to MDS. */
+ * allocating OST objects. Try to pass the parent FID to MDS.
+ */
if (opc == LUSTRE_OPC_CREATE && i1 == i2 && S_ISREG(i2->i_mode) &&
!ll_i2info(i2)->lli_has_smd) {
struct ll_inode_info *lli = ll_i2info(i2);
@@ -2177,7 +2194,7 @@
{
struct ll_sb_info *sbi;
- LASSERT((seq != NULL) && (dentry != NULL));
+ LASSERT(seq && dentry);
sbi = ll_s2sbi(dentry->d_sb);
if (sbi->ll_flags & LL_SBI_NOLCK)
@@ -2238,10 +2255,11 @@
char *ptr;
int len;
- if (buf == NULL) {
+ if (!buf) {
/* this means the caller wants to use static buffer
* and it doesn't care about race. Usually this is
- * in error reporting path */
+ * in error reporting path
+ */
buf = fsname_static;
buflen = sizeof(fsname_static);
}
@@ -2267,9 +2285,9 @@
/* this can be called inside spin lock so use GFP_ATOMIC. */
buf = (char *)__get_free_page(GFP_ATOMIC);
- if (buf != NULL) {
+ if (buf) {
dentry = d_find_alias(page->mapping->host);
- if (dentry != NULL)
+ if (dentry)
path = dentry_path_raw(dentry, buf, PAGE_SIZE);
}
@@ -2280,9 +2298,9 @@
PFID(&obj->cob_header.coh_lu.loh_fid),
(path && !IS_ERR(path)) ? path : "", ioret);
- if (dentry != NULL)
+ if (dentry)
dput(dentry);
- if (buf != NULL)
+ if (buf)
free_page((unsigned long)buf);
}
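The statfs downshift in the hunk above (around ll_statfs) rewards a worked example: on 32-bit kernels the block counts must fit in an unsigned long, so f_bsize is doubled and the counts halved until they do, stopping before f_bsize overflows. A userspace model, where 0xffffffff stands in for ~0UL on a 32-bit kernel:

#include <stdio.h>

int main(void)
{
        unsigned long long blocks = 1ULL << 36; /* 64-bit count from the OSTs */
        unsigned long bsize = 4096;

        while (blocks > 0xffffffffULL && bsize < 0x40000000) {
                bsize <<= 1;
                blocks >>= 1;
        }
        printf("f_bsize=%lu f_blocks=%llu\n", bsize, blocks);
        return 0;
}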
diff --git a/drivers/staging/lustre/lustre/llite/llite_mmap.c b/drivers/staging/lustre/lustre/llite/llite_mmap.c
index bbae95c..31abdb7f 100644
--- a/drivers/staging/lustre/lustre/llite/llite_mmap.c
+++ b/drivers/staging/lustre/lustre/llite/llite_mmap.c
@@ -72,7 +72,7 @@
LASSERT(!down_write_trylock(&mm->mmap_sem));
for (vma = find_vma(mm, addr);
- vma != NULL && vma->vm_start < (addr + count); vma = vma->vm_next) {
+ vma && vma->vm_start < (addr + count); vma = vma->vm_next) {
if (vma->vm_ops && vma->vm_ops == &ll_file_vm_ops &&
vma->vm_flags & VM_SHARED) {
ret = vma;
@@ -119,13 +119,13 @@
*/
env = cl_env_nested_get(nest);
if (IS_ERR(env))
- return ERR_PTR(-EINVAL);
+ return ERR_PTR(-EINVAL);
*env_ret = env;
io = ccc_env_thread_io(env);
io->ci_obj = ll_i2info(inode)->lli_clob;
- LASSERT(io->ci_obj != NULL);
+ LASSERT(io->ci_obj);
fio = &io->u.ci_fault;
fio->ft_index = index;
@@ -136,7 +136,7 @@
* the kernel will not read other pages not covered by ldlm in
* filemap_nopage. we do our readahead in ll_readpage.
*/
- if (ra_flags != NULL)
+ if (ra_flags)
*ra_flags = vma->vm_flags & (VM_RAND_READ|VM_SEQ_READ);
vma->vm_flags &= ~VM_SEQ_READ;
vma->vm_flags |= VM_RAND_READ;
@@ -151,8 +151,7 @@
LASSERT(cio->cui_cl.cis_io == io);
- /* mmap lock must be MANDATORY it has to cache
- * pages. */
+ /* mmap lock must be MANDATORY as it has to cache pages. */
io->ci_lockreq = CILR_MANDATORY;
cio->cui_fd = fd;
} else {
@@ -178,8 +177,6 @@
struct inode *inode;
struct ll_inode_info *lli;
- LASSERT(vmpage != NULL);
-
io = ll_fault_io_init(vma, &env, &nest, vmpage->index, NULL);
if (IS_ERR(io)) {
result = PTR_ERR(io);
@@ -201,7 +198,8 @@
/* we grab lli_trunc_sem to exclude truncate case.
* Otherwise, we could add dirty pages into osc cache
- * while truncate is on-going. */
+ * while truncate is on-going.
+ */
inode = ccc_object_inode(io->ci_obj);
lli = ll_i2info(inode);
down_read(&lli->lli_trunc_sem);
@@ -217,12 +215,13 @@
struct ll_inode_info *lli = ll_i2info(inode);
lock_page(vmpage);
- if (vmpage->mapping == NULL) {
+ if (!vmpage->mapping) {
unlock_page(vmpage);
/* page was truncated and lock was cancelled, return
* ENODATA so that VM_FAULT_NOPAGE will be returned
- * to handle_mm_fault(). */
+ * to handle_mm_fault().
+ */
if (result == 0)
result = -ENODATA;
} else if (!PageDirty(vmpage)) {
@@ -315,12 +314,13 @@
result = cl_io_loop(env, io);
/* ft_flags are only valid if we reached
- * the call to filemap_fault */
+ * the call to filemap_fault
+ */
if (vio->u.fault.fault.ft_flags_valid)
fault_ret = vio->u.fault.fault.ft_flags;
vmpage = vio->u.fault.ft_vmpage;
- if (result != 0 && vmpage != NULL) {
+ if (result != 0 && vmpage) {
page_cache_release(vmpage);
vmf->page = NULL;
}
@@ -344,9 +344,10 @@
int result;
sigset_t set;
- /* Only SIGKILL and SIGTERM is allowed for fault/nopage/mkwrite
+ /* Only SIGKILL and SIGTERM are allowed for fault/nopage/mkwrite
* so that it can be killed by admin but not cause segfault by
- * other signals. */
+ * other signals.
+ */
set = cfs_block_sigsinv(sigmask(SIGKILL) | sigmask(SIGTERM));
restart:
@@ -357,7 +358,7 @@
/* check if this page has been truncated */
lock_page(vmpage);
- if (unlikely(vmpage->mapping == NULL)) { /* unlucky */
+ if (unlikely(!vmpage->mapping)) { /* unlucky */
unlock_page(vmpage);
page_cache_release(vmpage);
vmf->page = NULL;
@@ -447,7 +448,8 @@
}
/* XXX put nice comment here. talk about __free_pte -> dirty pages and
- * nopage's reference passing to the pte */
+ * nopage's reference passing to the pte
+ */
int ll_teardown_mmaps(struct address_space *mapping, __u64 first, __u64 last)
{
int rc = -ENOENT;
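The cfs_block_sigsinv() call in the fault path above implements a simple policy: block everything except SIGKILL and SIGTERM for the duration of the fault, then restore the old mask. A userspace model of that policy with sigprocmask():

#include <signal.h>
#include <stdio.h>

int main(void)
{
        sigset_t block, old;

        sigfillset(&block);
        sigdelset(&block, SIGKILL);     /* unblockable anyway, kept for clarity */
        sigdelset(&block, SIGTERM);
        sigprocmask(SIG_SETMASK, &block, &old);

        /* ... fault work that an admin can still kill runs here ... */

        sigprocmask(SIG_SETMASK, &old, NULL);
        printf("signal mask restored\n");
        return 0;
}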
diff --git a/drivers/staging/lustre/lustre/llite/llite_nfs.c b/drivers/staging/lustre/lustre/llite/llite_nfs.c
index 9f64dd1..74a8868 100644
--- a/drivers/staging/lustre/lustre/llite/llite_nfs.c
+++ b/drivers/staging/lustre/lustre/llite/llite_nfs.c
@@ -105,7 +105,8 @@
return ERR_PTR(rc);
/* Because inode is NULL, ll_prep_md_op_data can not
- * be used here. So we allocate op_data ourselves */
+ * be used here. So we allocate op_data ourselves
+ */
op_data = kzalloc(sizeof(*op_data), GFP_NOFS);
if (!op_data)
return ERR_PTR(-ENOMEM);
@@ -141,10 +142,11 @@
struct inode *inode;
struct dentry *result;
- CDEBUG(D_INFO, "Get dentry for fid: "DFID"\n", PFID(fid));
if (!fid_is_sane(fid))
return ERR_PTR(-ESTALE);
+ CDEBUG(D_INFO, "Get dentry for fid: " DFID "\n", PFID(fid));
+
inode = search_inode_for_lustre(sb, fid);
if (IS_ERR(inode))
return ERR_CAST(inode);
@@ -160,7 +162,7 @@
* We have to find the parent to tell MDS how to init lov objects.
*/
if (S_ISREG(inode->i_mode) && !ll_i2info(inode)->lli_has_smd &&
- parent != NULL) {
+ parent && !fid_is_zero(parent)) {
struct ll_inode_info *lli = ll_i2info(inode);
spin_lock(&lli->lli_lock);
@@ -174,8 +176,6 @@
return result;
}
-#define LUSTRE_NFS_FID 0x97
-
/**
* \a connectable - whether nfsd will connect itself or this should be
* done at lustre
@@ -188,20 +188,25 @@
static int ll_encode_fh(struct inode *inode, __u32 *fh, int *plen,
struct inode *parent)
{
+ int fileid_len = sizeof(struct lustre_nfs_fid) / 4;
struct lustre_nfs_fid *nfs_fid = (void *)fh;
CDEBUG(D_INFO, "encoding for (%lu,"DFID") maxlen=%d minlen=%d\n",
- inode->i_ino, PFID(ll_inode2fid(inode)), *plen,
- (int)sizeof(struct lustre_nfs_fid));
+ inode->i_ino, PFID(ll_inode2fid(inode)), *plen, fileid_len);
- if (*plen < sizeof(struct lustre_nfs_fid) / 4)
- return 255;
+ if (*plen < fileid_len) {
+ *plen = fileid_len;
+ return FILEID_INVALID;
+ }
nfs_fid->lnf_child = *ll_inode2fid(inode);
- nfs_fid->lnf_parent = *ll_inode2fid(parent);
- *plen = sizeof(struct lustre_nfs_fid) / 4;
+ if (parent)
+ nfs_fid->lnf_parent = *ll_inode2fid(parent);
+ else
+ fid_zero(&nfs_fid->lnf_parent);
+ *plen = fileid_len;
- return LUSTRE_NFS_FID;
+ return FILEID_LUSTRE;
}
static int ll_nfs_get_name_filldir(struct dir_context *ctx, const char *name,
@@ -209,7 +214,8 @@
unsigned type)
{
/* It is a hack to access lde_fid for comparison with lgd_fid.
- * So the input 'name' must be part of the 'lu_dirent'. */
+ * So the input 'name' must be part of the 'lu_dirent'.
+ */
struct lu_dirent *lde = container_of0(name, struct lu_dirent, lde_name);
struct ll_getname_data *lgd =
container_of(ctx, struct ll_getname_data, ctx);
@@ -259,7 +265,7 @@
{
struct lustre_nfs_fid *nfs_fid = (struct lustre_nfs_fid *)fid;
- if (fh_type != LUSTRE_NFS_FID)
+ if (fh_type != FILEID_LUSTRE)
return ERR_PTR(-EPROTO);
return ll_iget_for_nfs(sb, &nfs_fid->lnf_child, &nfs_fid->lnf_parent);
@@ -270,7 +276,7 @@
{
struct lustre_nfs_fid *nfs_fid = (struct lustre_nfs_fid *)fid;
- if (fh_type != LUSTRE_NFS_FID)
+ if (fh_type != FILEID_LUSTRE)
return ERR_PTR(-EPROTO);
return ll_iget_for_nfs(sb, &nfs_fid->lnf_parent, NULL);
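The ll_encode_fh() change above adopts the exportfs contract: when the caller's buffer is too small, write back the required length (in 32-bit words) and return FILEID_INVALID (255) rather than a private error value. A self-contained model of that contract; the structs are stand-ins, and only FILEID_INVALID matches the kernel's value:

#include <stdio.h>
#include <string.h>

#define FILEID_INVALID  0xff    /* same value the kernel's exportfs uses */

struct model_fid { unsigned long long seq, oid; };
struct model_nfs_fid { struct model_fid child, parent; };

static int encode_fh_stub(unsigned int *fh, int *plen,
                          const struct model_nfs_fid *src)
{
        int fileid_len = sizeof(*src) / 4;      /* length in 32-bit words */

        if (*plen < fileid_len) {
                *plen = fileid_len;     /* tell the caller what it needs */
                return FILEID_INVALID;
        }
        memcpy(fh, src, sizeof(*src));
        *plen = fileid_len;
        return 0x97;                    /* stand-in for FILEID_LUSTRE */
}

int main(void)
{
        unsigned int fh[16];
        struct model_nfs_fid fid = { { 1, 2 }, { 3, 4 } };
        int len = 1;                    /* deliberately too small */
        int type;

        if (encode_fh_stub(fh, &len, &fid) == FILEID_INVALID)
                printf("short buffer, need %d words\n", len);
        len = sizeof(fh) / 4;
        type = encode_fh_stub(fh, &len, &fid);
        printf("type %#x, len %d\n", type, len);
        return 0;
}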
diff --git a/drivers/staging/lustre/lustre/llite/llite_rmtacl.c b/drivers/staging/lustre/lustre/llite/llite_rmtacl.c
index b27c3f2..63e4847 100644
--- a/drivers/staging/lustre/lustre/llite/llite_rmtacl.c
+++ b/drivers/staging/lustre/lustre/llite/llite_rmtacl.c
@@ -125,12 +125,12 @@
struct rmtacl_ctl_entry *rce, *e;
rce = rce_alloc(key, ops);
- if (rce == NULL)
+ if (!rce)
return -ENOMEM;
spin_lock(&rct->rct_lock);
e = __rct_search(rct, key);
- if (unlikely(e != NULL)) {
+ if (unlikely(e)) {
CWARN("Unexpected stale rmtacl_entry found: [key: %d] [ops: %d]\n",
(int)key, ops);
rce_free(e);
@@ -213,7 +213,7 @@
struct eacl_entry *ee;
struct list_head *head = &et->et_entries[ee_hashfunc(key)];
- LASSERT(fid != NULL);
+ LASSERT(fid);
list_for_each_entry(ee, head, ee_list)
if (ee->ee_key == key) {
if (lu_fid_eq(&ee->ee_fid, fid) &&
@@ -256,12 +256,12 @@
struct eacl_entry *ee, *e;
ee = ee_alloc(key, fid, type, header);
- if (ee == NULL)
+ if (!ee)
return -ENOMEM;
spin_lock(&et->et_lock);
e = __et_search_del(et, key, fid, type);
- if (unlikely(e != NULL)) {
+ if (unlikely(e)) {
CWARN("Unexpected stale eacl_entry found: [key: %d] [fid: " DFID "] [type: %d]\n",
(int)key, PFID(fid), type);
ee_free(e);
diff --git a/drivers/staging/lustre/lustre/llite/lloop.c b/drivers/staging/lustre/lustre/llite/lloop.c
index 69c41af..3f53292 100644
--- a/drivers/staging/lustre/lustre/llite/lloop.c
+++ b/drivers/staging/lustre/lustre/llite/lloop.c
@@ -211,9 +211,8 @@
return io->ci_result;
io->ci_lockreq = CILR_NEVER;
- LASSERT(head != NULL);
rw = head->bi_rw;
- for (bio = head; bio != NULL; bio = bio->bi_next) {
+ for (bio = head; bio; bio = bio->bi_next) {
LASSERT(rw == bio->bi_rw);
offset = (pgoff_t)(bio->bi_iter.bi_sector << 9) + lo->lo_offset;
@@ -297,7 +296,7 @@
spin_lock_irq(&lo->lo_lock);
first = lo->lo_bio;
- if (unlikely(first == NULL)) {
+ if (unlikely(!first)) {
spin_unlock_irq(&lo->lo_lock);
return 0;
}
@@ -308,7 +307,7 @@
rw = first->bi_rw;
bio = &lo->lo_bio;
while (*bio && (*bio)->bi_rw == rw) {
- CDEBUG(D_INFO, "bio sector %llu size %u count %u vcnt%u \n",
+ CDEBUG(D_INFO, "bio sector %llu size %u count %u vcnt%u\n",
(unsigned long long)(*bio)->bi_iter.bi_sector,
(*bio)->bi_iter.bi_size,
page_count, (*bio)->bi_vcnt);
@@ -458,7 +457,7 @@
total_count, times, total_count / times);
}
- LASSERT(bio != NULL);
+ LASSERT(bio);
LASSERT(count <= atomic_read(&lo->lo_pending));
loop_handle_bio(lo, bio);
atomic_sub(count, &lo->lo_pending);
@@ -560,7 +559,7 @@
if (lo->lo_refcnt > count) /* we needed one fd for the ioctl */
return -EBUSY;
- if (filp == NULL)
+ if (!filp)
return -EINVAL;
spin_lock_irq(&lo->lo_lock);
@@ -625,11 +624,11 @@
case LL_IOC_LLOOP_INFO: {
struct lu_fid fid;
- if (lo->lo_backing_file == NULL) {
+ if (!lo->lo_backing_file) {
err = -ENOENT;
break;
}
- if (inode == NULL)
+ if (!inode)
inode = file_inode(lo->lo_backing_file);
if (lo->lo_state == LLOOP_BOUND)
fid = ll_i2info(inode)->lli_fid;
@@ -676,7 +675,7 @@
if (magic != ll_iocontrol_magic)
return LLIOC_CONT;
- if (disks == NULL) {
+ if (!disks) {
err = -ENODEV;
goto out1;
}
@@ -793,7 +792,7 @@
lloop_major, max_loop);
ll_iocontrol_magic = ll_iocontrol_register(lloop_ioctl, 2, cmdlist);
- if (ll_iocontrol_magic == NULL)
+ if (!ll_iocontrol_magic)
goto out_mem1;
loop_dev = kcalloc(max_loop, sizeof(*loop_dev), GFP_KERNEL);
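One detail of the lloop request handling above: bios are chained through bi_next, and a drained batch must share one direction, which the LASSERT(rw == bio->bi_rw) enforces. A userspace model of walking such a batch, with stand-in types:

#include <assert.h>
#include <stdio.h>

struct model_bio {
        int bi_rw;                      /* 0 = read, 1 = write */
        struct model_bio *bi_next;
};

static void handle_batch(struct model_bio *head)
{
        int rw = head->bi_rw;
        struct model_bio *bio;

        for (bio = head; bio; bio = bio->bi_next) {
                assert(bio->bi_rw == rw);       /* mixed batches are a bug */
                printf("submit %s bio\n", rw ? "write" : "read");
        }
}

int main(void)
{
        struct model_bio b2 = { 1, NULL };
        struct model_bio b1 = { 1, &b2 };

        handle_batch(&b1);
        return 0;
}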
diff --git a/drivers/staging/lustre/lustre/llite/lproc_llite.c b/drivers/staging/lustre/lustre/llite/lproc_llite.c
index f134ad9..f5fe928 100644
--- a/drivers/staging/lustre/lustre/llite/lproc_llite.c
+++ b/drivers/staging/lustre/lustre/llite/lproc_llite.c
@@ -43,7 +43,7 @@
#include "llite_internal.h"
#include "vvp_internal.h"
-/* /proc/lustre/llite mount point registration */
+/* debugfs llite mount point registration */
static struct file_operations ll_rw_extents_stats_fops;
static struct file_operations ll_rw_extents_stats_pp_fops;
static struct file_operations ll_rw_offset_stats_fops;
@@ -345,7 +345,8 @@
return rc;
/* Cap this at the current max readahead window size, the readahead
- * algorithm does this anyway so it's pointless to set it larger. */
+ * algorithm does this anyway so it's pointless to set it larger.
+ */
if (pages_number > sbi->ll_ra_info.ra_max_pages_per_file) {
CERROR("can't set max_read_ahead_whole_mb more than max_read_ahead_per_file_mb: %lu\n",
sbi->ll_ra_info.ra_max_pages_per_file >> (20 - PAGE_CACHE_SHIFT));
@@ -453,7 +454,7 @@
if (diff <= 0)
break;
- if (sbi->ll_dt_exp == NULL) { /* being initialized */
+ if (!sbi->ll_dt_exp) { /* being initialized */
rc = -ENODEV;
break;
}
@@ -966,9 +967,9 @@
name[MAX_STRING_SIZE] = '\0';
- LASSERT(sbi != NULL);
- LASSERT(mdc != NULL);
- LASSERT(osc != NULL);
+ LASSERT(sbi);
+ LASSERT(mdc);
+ LASSERT(osc);
/* Get fsname */
len = strlen(lsi->lsi_lmd->lmd_profile);
@@ -1012,7 +1013,7 @@
/* File operations stats */
sbi->ll_stats = lprocfs_alloc_stats(LPROC_LL_FILE_OPCODES,
LPROCFS_STATS_FLAG_NONE);
- if (sbi->ll_stats == NULL) {
+ if (!sbi->ll_stats) {
err = -ENOMEM;
goto out;
}
@@ -1039,7 +1040,7 @@
sbi->ll_ra_stats = lprocfs_alloc_stats(ARRAY_SIZE(ra_stat_string),
LPROCFS_STATS_FLAG_NONE);
- if (sbi->ll_ra_stats == NULL) {
+ if (!sbi->ll_ra_stats) {
err = -ENOMEM;
goto out;
}
diff --git a/drivers/staging/lustre/lustre/llite/namei.c b/drivers/staging/lustre/lustre/llite/namei.c
index da5f443..56d2d1d 100644
--- a/drivers/staging/lustre/lustre/llite/namei.c
+++ b/drivers/staging/lustre/lustre/llite/namei.c
@@ -118,7 +118,7 @@
ll_read_inode2(inode, md);
if (S_ISREG(inode->i_mode) &&
- ll_i2info(inode)->lli_clob == NULL) {
+ !ll_i2info(inode)->lli_clob) {
CDEBUG(D_INODE,
"%s: apply lsm %p to inode "DFID".\n",
ll_get_fsname(sb, NULL, 0), md->lsm,
@@ -127,7 +127,7 @@
}
if (rc != 0) {
iget_failed(inode);
- inode = ERR_PTR(rc);
+ inode = NULL;
} else
unlock_new_inode(inode);
} else if (!(inode->i_state & (I_FREEING | I_CLEAR)))
@@ -180,10 +180,11 @@
__u64 bits = lock->l_policy_data.l_inodebits.bits;
/* Inode is set to lock->l_resource->lr_lvb_inode
- * for mdc - bug 24555 */
- LASSERT(lock->l_ast_data == NULL);
+ * for mdc - bug 24555
+ */
+ LASSERT(!lock->l_ast_data);
- if (inode == NULL)
+ if (!inode)
break;
/* Invalidate all dentries associated with this inode */
@@ -202,7 +203,8 @@
}
/* For OPEN locks we differentiate between lock modes
- * LCK_CR, LCK_CW, LCK_PR - bug 22891 */
+ * LCK_CR, LCK_CW, LCK_PR - bug 22891
+ */
if (bits & MDS_INODELOCK_OPEN)
ll_have_md_lock(inode, &bits, lock->l_req_mode);
@@ -260,7 +262,7 @@
}
if ((bits & (MDS_INODELOCK_LOOKUP | MDS_INODELOCK_PERM)) &&
- inode->i_sb->s_root != NULL &&
+ inode->i_sb->s_root &&
!is_root_inode(inode))
ll_invalidate_aliases(inode);
@@ -285,15 +287,11 @@
/* Pack the required supplementary groups into the supplied groups array.
* If we don't need to use the groups from the target inode(s) then we
* instead pack one or more groups from the user's supplementary group
- * array in case it might be useful. Not needed if doing an MDS-side upcall. */
+ * array in case it might be useful. Not needed if doing an MDS-side upcall.
+ */
void ll_i2gids(__u32 *suppgids, struct inode *i1, struct inode *i2)
{
-#if 0
- int i;
-#endif
-
- LASSERT(i1 != NULL);
- LASSERT(suppgids != NULL);
+ LASSERT(i1);
suppgids[0] = ll_i2suppgid(i1);
@@ -301,22 +299,6 @@
suppgids[1] = ll_i2suppgid(i2);
else
suppgids[1] = -1;
-
-#if 0
- for (i = 0; i < current_ngroups; i++) {
- if (suppgids[0] == -1) {
- if (current_groups[i] != suppgids[1])
- suppgids[0] = current_groups[i];
- continue;
- }
- if (suppgids[1] == -1) {
- if (current_groups[i] != suppgids[0])
- suppgids[1] = current_groups[i];
- continue;
- }
- break;
- }
-#endif
}
/*
@@ -409,7 +391,8 @@
int rc = 0;
/* NB 1 request reference will be taken away by ll_intent_lock()
- * when I return */
+ * when I return
+ */
CDEBUG(D_DENTRY, "it %p it_disposition %x\n", it,
it->d.lustre.it_disposition);
if (!it_disposition(it, DISP_LOOKUP_NEG)) {
@@ -420,13 +403,14 @@
ll_set_lock_data(ll_i2sbi(parent)->ll_md_exp, inode, it, &bits);
/* We used to query real size from OSTs here, but actually
- this is not needed. For stat() calls size would be updated
- from subsequent do_revalidate()->ll_inode_revalidate_it() in
- 2.4 and
- vfs_getattr_it->ll_getattr()->ll_inode_revalidate_it() in 2.6
- Everybody else who needs correct file size would call
- ll_glimpse_size or some equivalent themselves anyway.
- Also see bug 7198. */
+ * this is not needed. For stat() calls size would be updated
+ * from subsequent do_revalidate()->ll_inode_revalidate_it() in
+ * 2.4 and
+ * vfs_getattr_it->ll_getattr()->ll_inode_revalidate_it() in 2.6
+ * Everybody else who needs correct file size would call
+ * ll_glimpse_size or some equivalent themselves anyway.
+ * Also see bug 7198.
+ */
}
/* Only hash *de if it is unhashed (new dentry).
@@ -443,9 +427,10 @@
*de = alias;
} else if (!it_disposition(it, DISP_LOOKUP_NEG) &&
!it_disposition(it, DISP_OPEN_CREATE)) {
- /* With DISP_OPEN_CREATE dentry will
- instantiated in ll_create_it. */
- LASSERT(d_inode(*de) == NULL);
+ /* With DISP_OPEN_CREATE dentry will be
+ * instantiated in ll_create_it.
+ */
+ LASSERT(!d_inode(*de));
d_instantiate(*de, inode);
}
@@ -498,7 +483,7 @@
if (d_mountpoint(dentry))
CERROR("Tell Peter, lookup on mtpt, it %s\n", LL_IT2STR(it));
- if (it == NULL || it->it_op == IT_GETXATTR)
+ if (!it || it->it_op == IT_GETXATTR)
it = &lookup_it;
if (it->it_op == IT_GETATTR) {
@@ -557,7 +542,7 @@
out:
if (req)
ptlrpc_req_finished(req);
- if (it->it_op == IT_GETATTR && (retval == NULL || retval == dentry))
+ if (it->it_op == IT_GETATTR && (!retval || retval == dentry))
ll_statahead_mark(parent, dentry);
return retval;
}
@@ -582,7 +567,7 @@
itp = &it;
de = ll_lookup_it(parent, dentry, itp, 0);
- if (itp != NULL)
+ if (itp)
ll_intent_release(itp);
return de;
@@ -622,7 +607,7 @@
de = ll_lookup_it(dir, dentry, it, lookup_flags);
if (IS_ERR(de))
rc = PTR_ERR(de);
- else if (de != NULL)
+ else if (de)
dentry = de;
if (!rc) {
@@ -631,7 +616,7 @@
rc = ll_create_it(dir, dentry, mode, it);
if (rc) {
/* We dget in ll_splice_alias. */
- if (de != NULL)
+ if (de)
dput(de);
goto out_release;
}
@@ -655,7 +640,7 @@
/* We dget in ll_splice_alias. finish_open takes
* care of dget for fd open.
*/
- if (de != NULL)
+ if (de)
dput(de);
}
} else {
@@ -693,7 +678,8 @@
/* We asked for a lock on the directory, but were granted a
* lock on the inode. Since we finally have an inode pointer,
- * stuff it in the lock. */
+ * stuff it in the lock.
+ */
CDEBUG(D_DLMTRACE, "setting l_ast_data to inode %p (%lu/%u)\n",
inode, inode->i_ino, inode->i_generation);
ll_set_lock_data(sbi->ll_md_exp, inode, it, NULL);
@@ -767,7 +753,7 @@
int tgt_len = 0;
int err;
- if (unlikely(tgt != NULL))
+ if (unlikely(tgt))
tgt_len = strlen(tgt) + 1;
op_data = ll_prep_md_op_data(NULL, dir, NULL,
@@ -888,10 +874,11 @@
/* The MDS sent back the EA because we unlinked the last reference
* to this file. Use this EA to unlink the objects on the OST.
* It's opaque so we don't swab here; we leave it to obd_unpackmd() to
- * check it is complete and sensible. */
+ * check it is complete and sensible.
+ */
eadata = req_capsule_server_sized_get(&request->rq_pill, &RMF_MDT_MD,
body->eadatasize);
- LASSERT(eadata != NULL);
+ LASSERT(eadata);
rc = obd_unpackmd(ll_i2dtexp(dir), &lsm, eadata, body->eadatasize);
if (rc < 0) {
@@ -901,7 +888,7 @@
LASSERT(rc >= sizeof(*lsm));
oa = kmem_cache_alloc(obdo_cachep, GFP_NOFS | __GFP_ZERO);
- if (oa == NULL) {
+ if (!oa) {
rc = -ENOMEM;
goto out_free_memmd;
}
@@ -917,7 +904,7 @@
&RMF_LOGCOOKIES,
sizeof(struct llog_cookie) *
lsm->lsm_stripe_count);
- if (oti.oti_logcookies == NULL) {
+ if (!oti.oti_logcookies) {
oa->o_valid &= ~OBD_MD_FLCOOKIE;
body->valid &= ~OBD_MD_FLCOOKIE;
}
@@ -938,7 +925,8 @@
/* ll_unlink() doesn't update the inode with the new link count.
* Instead, ll_ddelete() and ll_d_iput() will update it based upon if there
* is any lock existing. They will recycle dentries and inodes based upon locks
- * too. b=20433 */
+ * too. b=20433
+ */
static int ll_unlink(struct inode *dir, struct dentry *dentry)
{
struct ptlrpc_request *request = NULL;
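For the ll_i2gids() cleanup above (the #if 0 removal): what remains packs at most two supplementary GIDs, one per target inode, with -1 marking an unused slot. A trivial userspace rendering of that packing, with stand-in names:

#include <stdio.h>

static void i2gids_stub(int *suppgids, int gid1, int gid2, int have_second)
{
        suppgids[0] = gid1;
        suppgids[1] = have_second ? gid2 : -1;  /* -1 == unused slot */
}

int main(void)
{
        int suppgids[2];

        i2gids_stub(suppgids, 1000, 0, 0);
        printf("suppgids = { %d, %d }\n", suppgids[0], suppgids[1]);
        return 0;
}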
diff --git a/drivers/staging/lustre/lustre/llite/rw.c b/drivers/staging/lustre/lustre/llite/rw.c
index 6b587d6..671039a 100644
--- a/drivers/staging/lustre/lustre/llite/rw.c
+++ b/drivers/staging/lustre/lustre/llite/rw.c
@@ -70,9 +70,9 @@
struct cl_page *page = lcc->lcc_page;
LASSERT(lcc->lcc_cookie == current);
- LASSERT(env != NULL);
+ LASSERT(env);
- if (page != NULL) {
+ if (page) {
lu_ref_del(&page->cp_reference, "cl_io", io);
cl_page_put(env, page);
}
@@ -97,7 +97,7 @@
int result = 0;
clob = ll_i2info(vmpage->mapping->host)->lli_clob;
- LASSERT(clob != NULL);
+ LASSERT(clob);
env = cl_env_get(&refcheck);
if (IS_ERR(env))
@@ -111,7 +111,7 @@
cio = ccc_env_io(env);
io = cio->cui_cl.cis_io;
- if (io == NULL && create) {
+ if (!io && create) {
struct inode *inode = vmpage->mapping->host;
loff_t pos;
@@ -120,7 +120,8 @@
/* this is too bad. Someone is trying to write the
* page w/o holding inode mutex. This means we can
- * add dirty pages into cache during truncate */
+ * add dirty pages into cache during truncate
+ */
CERROR("Proc %s is dirtying page w/o inode lock, this will break truncate\n",
current->comm);
dump_stack();
@@ -163,12 +164,11 @@
}
lcc->lcc_io = io;
- if (io == NULL)
+ if (!io)
result = -EIO;
if (result == 0) {
struct cl_page *page;
- LASSERT(io != NULL);
LASSERT(io->ci_state == CIS_IO_GOING);
LASSERT(cio->cui_fd == LUSTRE_FPRIVATE(file));
page = cl_page_find(env, clob, vmpage->index, vmpage,
@@ -240,7 +240,8 @@
ll_cl_fini(lcc);
}
/* returning 0 in prepare assumes commit must be called
- * afterwards */
+ * afterwards
+ */
} else {
result = PTR_ERR(lcc);
}
@@ -296,8 +297,8 @@
* to get an ra budget that is larger than the remaining readahead pages
* and reach here at exactly the same time. They will compute \a ret to
* consume the remaining pages, but will fail at atomic_add_return() and
- * get a zero ra window, although there is still ra space remaining. - Jay */
-
+ * get a zero ra window, although there is still ra space remaining. - Jay
+ */
static unsigned long ll_ra_count_get(struct ll_sb_info *sbi,
struct ra_io_arg *ria,
unsigned long pages)
@@ -307,7 +308,8 @@
/* If read-ahead pages left are less than 1M, do not do read-ahead,
* otherwise it will form small read RPC(< 1M), which hurt server
- * performance a lot. */
+ * performance a lot.
+ */
ret = min(ra->ra_max_pages - atomic_read(&ra->ra_cur_pages), pages);
if (ret < 0 || ret < min_t(long, PTLRPC_MAX_BRW_PAGES, pages)) {
ret = 0;
@@ -324,7 +326,8 @@
* branch is more expensive than subtracting zero from the result.
*
* Strided read is left unaligned to avoid small fragments beyond
- * the RPC boundary from needing an extra read RPC. */
+ * the RPC boundary from needing an extra read RPC.
+ */
if (ria->ria_pages == 0) {
long beyond_rpc = (ria->ria_start + ret) % PTLRPC_MAX_BRW_PAGES;
@@ -364,7 +367,7 @@
#define RAS_CDEBUG(ras) \
CDEBUG(D_READA, \
"lrp %lu cr %lu cp %lu ws %lu wl %lu nra %lu r %lu ri %lu" \
- "csr %lu sf %lu sp %lu sl %lu \n", \
+ "csr %lu sf %lu sp %lu sl %lu\n", \
ras->ras_last_readpage, ras->ras_consecutive_requests, \
ras->ras_consecutive_pages, ras->ras_window_start, \
ras->ras_window_len, ras->ras_next_readahead, \
@@ -378,9 +381,9 @@
unsigned long start = point - before, end = point + after;
if (start > point)
- start = 0;
+ start = 0;
if (end < point)
- end = ~0;
+ end = ~0;
return start <= index && index <= end;
}
@@ -473,7 +476,7 @@
const char *msg = NULL;
vmpage = grab_cache_page_nowait(mapping, index);
- if (vmpage != NULL) {
+ if (vmpage) {
/* Check if vmpage was truncated or reclaimed */
if (vmpage->mapping == mapping) {
page = cl_page_find(env, clob, vmpage->index,
@@ -500,7 +503,7 @@
which = RA_STAT_FAILED_GRAB_PAGE;
msg = "g_c_p_n failed";
}
- if (msg != NULL) {
+ if (msg) {
ll_ra_stats_inc(mapping, which);
CDEBUG(D_READA, "%s\n", msg);
}
@@ -515,13 +518,15 @@
/* Limit this to the blocksize instead of PTLRPC_BRW_MAX_SIZE, since we don't
* know what the actual RPC size is. If this needs to change, it makes more
* sense to tune the i_blkbits value for the file based on the OSTs it is
- * striped over, rather than having a constant value for all files here. */
+ * striped over, rather than having a constant value for all files here.
+ */
/* RAS_INCREASE_STEP should be (1UL << (inode->i_blkbits - PAGE_CACHE_SHIFT)).
* Temporarily set RAS_INCREASE_STEP to 1MB. After 4MB RPC is enabled
* by default, this should be adjusted corresponding with max_read_ahead_mb
* and max_read_ahead_per_file_mb otherwise the readahead budget can be used
- * up quickly which will affect read performance significantly. See LU-2816 */
+ * up quickly which will affect read performance significantly. See LU-2816
+ */
#define RAS_INCREASE_STEP(inode) (ONE_MB_BRW_SIZE >> PAGE_CACHE_SHIFT)
static inline int stride_io_mode(struct ll_readahead_state *ras)
@@ -570,7 +575,7 @@
if (end_left > st_pgs)
end_left = st_pgs;
- CDEBUG(D_READA, "start %llu, end %llu start_left %lu end_left %lu \n",
+ CDEBUG(D_READA, "start %llu, end %llu start_left %lu end_left %lu\n",
start, end, start_left, end_left);
if (start == end)
@@ -600,7 +605,8 @@
/* If ria_length == ria_pages, it means non-stride I/O mode,
* idx should always inside read-ahead window in this case
* For stride I/O mode, just check whether the idx is inside
- * the ria_pages. */
+ * the ria_pages.
+ */
return ria->ria_length == 0 || ria->ria_length == ria->ria_pages ||
(idx >= ria->ria_stoff && (idx - ria->ria_stoff) %
ria->ria_length < ria->ria_pages);
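The stride-window test above reduces to one modular check: an index is inside the window when its offset within a stride period lands in the first ria_pages pages of that period. A standalone version of the check with sample numbers:

#include <stdio.h>

static int index_in_stride(unsigned long idx, unsigned long stoff,
                           unsigned long length, unsigned long pages)
{
        if (length == 0 || length == pages)     /* non-stride read-ahead */
                return 1;
        return idx >= stoff && (idx - stoff) % length < pages;
}

int main(void)
{
        /* stride starts at page 10, period of 8 pages, 4 read per period */
        printf("idx 13 -> %d (in the read span)\n",
               index_in_stride(13, 10, 8, 4));
        printf("idx 15 -> %d (in the gap)\n",
               index_in_stride(15, 10, 8, 4));
        return 0;
}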
@@ -616,7 +622,7 @@
int rc, count = 0, stride_ria;
unsigned long page_idx;
- LASSERT(ria != NULL);
+ LASSERT(ria);
RIA_DEBUG(ria);
stride_ria = ria->ria_length > ria->ria_pages && ria->ria_pages > 0;
@@ -634,11 +640,13 @@
} else if (stride_ria) {
/* If it is not in the read-ahead window, and it is
* read-ahead mode, then check whether it should skip
- * the stride gap */
+ * the stride gap
+ */
pgoff_t offset;
/* FIXME: This assertion only is valid when it is for
* forward read-ahead, it will be fixed when backward
- * read-ahead is implemented */
+ * read-ahead is implemented
+ */
LASSERTF(page_idx > ria->ria_stoff, "Invalid page_idx %lu rs %lu re %lu ro %lu rl %lu rp %lu\n",
page_idx,
ria->ria_start, ria->ria_end, ria->ria_stoff,
@@ -647,7 +655,7 @@
offset = offset % (ria->ria_length);
if (offset > ria->ria_pages) {
page_idx += ria->ria_length - offset;
- CDEBUG(D_READA, "i %lu skip %lu \n", page_idx,
+ CDEBUG(D_READA, "i %lu skip %lu\n", page_idx,
ria->ria_length - offset);
continue;
}
@@ -699,7 +707,7 @@
bead = NULL;
/* Enlarge the RA window to encompass the full read */
- if (bead != NULL && ras->ras_window_start + ras->ras_window_len <
+ if (bead && ras->ras_window_start + ras->ras_window_len <
bead->lrr_start + bead->lrr_count) {
ras->ras_window_len = bead->lrr_start + bead->lrr_count -
ras->ras_window_start;
@@ -721,7 +729,8 @@
*/
/* Note: we only trim the RPC, instead of extending the RPC
* to the boundary, so to avoid reading too much pages during
- * random reading. */
+ * random reading.
+ */
rpc_boundary = (end + 1) & (~(PTLRPC_MAX_BRW_PAGES - 1));
if (rpc_boundary > 0)
rpc_boundary--;
@@ -774,8 +783,9 @@
* the ras we need to go back and update the ras so that the
* next read-ahead tries from where we left off. we only do so
* if the region we failed to issue read-ahead on is still ahead
- * of the app and behind the next index to start read-ahead from */
- CDEBUG(D_READA, "ra_end %lu end %lu stride end %lu \n",
+ * of the app and behind the next index to start read-ahead from
+ */
+ CDEBUG(D_READA, "ra_end %lu end %lu stride end %lu\n",
ra_end, end, ria->ria_end);
if (ra_end != end + 1) {
@@ -880,7 +890,8 @@
}
/* Stride Read-ahead window will be increased inc_len according to
- * stride I/O pattern */
+ * stride I/O pattern
+ */
static void ras_stride_increase_window(struct ll_readahead_state *ras,
struct ll_ra_info *ra,
unsigned long inc_len)
@@ -951,7 +962,8 @@
* or reads to some other part of the file. Secondly if we get a
* read-ahead miss that we think we've previously issued. This can
* be a symptom of there being so many read-ahead pages that the VM is
- * reclaiming it before we get to it. */
+ * reclaiming them before we get to them.
+ */
if (!index_in_window(index, ras->ras_last_readpage, 8, 8)) {
zero = 1;
ll_ra_stats_inc_sbi(sbi, RA_STAT_DISTANT_READPAGE);
@@ -968,7 +980,8 @@
* file up to ra_max_pages_per_file. This is simply a best effort
* and only occurs once per open file. Normal RA behavior is reverted
* to for subsequent IO. The mmap case does not increment
- * ras_requests and thus can never trigger this behavior. */
+ * ras_requests and thus can never trigger this behavior.
+ */
if (ras->ras_requests == 2 && !ras->ras_request_index) {
__u64 kms_pages;
@@ -1014,14 +1027,16 @@
stride_io_mode(ras)) {
/* If stride-RA hit cache miss, the stride detector
* will not be reset to avoid the overhead of
- *redetecting read-ahead mode */
+ * redetecting read-ahead mode
+ */
if (index != ras->ras_last_readpage + 1)
ras->ras_consecutive_pages = 0;
ras_reset(inode, ras, index);
RAS_CDEBUG(ras);
} else {
/* Reset both stride window and normal RA
- * window */
+ * window
+ */
ras_reset(inode, ras, index);
ras->ras_consecutive_pages++;
ras_stride_reset(ras);
@@ -1030,7 +1045,8 @@
} else if (stride_io_mode(ras)) {
/* If this is contiguous read but in stride I/O mode
* currently, check whether stride step still is valid,
- * if invalid, it will reset the stride ra window*/
+ * if invalid, it will reset the stride ra window
+ */
if (!index_in_stride_window(ras, index)) {
/* Shrink stride read-ahead window to be zero */
ras_stride_reset(ras);
@@ -1046,7 +1062,8 @@
if (stride_io_mode(ras))
/* Since stride readahead is sensitive to the offset
* of read-ahead, we use the original offset here,
- * instead of ras_window_start, which is RPC aligned */
+ * instead of ras_window_start, which is RPC aligned
+ */
ras->ras_next_readahead = max(index, ras->ras_next_readahead);
else
ras->ras_next_readahead = max(ras->ras_window_start,
@@ -1054,7 +1071,8 @@
RAS_CDEBUG(ras);
/* Trigger RA in the mmap case where ras_consecutive_requests
- * is not incremented and thus can't be used to trigger RA */
+ * is not incremented and thus can't be used to trigger RA
+ */
if (!ras->ras_window_len && ras->ras_consecutive_pages == 4) {
ras->ras_window_len = RAS_INCREASE_STEP(inode);
goto out_unlock;
@@ -1100,7 +1118,7 @@
LASSERT(PageLocked(vmpage));
LASSERT(!PageWriteback(vmpage));
- LASSERT(ll_i2dtexp(inode) != NULL);
+ LASSERT(ll_i2dtexp(inode));
env = cl_env_nested_get(&nest);
if (IS_ERR(env)) {
@@ -1109,7 +1127,7 @@
}
clob = ll_i2info(inode)->lli_clob;
- LASSERT(clob != NULL);
+ LASSERT(clob);
io = ccc_env_thread_io(env);
io->ci_obj = clob;
@@ -1152,14 +1170,16 @@
/* Flush page failed because the extent is being written out.
* Wait for the write of extent to be finished to avoid
* breaking kernel which assumes ->writepage should mark
- * PageWriteback or clean the page. */
+ * PageWriteback or clean the page.
+ */
result = cl_sync_file_range(inode, offset,
offset + PAGE_CACHE_SIZE - 1,
CL_FSYNC_LOCAL, 1);
if (result > 0) {
/* actually we may have written more than one page.
* decreasing this page because the caller will count
- * it. */
+ * it.
+ */
wbc->nr_to_write -= result - 1;
result = 0;
}
@@ -1209,7 +1229,8 @@
if (sbi->ll_umounting)
/* if the mountpoint is being umounted, all pages have to be
* evicted to avoid hitting LBUG when truncate_inode_pages()
- * is called later on. */
+ * is called later on.
+ */
ignore_layout = 1;
result = cl_sync_file_range(inode, start, end, mode, ignore_layout);
if (result > 0) {
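The ll_ra_count_get() race described near the top of this file (several threads computing a budget, losing at atomic_add_return(), and ending up with a zero window) comes down to a grab-then-refund pattern on an atomic counter. A userspace model in C11 atomics:

#include <stdatomic.h>
#include <stdio.h>

static atomic_long ra_cur_pages;
static const long ra_max_pages = 1024;

static long ra_count_get(long want)
{
        /* grab optimistically, refund on overshoot: a racing loser
         * sees a zero window even though some budget may remain */
        if (atomic_fetch_add(&ra_cur_pages, want) + want > ra_max_pages) {
                atomic_fetch_sub(&ra_cur_pages, want);
                return 0;
        }
        return want;
}

int main(void)
{
        long first = ra_count_get(1000);
        long second = ra_count_get(100);

        printf("first %ld, second %ld\n", first, second);
        return 0;
}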
diff --git a/drivers/staging/lustre/lustre/llite/rw26.c b/drivers/staging/lustre/lustre/llite/rw26.c
index 711fda9..b0f7ffa 100644
--- a/drivers/staging/lustre/lustre/llite/rw26.c
+++ b/drivers/staging/lustre/lustre/llite/rw26.c
@@ -92,9 +92,9 @@
if (!IS_ERR(env)) {
inode = vmpage->mapping->host;
obj = ll_i2info(inode)->lli_clob;
- if (obj != NULL) {
+ if (obj) {
page = cl_vmpage_page(vmpage, obj);
- if (page != NULL) {
+ if (page) {
lu_ref_add(&page->cp_reference,
"delete", vmpage);
cl_page_delete(env, page);
@@ -128,11 +128,11 @@
return 0;
mapping = vmpage->mapping;
- if (mapping == NULL)
+ if (!mapping)
return 1;
obj = ll_i2info(mapping->host)->lli_clob;
- if (obj == NULL)
+ if (!obj)
return 1;
/* 1 for page allocator, 1 for cl_page and 1 for page cache */
@@ -145,12 +145,13 @@
/* If we can't allocate an env we won't call cl_page_put()
* later on which further means it's impossible to drop
* page refcount by cl_page, so ask kernel to not free
- * this page. */
+ * this page.
+ */
return 0;
page = cl_vmpage_page(vmpage, obj);
- result = page == NULL;
- if (page != NULL) {
+ result = !page;
+ if (page) {
if (!cl_page_in_use(page)) {
result = 1;
cl_page_delete(env, page);
@@ -212,7 +213,8 @@
}
/* ll_free_user_pages - tear down page struct array
- * @pages: array of page struct pointers underlying target buffer */
+ * @pages: array of page struct pointers underlying target buffer
+ */
static void ll_free_user_pages(struct page **pages, int npages, int do_dirty)
{
int i;
@@ -246,7 +248,7 @@
cl_2queue_init(queue);
for (i = 0; i < page_count; i++) {
if (pv->ldp_offsets)
- file_offset = pv->ldp_offsets[i];
+ file_offset = pv->ldp_offsets[i];
LASSERT(!(file_offset & (page_size - 1)));
clp = cl_page_find(env, obj, cl_index(obj, file_offset),
@@ -266,7 +268,8 @@
do_io = true;
/* check the page type: if the page is a host page, then do
- * write directly */
+ * write directly
+ */
if (clp->cp_type == CPT_CACHEABLE) {
struct page *vmpage = cl_page_vmpage(env, clp);
struct page *src_page;
@@ -284,14 +287,16 @@
kunmap_atomic(src);
/* make sure page will be added to the transfer by
- * cl_io_submit()->...->vvp_page_prep_write(). */
+ * cl_io_submit()->...->vvp_page_prep_write().
+ */
if (rw == WRITE)
set_page_dirty(vmpage);
if (rw == READ) {
/* do not issue the page for read, since it
* may reread a ra page which has NOT uptodate
- * bit set. */
+ * bit set.
+ */
cl_page_disown(env, io, clp);
do_io = false;
}
@@ -359,7 +364,8 @@
* kmalloc limit. We need to fit all of the brw_page structs, each one
* representing PAGE_SIZE worth of user data, into a single buffer, and
* then truncate this to be a full-sized RPC. For 4kB PAGE_SIZE this is
- * up to 22MB for 128kB kmalloc and up to 682MB for 4MB kmalloc. */
+ * up to 22MB for 128kB kmalloc and up to 682MB for 4MB kmalloc.
+ */
#define MAX_DIO_SIZE ((MAX_MALLOC / sizeof(struct brw_page) * PAGE_CACHE_SIZE) & \
~(DT_MAX_BRW_SIZE - 1))
static ssize_t ll_direct_IO_26(struct kiocb *iocb, struct iov_iter *iter,
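
The sizes quoted in the MAX_DIO_SIZE comment above are easy to verify if sizeof(struct brw_page) is 24 bytes on 64-bit (plausible for an offset, a page pointer and two 32-bit fields, but an assumption here): a 128kB kmalloc holds 5461 descriptors covering about 21.3MB of user data (the comment rounds to 22MB), and a 4MB kmalloc covers exactly the 682MB quoted. A quick check in plain C:

    #include <stdio.h>

    int main(void)
    {
        unsigned long page_size = 4096;
        unsigned long brw_page_size = 24;    /* assumed sizeof(struct brw_page) */
        unsigned long kmalloc_128k = 128UL * 1024;
        unsigned long kmalloc_4m = 4UL * 1024 * 1024;

        printf("128kB kmalloc -> ~%lu MB of user data\n",
               kmalloc_128k / brw_page_size * page_size >> 20);
        printf("4MB kmalloc   -> ~%lu MB of user data\n",
               kmalloc_4m / brw_page_size * page_size >> 20);
        return 0;
    }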
@@ -396,7 +402,7 @@
env = cl_env_get(&refcheck);
LASSERT(!IS_ERR(env));
io = ccc_env_io(env)->cui_cl.cis_io;
- LASSERT(io != NULL);
+ LASSERT(io);
/* 0. Need locking between buffered and direct access. and race with
* size changing by concurrent truncates and writes.
@@ -433,7 +439,8 @@
* for the request, shrink it to a smaller
* PAGE_SIZE multiple and try again.
* We should always be able to kmalloc for a
- * page worth of page pointers = 4MB on i386. */
+ * page worth of page pointers = 4MB on i386.
+ */
if (result == -ENOMEM &&
size > (PAGE_CACHE_SIZE / sizeof(*pages)) *
PAGE_CACHE_SIZE) {
@@ -461,7 +468,7 @@
struct lov_stripe_md *lsm;
lsm = ccc_inode_lsm_get(inode);
- LASSERT(lsm != NULL);
+ LASSERT(lsm);
lov_stripe_lock(lsm);
obd_adjust_kms(ll_i2dtexp(inode), lsm, file_offset, 0);
lov_stripe_unlock(lsm);
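
One detail worth keeping in mind from the direct I/O hunks above: on -ENOMEM the request is not failed outright but halved (kept a PAGE_SIZE multiple) and retried, with the floor at one page's worth of page pointers, i.e. (4096 / 4) * 4kB = 4MB with 32-bit pointers as the comment says, or 2MB with 64-bit pointers. The loop reduces to something like this sketch, where demo_submit_chunk() is hypothetical and stands in for one pass of the real submission code:

    #include <linux/kernel.h>
    #include <linux/mm.h>

    static ssize_t demo_submit_chunk(size_t size)
    {
        /* hypothetical stand-in for one submission pass */
        return -ENOMEM;
    }

    static ssize_t demo_dio(size_t size)
    {
        /* one page of page pointers, the retry floor */
        const size_t floor = PAGE_SIZE / sizeof(struct page *) * PAGE_SIZE;
        ssize_t result;

        while (1) {
            result = demo_submit_chunk(size);
            if (result != -ENOMEM || size <= floor)
                return result;
            /* halve, rounded up to a PAGE_SIZE multiple */
            size = round_up(size / 2, PAGE_SIZE);
        }
    }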
diff --git a/drivers/staging/lustre/lustre/llite/statahead.c b/drivers/staging/lustre/lustre/llite/statahead.c
index 88ffd8e..d4a3722 100644
--- a/drivers/staging/lustre/lustre/llite/statahead.c
+++ b/drivers/staging/lustre/lustre/llite/statahead.c
@@ -49,13 +49,13 @@
#define SA_OMITTED_ENTRY_MAX 8ULL
-typedef enum {
+enum se_stat {
/** negative values are for error cases */
SA_ENTRY_INIT = 0, /** init entry */
SA_ENTRY_SUCC = 1, /** stat succeed */
SA_ENTRY_INVA = 2, /** invalid entry */
SA_ENTRY_DEST = 3, /** entry to be destroyed */
-} se_stat_t;
+};
struct ll_sa_entry {
/* link into sai->sai_entries */
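
This hunk drops a private typedef in favor of a plain enum, which checkpatch asks for in new code: the enum keyword stays visible at every use site instead of hiding behind a _t name. The follow-on hunks update the two helpers that took se_stat_t, and the lmv hunks further down give ldlm_mode_t, ldlm_type_t and ldlm_cancel_flags_t the same treatment. In miniature (illustrative names):

    enum demo_state {
        DEMO_INIT,
        DEMO_DONE,
    };

    static void demo_advance(enum demo_state *s)
    {
        /* the caller sees "enum demo_state", not an opaque demo_state_t */
        if (*s == DEMO_INIT)
            *s = DEMO_DONE;
    }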
@@ -71,7 +71,7 @@
/* low layer ldlm lock handle */
__u64 se_handle;
/* entry status */
- se_stat_t se_stat;
+ enum se_stat se_stat;
/* entry size, contains name */
int se_size;
/* pointer to async getattr enqueue info */
@@ -130,7 +130,7 @@
static inline int agl_should_run(struct ll_statahead_info *sai,
struct inode *inode)
{
- return (inode != NULL && S_ISREG(inode->i_mode) && sai->sai_agl_valid);
+ return (inode && S_ISREG(inode->i_mode) && sai->sai_agl_valid);
}
static inline int sa_sent_full(struct ll_statahead_info *sai)
@@ -366,7 +366,7 @@
*/
static void
do_sa_entry_to_stated(struct ll_statahead_info *sai,
- struct ll_sa_entry *entry, se_stat_t stat)
+ struct ll_sa_entry *entry, enum se_stat stat)
{
struct ll_sa_entry *se;
struct list_head *pos = &sai->sai_entries_stated;
@@ -392,7 +392,7 @@
*/
static int
ll_sa_entry_to_stated(struct ll_statahead_info *sai,
- struct ll_sa_entry *entry, se_stat_t stat)
+ struct ll_sa_entry *entry, enum se_stat stat)
{
struct ll_inode_info *lli = ll_i2info(sai->sai_inode);
int ret = 1;
@@ -494,12 +494,13 @@
if (unlikely(atomic_read(&sai->sai_refcount) > 0)) {
/* It is race case, the interpret callback just hold
- * a reference count */
+ * a reference count
+ */
spin_unlock(&lli->lli_sa_lock);
return;
}
- LASSERT(lli->lli_opendir_key == NULL);
+ LASSERT(!lli->lli_opendir_key);
LASSERT(thread_is_stopped(&sai->sai_thread));
LASSERT(thread_is_stopped(&sai->sai_agl_thread));
@@ -618,20 +619,21 @@
it = &minfo->mi_it;
req = entry->se_req;
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EFAULT;
goto out;
}
child = entry->se_inode;
- if (child == NULL) {
+ if (!child) {
/*
* lookup.
*/
LASSERT(fid_is_zero(&minfo->mi_data.op_fid2));
/* XXX: No fid in reply, this is probably cross-ref case.
- * SA can't handle it yet. */
+ * SA can't handle it yet.
+ */
if (body->valid & OBD_MD_MDS) {
rc = -EAGAIN;
goto out;
@@ -672,7 +674,8 @@
/* The "ll_sa_entry_to_stated()" will drop related ldlm ibits lock
* reference count by calling "ll_intent_drop_lock()" in spite of the
* above operations failed or not. Do not worry about calling
- * "ll_intent_drop_lock()" more than once. */
+ * "ll_intent_drop_lock()" more than once.
+ */
rc = ll_sa_entry_to_stated(sai, entry,
rc < 0 ? SA_ENTRY_INVA : SA_ENTRY_SUCC);
if (rc == 0 && entry->se_index == sai->sai_index_wait)
@@ -698,14 +701,15 @@
/* release ibits lock ASAP to avoid deadlock when statahead
* thread enqueues lock on parent in readdir and another
* process enqueues lock on child with parent lock held, eg.
- * unlink. */
+ * unlink.
+ */
handle = it->d.lustre.it_lock_handle;
ll_intent_drop_lock(it);
}
spin_lock(&lli->lli_sa_lock);
/* stale entry */
- if (unlikely(lli->lli_sai == NULL ||
+ if (unlikely(!lli->lli_sai ||
lli->lli_sai->sai_generation != minfo->mi_generation)) {
spin_unlock(&lli->lli_sa_lock);
rc = -ESTALE;
@@ -720,7 +724,7 @@
}
entry = ll_sa_entry_get_byindex(sai, minfo->mi_cbdata);
- if (entry == NULL) {
+ if (!entry) {
sai->sai_replied++;
spin_unlock(&lli->lli_sa_lock);
rc = -EIDRM;
@@ -736,7 +740,8 @@
/* Release the async ibits lock ASAP to avoid deadlock
* when statahead thread tries to enqueue lock on parent
* for readpage and other tries to enqueue lock on child
- * with parent's lock held, for example: unlink. */
+ * with parent's lock held, for example: unlink.
+ */
entry->se_handle = handle;
wakeup = list_empty(&sai->sai_entries_received);
list_add_tail(&entry->se_list,
@@ -756,7 +761,7 @@
iput(dir);
kfree(minfo);
}
- if (sai != NULL)
+ if (sai)
ll_sai_put(sai);
return rc;
}
@@ -853,7 +858,7 @@
struct ldlm_enqueue_info *einfo;
int rc;
- if (unlikely(inode == NULL))
+ if (unlikely(!inode))
return 1;
if (d_mountpoint(dentry))
@@ -908,10 +913,9 @@
rc = do_sa_revalidate(dir, entry, dentry);
if (rc == 1 && agl_should_run(sai, d_inode(dentry)))
ll_agl_add(sai, d_inode(dentry), entry->se_index);
- }
- if (dentry != NULL)
dput(dentry);
+ }
if (rc) {
rc1 = ll_sa_entry_to_stated(sai, entry,
@@ -948,7 +952,8 @@
if (thread_is_init(thread))
/* If someone else has changed the thread state
* (e.g. already changed to SVC_STOPPING), we can't just
- * blindly overwrite that setting. */
+ * blindly overwrite that setting.
+ */
thread_set_flags(thread, SVC_RUNNING);
spin_unlock(&plli->lli_agl_lock);
wake_up(&thread->t_ctl_waitq);
@@ -964,7 +969,8 @@
spin_lock(&plli->lli_agl_lock);
/* The statahead thread maybe help to process AGL entries,
- * so check whether list empty again. */
+ * so check whether list empty again.
+ */
if (!list_empty(&sai->sai_entries_agl)) {
clli = list_entry(sai->sai_entries_agl.next,
struct ll_inode_info, lli_agl_list);
@@ -1049,7 +1055,8 @@
if (thread_is_init(thread))
/* If someone else has changed the thread state
* (e.g. already changed to SVC_STOPPING), we can't just
- * blindly overwrite that setting. */
+ * blindly overwrite that setting.
+ */
thread_set_flags(thread, SVC_RUNNING);
spin_unlock(&plli->lli_sa_lock);
wake_up(&thread->t_ctl_waitq);
@@ -1070,7 +1077,7 @@
}
dp = page_address(page);
- for (ent = lu_dirent_start(dp); ent != NULL;
+ for (ent = lu_dirent_start(dp); ent;
ent = lu_dirent_next(ent)) {
__u64 hash;
int namelen;
@@ -1137,7 +1144,8 @@
/* If no window for metadata statahead, but there are
* some AGL entries to be triggered, then try to help
- * to process the AGL entries. */
+ * to process the AGL entries.
+ */
if (sa_sent_full(sai)) {
spin_lock(&plli->lli_agl_lock);
while (!list_empty(&sai->sai_entries_agl)) {
@@ -1274,7 +1282,7 @@
{
struct ll_inode_info *lli = ll_i2info(dir);
- if (unlikely(key == NULL))
+ if (unlikely(!key))
return;
spin_lock(&lli->lli_sa_lock);
@@ -1357,7 +1365,7 @@
}
dp = page_address(page);
- for (ent = lu_dirent_start(dp); ent != NULL;
+ for (ent = lu_dirent_start(dp); ent;
ent = lu_dirent_next(ent)) {
__u64 hash;
int namelen;
@@ -1365,7 +1373,8 @@
hash = le64_to_cpu(ent->lde_hash);
/* The ll_get_dir_page() can return any page containing
- * the given hash which may be not the start hash. */
+ * the given hash, which may not be the start hash.
+ */
if (unlikely(hash < pos))
continue;
@@ -1448,7 +1457,7 @@
struct ll_sb_info *sbi = ll_i2sbi(sai->sai_inode);
int hit;
- if (entry != NULL && entry->se_stat == SA_ENTRY_SUCC)
+ if (entry && entry->se_stat == SA_ENTRY_SUCC)
hit = 1;
else
hit = 0;
@@ -1498,6 +1507,7 @@
struct ll_sa_entry *entry;
struct ptlrpc_thread *thread;
struct l_wait_info lwi = { 0 };
+ struct task_struct *task;
int rc = 0;
struct ll_inode_info *plli;
@@ -1540,7 +1550,7 @@
}
entry = ll_sa_entry_get_byname(sai, &(*dentryp)->d_name);
- if (entry == NULL || only_unplug) {
+ if (!entry || only_unplug) {
ll_sai_unplug(sai, entry);
return entry ? 1 : -EAGAIN;
}
@@ -1559,8 +1569,7 @@
}
}
- if (entry->se_stat == SA_ENTRY_SUCC &&
- entry->se_inode != NULL) {
+ if (entry->se_stat == SA_ENTRY_SUCC && entry->se_inode) {
struct inode *inode = entry->se_inode;
struct lookup_intent it = { .it_op = IT_GETATTR,
.d.lustre.it_lock_handle =
@@ -1570,7 +1579,7 @@
rc = md_revalidate_lock(ll_i2mdexp(dir), &it,
ll_inode2fid(inode), &bits);
if (rc == 1) {
- if (d_inode(*dentryp) == NULL) {
+ if (!d_inode(*dentryp)) {
struct dentry *alias;
alias = ll_splice_alias(inode,
@@ -1616,14 +1625,14 @@
}
sai = ll_sai_alloc();
- if (sai == NULL) {
+ if (!sai) {
rc = -ENOMEM;
goto out;
}
sai->sai_ls_all = (rc == LS_FIRST_DOT_DE);
sai->sai_inode = igrab(dir);
- if (unlikely(sai->sai_inode == NULL)) {
+ if (unlikely(!sai->sai_inode)) {
CWARN("Do not start stat ahead on dying inode "DFID"\n",
PFID(&lli->lli_fid));
rc = -ESTALE;
@@ -1651,25 +1660,28 @@
* but as soon as we expose the sai by attaching it to the lli that
* default reference can be dropped by another thread calling
* ll_stop_statahead. We need to take a local reference to protect
- * the sai buffer while we intend to access it. */
+ * the sai buffer while we intend to access it.
+ */
ll_sai_get(sai);
lli->lli_sai = sai;
plli = ll_i2info(d_inode(parent));
- rc = PTR_ERR(kthread_run(ll_statahead_thread, parent,
- "ll_sa_%u", plli->lli_opendir_pid));
+ task = kthread_run(ll_statahead_thread, parent, "ll_sa_%u",
+ plli->lli_opendir_pid);
thread = &sai->sai_thread;
- if (IS_ERR_VALUE(rc)) {
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
CERROR("can't start ll_sa thread, rc: %d\n", rc);
dput(parent);
lli->lli_opendir_key = NULL;
thread_set_flags(thread, SVC_STOPPED);
thread_set_flags(&sai->sai_agl_thread, SVC_STOPPED);
/* Drop both our own local reference and the default
- * reference from allocation time. */
+ * reference from allocation time.
+ */
ll_sai_put(sai);
ll_sai_put(sai);
- LASSERT(lli->lli_sai == NULL);
+ LASSERT(!lli->lli_sai);
return -EAGAIN;
}
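
The last statahead hunk is a real fix rather than a style sweep: the old code passed kthread_run()'s result straight through PTR_ERR() and then tested the extracted int with IS_ERR_VALUE(), which is fragile (rc is a plain int, and the task pointer itself was never kept). Holding on to the struct task_struct and testing it with IS_ERR() is the canonical shape; a sketch with hypothetical names:

    #include <linux/err.h>
    #include <linux/kthread.h>

    static int demo_worker_fn(void *data)
    {
        return 0;    /* hypothetical thread body */
    }

    static int demo_start(void *data)
    {
        struct task_struct *task;

        task = kthread_run(demo_worker_fn, data, "demo_worker");
        if (IS_ERR(task))
            return PTR_ERR(task);    /* decode only after the pointer check */
        return 0;
    }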
diff --git a/drivers/staging/lustre/lustre/llite/super25.c b/drivers/staging/lustre/lustre/llite/super25.c
index 86c371e..63f090a 100644
--- a/drivers/staging/lustre/lustre/llite/super25.c
+++ b/drivers/staging/lustre/lustre/llite/super25.c
@@ -54,7 +54,7 @@
ll_stats_ops_tally(ll_s2sbi(sb), LPROC_LL_ALLOC_INODE, 1);
lli = kmem_cache_alloc(ll_inode_cachep, GFP_NOFS | __GFP_ZERO);
- if (lli == NULL)
+ if (!lli)
return NULL;
inode_init_once(&lli->lli_vfs_inode);
@@ -99,7 +99,8 @@
/* print an address of _any_ initialized kernel symbol from this
* module, to allow debugging with gdb that doesn't support data
- * symbols from modules.*/
+ * symbols from modules.
+ */
CDEBUG(D_INFO, "Lustre client module (%p).\n",
&lustre_super_operations);
@@ -108,26 +109,26 @@
sizeof(struct ll_inode_info),
0, SLAB_HWCACHE_ALIGN|SLAB_ACCOUNT,
NULL);
- if (ll_inode_cachep == NULL)
+ if (!ll_inode_cachep)
goto out_cache;
ll_file_data_slab = kmem_cache_create("ll_file_data",
sizeof(struct ll_file_data), 0,
SLAB_HWCACHE_ALIGN, NULL);
- if (ll_file_data_slab == NULL)
+ if (!ll_file_data_slab)
goto out_cache;
ll_remote_perm_cachep = kmem_cache_create("ll_remote_perm_cache",
sizeof(struct ll_remote_perm),
0, 0, NULL);
- if (ll_remote_perm_cachep == NULL)
+ if (!ll_remote_perm_cachep)
goto out_cache;
ll_rmtperm_hash_cachep = kmem_cache_create("ll_rmtperm_hash_cache",
REMOTE_PERM_HASHSIZE *
sizeof(struct list_head),
0, 0, NULL);
- if (ll_rmtperm_hash_cachep == NULL)
+ if (!ll_rmtperm_hash_cachep)
goto out_cache;
llite_root = debugfs_create_dir("llite", debugfs_lustre_root);
@@ -146,7 +147,8 @@
cfs_get_random_bytes(seed, sizeof(seed));
/* Nodes with small feet have little entropy. The NID for this
- * node gives the most entropy in the low bits */
+ * node gives the most entropy in the low bits
+ */
for (i = 0;; i++) {
if (LNetGetId(i, &lnet_id) == -ENOENT)
break;
diff --git a/drivers/staging/lustre/lustre/llite/symlink.c b/drivers/staging/lustre/lustre/llite/symlink.c
index 2610348..4435a63 100644
--- a/drivers/staging/lustre/lustre/llite/symlink.c
+++ b/drivers/staging/lustre/lustre/llite/symlink.c
@@ -59,7 +59,8 @@
*symname = lli->lli_symlink_name;
/* If the total CDEBUG() size is larger than a page, it
* will print a warning to the console, avoid this by
- * printing just the last part of the symlink. */
+ * printing just the last part of the symlink.
+ */
CDEBUG(D_INODE, "using cached symlink %s%.*s, len = %d\n",
print_limit < symlen ? "..." : "", print_limit,
(*symname) + symlen - print_limit, symlen);
@@ -81,7 +82,6 @@
}
body = req_capsule_server_get(&(*request)->rq_pill, &RMF_MDT_BODY);
- LASSERT(body != NULL);
if ((body->valid & OBD_MD_LINKNAME) == 0) {
CERROR("OBD_MD_LINKNAME not set on reply\n");
rc = -EPROTO;
@@ -97,7 +97,7 @@
}
*symname = req_capsule_server_get(&(*request)->rq_pill, &RMF_MDT_MD);
- if (*symname == NULL ||
+ if (!*symname ||
strnlen(*symname, symlen) != symlen - 1) {
/* not full/NULL terminated */
CERROR("inode %lu: symlink not NULL terminated string of length %d\n",
diff --git a/drivers/staging/lustre/lustre/llite/vvp_dev.c b/drivers/staging/lustre/lustre/llite/vvp_dev.c
index fdca4ec..6075ccf 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_dev.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_dev.c
@@ -80,7 +80,7 @@
struct vvp_thread_info *info;
info = kmem_cache_alloc(vvp_thread_kmem, GFP_NOFS | __GFP_ZERO);
- if (info == NULL)
+ if (!info)
info = ERR_PTR(-ENOMEM);
return info;
}
@@ -99,7 +99,7 @@
struct vvp_session *session;
session = kmem_cache_alloc(vvp_session_kmem, GFP_NOFS | __GFP_ZERO);
- if (session == NULL)
+ if (!session)
session = ERR_PTR(-ENOMEM);
return session;
}
@@ -228,7 +228,7 @@
if (!IS_ERR(env)) {
cld = sbi->ll_cl;
- if (cld != NULL) {
+ if (cld) {
cl_stack_fini(env, cld);
sbi->ll_cl = NULL;
sbi->ll_site = NULL;
@@ -325,11 +325,11 @@
cfs_hash_hlist_for_each(dev->ld_site->ls_obj_hash, id->vpi_bucket,
vvp_pgcache_obj_get, id);
- if (id->vpi_obj != NULL) {
+ if (id->vpi_obj) {
struct lu_object *lu_obj;
lu_obj = lu_object_locate(id->vpi_obj, dev->ld_type);
- if (lu_obj != NULL) {
+ if (lu_obj) {
lu_object_ref_add(lu_obj, "dump", current);
return lu2cl(lu_obj);
}
@@ -355,7 +355,7 @@
if (id.vpi_bucket >= CFS_HASH_NHLIST(site->ls_obj_hash))
return ~0ULL;
clob = vvp_pgcache_obj(env, dev, &id);
- if (clob != NULL) {
+ if (clob) {
struct cl_object_header *hdr;
int nr;
struct cl_page *pg;
@@ -443,7 +443,7 @@
vvp_pgcache_id_unpack(pos, &id);
sbi = f->private;
clob = vvp_pgcache_obj(env, &sbi->ll_cl->cd_lu_dev, &id);
- if (clob != NULL) {
+ if (clob) {
hdr = cl_object_header(clob);
spin_lock(&hdr->coh_page_guard);
@@ -452,7 +452,7 @@
seq_printf(f, "%8x@"DFID": ",
id.vpi_index, PFID(&hdr->coh_lu.loh_fid));
- if (page != NULL) {
+ if (page) {
vvp_pgcache_page_show(env, f, page);
cl_page_put(env, page);
} else
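
A convention visible in the vvp key constructors above: allocation failure is folded into ERR_PTR(-ENOMEM) rather than returned as NULL, so the lu_context machinery needs only a single IS_ERR() check on every path. The pattern in miniature (struct and names hypothetical):

    #include <linux/err.h>
    #include <linux/slab.h>

    struct demo_info { int payload; };    /* hypothetical */

    static void *demo_key_init(void)
    {
        struct demo_info *info;

        info = kzalloc(sizeof(*info), GFP_NOFS);
        if (!info)
            return ERR_PTR(-ENOMEM);    /* never NULL to the caller */
        return info;
    }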
diff --git a/drivers/staging/lustre/lustre/llite/vvp_io.c b/drivers/staging/lustre/lustre/llite/vvp_io.c
index 0920ac6..cca4b6c 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_io.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_io.c
@@ -78,7 +78,8 @@
case CIT_READ:
case CIT_WRITE:
/* don't need lock here to check lli_layout_gen as we have held
- * extent lock and GROUP lock has to hold to swap layout */
+ * extent lock and GROUP lock has to hold to swap layout
+ */
if (ll_layout_version_get(lli) != cio->cui_layout_gen) {
io->ci_need_restart = 1;
/* this will return application a short read/write */
@@ -134,7 +135,8 @@
*/
rc = ll_layout_restore(ccc_object_inode(obj));
/* if restore registration failed, no restart,
- * we will return -ENODATA */
+ * we will return -ENODATA
+ */
/* The layout will change after restore, so we need to
* block on layout lock hold by the MDT
* as MDT will not send new layout in lvb (see LU-3124)
@@ -164,8 +166,7 @@
DFID" layout changed from %d to %d.\n",
PFID(lu_object_fid(&obj->co_lu)),
cio->cui_layout_gen, gen);
- /* today successful restore is the only possible
- * case */
+ /* today successful restore is the only possible case */
/* restore was done, clear restoring state */
ll_i2info(ccc_object_inode(obj))->lli_flags &=
~LLIF_FILE_RESTORING;
@@ -181,7 +182,7 @@
CLOBINVRNT(env, io->ci_obj, ccc_object_invariant(io->ci_obj));
- if (page != NULL) {
+ if (page) {
lu_ref_del(&page->cp_reference, "fault", io);
cl_page_put(env, page);
io->u.ci_fault.ft_page = NULL;
@@ -220,11 +221,11 @@
if (!cl_is_normalio(env, io))
return 0;
- if (vio->cui_iter == NULL) /* nfs or loop back device write */
+ if (!vio->cui_iter) /* nfs or loop back device write */
return 0;
/* No MM (e.g. NFS)? No vmas too. */
- if (mm == NULL)
+ if (!mm)
return 0;
iov_for_each(iov, i, *(vio->cui_iter)) {
@@ -456,7 +457,8 @@
if (cl_io_is_trunc(io))
/* Truncate in memory pages - they must be clean pages
- * because osc has already notified to destroy osc_extents. */
+ * because osc has already been notified to destroy osc_extents.
+ */
vvp_do_vmtruncate(inode, io->u.ci_setattr.sa_attr.lvb_size);
inode_unlock(inode);
@@ -529,7 +531,8 @@
vio->u.splice.cui_flags);
/* LU-1109: do splice read stripe by stripe otherwise if it
* may make nfsd stuck if this read occupied all internal pipe
- * buffers. */
+ * buffers.
+ */
io->ci_continue = 0;
break;
default:
@@ -587,7 +590,7 @@
CDEBUG(D_VFSTRACE, "write: [%lli, %lli)\n", pos, pos + (long long)cnt);
- if (cio->cui_iter == NULL) /* from a temp io in ll_cl_init(). */
+ if (!cio->cui_iter) /* from a temp io in ll_cl_init(). */
result = 0;
else
result = generic_file_write_iter(cio->cui_iocb, cio->cui_iter);
@@ -673,7 +676,7 @@
/* must return locked page */
if (fio->ft_mkwrite) {
- LASSERT(cfio->ft_vmpage != NULL);
+ LASSERT(cfio->ft_vmpage);
lock_page(cfio->ft_vmpage);
} else {
result = vvp_io_kernel_fault(cfio);
@@ -689,13 +692,15 @@
size = i_size_read(inode);
/* Though we have already held a cl_lock upon this page, but
- * it still can be truncated locally. */
+ * it still can be truncated locally.
+ */
if (unlikely((vmpage->mapping != inode->i_mapping) ||
(page_offset(vmpage) > size))) {
CDEBUG(D_PAGE, "llite: fault and truncate race happened!\n");
/* return +1 to stop cl_io_loop() and ll_fault() will catch
- * and retry. */
+ * and retry.
+ */
result = 1;
goto out;
}
@@ -736,7 +741,8 @@
}
/* if page is going to be written, we should add this page into cache
- * earlier. */
+ * earlier.
+ */
if (fio->ft_mkwrite) {
wait_on_page_writeback(vmpage);
if (set_page_dirty(vmpage)) {
@@ -750,7 +756,8 @@
/* Do not set Dirty bit here so that in case IO is
* started before the page is really made dirty, we
- * still have chance to detect it. */
+ * still have chance to detect it.
+ */
result = cl_page_cache_add(env, io, page, CRT_WRITE);
LASSERT(cl_page_is_owned(page, io));
@@ -792,7 +799,7 @@
out:
/* return unlocked vmpage to avoid deadlocking */
- if (vmpage != NULL)
+ if (vmpage)
unlock_page(vmpage);
cfio->fault.ft_flags &= ~VM_FAULT_LOCKED;
return result;
@@ -803,7 +810,8 @@
{
/* we should mark TOWRITE bit to each dirty page in radix tree to
* verify pages have been written, but this is difficult because of
- * race. */
+ * race.
+ */
return 0;
}
@@ -1003,7 +1011,7 @@
*
* (3) IO is batched up to the RPC size and is async until the
* client max cache is hit
- * (/proc/fs/lustre/osc/OSC.../max_dirty_mb)
+ * (/sys/fs/lustre/osc/OSC.../max_dirty_mb)
*
*/
if (!PageDirty(vmpage)) {
@@ -1153,7 +1161,8 @@
count = io->u.ci_rw.crw_count;
/* "If nbyte is 0, read() will return 0 and have no other
- * results." -- Single Unix Spec */
+ * results." -- Single Unix Spec
+ */
if (count == 0)
result = 1;
else
@@ -1173,20 +1182,23 @@
/* ignore layout change for generic CIT_MISC but not for glimpse.
* io context for glimpse must set ci_verify_layout to true,
- * see cl_glimpse_size0() for details. */
+ * see cl_glimpse_size0() for details.
+ */
if (io->ci_type == CIT_MISC && !io->ci_verify_layout)
io->ci_ignore_layout = 1;
/* Enqueue layout lock and get layout version. We need to do this
* even for operations requiring to open file, such as read and write,
- * because it might not grant layout lock in IT_OPEN. */
+ * because it might not grant layout lock in IT_OPEN.
+ */
if (result == 0 && !io->ci_ignore_layout) {
result = ll_layout_refresh(inode, &cio->cui_layout_gen);
if (result == -ENOENT)
/* If the inode on MDS has been removed, but the objects
* on OSTs haven't been destroyed (async unlink), layout
* fetch will return -ENOENT, we'd ignore this error
- * and continue with dirty flush. LU-3230. */
+ * and continue with dirty flush. LU-3230.
+ */
result = 0;
if (result < 0)
CERROR("%s: refresh file layout " DFID " error %d.\n",
diff --git a/drivers/staging/lustre/lustre/llite/vvp_object.c b/drivers/staging/lustre/lustre/llite/vvp_object.c
index c82714e..03c887d 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_object.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_object.c
@@ -137,7 +137,8 @@
* page may be stale due to layout change, and the process
* will never be notified.
* This operation is expensive but mmap processes have to pay
- * a price themselves. */
+ * a price themselves.
+ */
unmap_mapping_range(conf->coc_inode->i_mapping,
0, OBD_OBJECT_EOF, 0);
@@ -147,7 +148,7 @@
if (conf->coc_opc != OBJECT_CONF_SET)
return 0;
- if (conf->u.coc_md != NULL && conf->u.coc_md->lsm != NULL) {
+ if (conf->u.coc_md && conf->u.coc_md->lsm) {
CDEBUG(D_VFSTRACE, DFID ": layout version change: %u -> %u\n",
PFID(&lli->lli_fid), lli->lli_layout_gen,
conf->u.coc_md->lsm->lsm_layout_gen);
@@ -186,9 +187,8 @@
struct cl_object *obj = lli->lli_clob;
struct lu_object *lu;
- LASSERT(obj != NULL);
lu = lu_object_locate(obj->co_lu.lo_header, &vvp_device_type);
- LASSERT(lu != NULL);
+ LASSERT(lu);
return lu2ccc(lu);
}
diff --git a/drivers/staging/lustre/lustre/llite/vvp_page.c b/drivers/staging/lustre/lustre/llite/vvp_page.c
index a133475..f0b26e3 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_page.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_page.c
@@ -56,7 +56,7 @@
{
struct page *vmpage = cp->cpg_page;
- LASSERT(vmpage != NULL);
+ LASSERT(vmpage);
page_cache_release(vmpage);
}
@@ -81,7 +81,7 @@
struct ccc_page *vpg = cl2ccc_page(slice);
struct page *vmpage = vpg->cpg_page;
- LASSERT(vmpage != NULL);
+ LASSERT(vmpage);
if (nonblock) {
if (!trylock_page(vmpage))
return -EAGAIN;
@@ -105,7 +105,7 @@
{
struct page *vmpage = cl2vm_page(slice);
- LASSERT(vmpage != NULL);
+ LASSERT(vmpage);
LASSERT(PageLocked(vmpage));
wait_on_page_writeback(vmpage);
}
@@ -116,7 +116,7 @@
{
struct page *vmpage = cl2vm_page(slice);
- LASSERT(vmpage != NULL);
+ LASSERT(vmpage);
LASSERT(PageLocked(vmpage));
}
@@ -125,7 +125,7 @@
{
struct page *vmpage = cl2vm_page(slice);
- LASSERT(vmpage != NULL);
+ LASSERT(vmpage);
LASSERT(PageLocked(vmpage));
unlock_page(cl2vm_page(slice));
@@ -139,7 +139,7 @@
struct address_space *mapping;
struct ccc_page *cpg = cl2ccc_page(slice);
- LASSERT(vmpage != NULL);
+ LASSERT(vmpage);
LASSERT(PageLocked(vmpage));
mapping = vmpage->mapping;
@@ -161,7 +161,7 @@
struct page *vmpage = cl2vm_page(slice);
__u64 offset;
- LASSERT(vmpage != NULL);
+ LASSERT(vmpage);
LASSERT(PageLocked(vmpage));
offset = vmpage->index << PAGE_CACHE_SHIFT;
@@ -199,7 +199,7 @@
{
struct page *vmpage = cl2vm_page(slice);
- LASSERT(vmpage != NULL);
+ LASSERT(vmpage);
LASSERT(PageLocked(vmpage));
if (uptodate)
SetPageUptodate(vmpage);
@@ -232,7 +232,8 @@
LASSERT(!PageDirty(vmpage));
/* ll_writepage path is not a sync write, so need to set page writeback
- * flag */
+ * flag
+ */
if (!pg->cp_sync_io)
set_page_writeback(vmpage);
@@ -290,7 +291,7 @@
} else
cp->cpg_defer_uptodate = 0;
- if (page->cp_sync_io == NULL)
+ if (!page->cp_sync_io)
unlock_page(vmpage);
}
@@ -317,7 +318,7 @@
cp->cpg_write_queued = 0;
vvp_write_complete(cl2ccc(slice->cpl_obj), cp);
- if (pg->cp_sync_io != NULL) {
+ if (pg->cp_sync_io) {
LASSERT(PageLocked(vmpage));
LASSERT(!PageWriteback(vmpage));
} else {
@@ -356,15 +357,15 @@
lock_page(vmpage);
if (clear_page_dirty_for_io(vmpage)) {
LASSERT(pg->cp_state == CPS_CACHED);
- /* This actually clears the dirty bit in the radix
- * tree. */
+ /* This actually clears the dirty bit in the radix tree. */
set_page_writeback(vmpage);
vvp_write_pending(cl2ccc(slice->cpl_obj),
cl2ccc_page(slice));
CL_PAGE_HEADER(D_PAGE, env, pg, "readied\n");
} else if (pg->cp_state == CPS_PAGEOUT) {
/* is it possible for osc_flush_async_page() to already
- * make it ready? */
+ * make it ready?
+ */
result = -EALREADY;
} else {
CL_PAGE_DEBUG(D_ERROR, env, pg, "Unexpecting page state %d.\n",
@@ -385,7 +386,7 @@
(*printer)(env, cookie, LUSTRE_VVP_NAME "-page@%p(%d:%d:%d) vm@%p ",
vp, vp->cpg_defer_uptodate, vp->cpg_ra_used,
vp->cpg_write_queued, vmpage);
- if (vmpage != NULL) {
+ if (vmpage) {
(*printer)(env, cookie, "%lx %d:%d %lx %lu %slru",
(long)vmpage->flags, page_count(vmpage),
page_mapcount(vmpage), vmpage->private,
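
The pageout hunk above documents the standard writeout handshake: under the page lock, clear_page_dirty_for_io() clears the dirty bit (the radix-tree tag included, as the new one-line comment says) and set_page_writeback() then marks the page as in flight. Stripped of the cl_page state checks, the core is a sketch like this:

    #include <linux/mm.h>
    #include <linux/pagemap.h>

    static bool demo_make_ready(struct page *page)
    {
        bool ready = false;

        lock_page(page);
        if (clear_page_dirty_for_io(page)) {
            /* dirty bit gone, radix tree included; the page now
             * belongs to the I/O path
             */
            set_page_writeback(page);
            ready = true;
        }
        unlock_page(page);
        return ready;
    }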
diff --git a/drivers/staging/lustre/lustre/llite/xattr.c b/drivers/staging/lustre/lustre/llite/xattr.c
index fa477e7..c34df37 100644
--- a/drivers/staging/lustre/lustre/llite/xattr.c
+++ b/drivers/staging/lustre/lustre/llite/xattr.c
@@ -148,7 +148,7 @@
(xattr_type == XATTR_ACL_ACCESS_T ||
xattr_type == XATTR_ACL_DEFAULT_T)) {
rce = rct_search(&sbi->ll_rct, current_pid());
- if (rce == NULL ||
+ if (!rce ||
(rce->rce_ops != RMT_LSETFACL &&
rce->rce_ops != RMT_RSETFACL))
return -EOPNOTSUPP;
@@ -158,7 +158,6 @@
ee = et_search_del(&sbi->ll_et, current_pid(),
ll_inode2fid(inode), xattr_type);
- LASSERT(ee != NULL);
if (valid & OBD_MD_FLXATTR) {
acl = lustre_acl_xattr_merge2ext(
(posix_acl_xattr_header *)value,
@@ -196,7 +195,7 @@
* Release the posix ACL space.
*/
kfree(new_value);
- if (acl != NULL)
+ if (acl)
lustre_ext_acl_xattr_free(acl);
#endif
if (rc) {
@@ -238,11 +237,12 @@
/* Attributes that are saved via getxattr will always have
* the stripe_offset as 0. Instead, the MDS should be
- * allowed to pick the starting OST index. b=17846 */
- if (lump != NULL && lump->lmm_stripe_offset == 0)
+ * allowed to pick the starting OST index. b=17846
+ */
+ if (lump && lump->lmm_stripe_offset == 0)
lump->lmm_stripe_offset = -1;
- if (lump != NULL && S_ISREG(inode->i_mode)) {
+ if (lump && S_ISREG(inode->i_mode)) {
int flags = FMODE_WRITE;
int lum_size = (lump->lmm_magic == LOV_USER_MAGIC_V1) ?
sizeof(*lump) : sizeof(struct lov_user_md_v3);
@@ -324,7 +324,7 @@
(xattr_type == XATTR_ACL_ACCESS_T ||
xattr_type == XATTR_ACL_DEFAULT_T)) {
rce = rct_search(&sbi->ll_rct, current_pid());
- if (rce == NULL ||
+ if (!rce ||
(rce->rce_ops != RMT_LSETFACL &&
rce->rce_ops != RMT_LGETFACL &&
rce->rce_ops != RMT_RSETFACL &&
@@ -365,7 +365,7 @@
goto out_xattr;
/* Add "system.posix_acl_access" to the list */
- if (lli->lli_posix_acl != NULL && valid & OBD_MD_FLXATTRLS) {
+ if (lli->lli_posix_acl && valid & OBD_MD_FLXATTRLS) {
if (size == 0) {
rc += sizeof(XATTR_NAME_ACL_ACCESS);
} else if (size - rc >= sizeof(XATTR_NAME_ACL_ACCESS)) {
@@ -481,13 +481,14 @@
if (size == 0 && S_ISDIR(inode->i_mode)) {
/* XXX directory EA is fix for now, optimize to save
- * RPC transfer */
+ * RPC transfer
+ */
rc = sizeof(struct lov_user_md);
goto out;
}
lsm = ccc_inode_lsm_get(inode);
- if (lsm == NULL) {
+ if (!lsm) {
if (S_ISDIR(inode->i_mode)) {
rc = ll_dir_getstripe(inode, &lmm,
&lmmsize, &request);
@@ -496,7 +497,8 @@
}
} else {
/* LSM is present already after lookup/getattr call.
- * we need to grab layout lock once it is implemented */
+ * we need to grab layout lock once it is implemented
+ */
rc = obd_packmd(ll_i2dtexp(inode), &lmm, lsm);
lmmsize = rc;
}
@@ -509,7 +511,8 @@
/* used to call ll_get_max_mdsize() forward to get
* the maximum buffer size, while some apps (such as
* rsync 3.0.x) care much about the exact xattr value
- * size */
+ * size
+ */
rc = lmmsize;
goto out;
}
@@ -525,7 +528,8 @@
memcpy(lump, lmm, lmmsize);
/* do not return layout gen for getxattr otherwise it would
* confuse tar --xattr by recognizing layout gen as stripe
- * offset when the file is restored. See LU-2809. */
+ * offset when the file is restored. See LU-2809.
+ */
lump->lmm_layout_gen = 0;
rc = lmmsize;
@@ -559,7 +563,7 @@
if (rc < 0)
goto out;
- if (buffer != NULL) {
+ if (buffer) {
struct ll_sb_info *sbi = ll_i2sbi(inode);
char *xattr_name = buffer;
int xlen, rem = rc;
@@ -597,12 +601,12 @@
const size_t name_len = sizeof("lov") - 1;
const size_t total_len = prefix_len + name_len + 1;
- if (((rc + total_len) > size) && (buffer != NULL)) {
+ if (((rc + total_len) > size) && buffer) {
ptlrpc_req_finished(request);
return -ERANGE;
}
- if (buffer != NULL) {
+ if (buffer) {
buffer += rc;
memcpy(buffer, XATTR_LUSTRE_PREFIX, prefix_len);
memcpy(buffer + prefix_len, "lov", name_len);
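
The getxattr/listxattr hunks keep the usual two-pass sizing contract: a zero-size call reports the required length, and a buffer that turns out too small gets -ERANGE. Seen from userspace with plain libc (note the second call can still fail with ERANGE if attributes grew between the two passes):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/xattr.h>

    int main(int argc, char **argv)
    {
        ssize_t len;
        char *buf, *p;

        if (argc != 2)
            return 1;
        len = listxattr(argv[1], NULL, 0);    /* pass 1: size query */
        if (len <= 0)
            return len < 0;
        buf = malloc(len);
        if (!buf || listxattr(argv[1], buf, len) < 0)
            return 1;
        for (p = buf; p < buf + len; p += strlen(p) + 1)
            puts(p);    /* names are NUL-separated */
        free(buf);
        return 0;
    }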
diff --git a/drivers/staging/lustre/lustre/llite/xattr_cache.c b/drivers/staging/lustre/lustre/llite/xattr_cache.c
index d140276..460b7c5 100644
--- a/drivers/staging/lustre/lustre/llite/xattr_cache.c
+++ b/drivers/staging/lustre/lustre/llite/xattr_cache.c
@@ -23,7 +23,8 @@
*/
struct ll_xattr_entry {
struct list_head xe_list; /* protected with
- * lli_xattrs_list_rwsem */
+ * lli_xattrs_list_rwsem
+ */
char *xe_name; /* xattr name, \0-terminated */
char *xe_value; /* xattr value */
unsigned xe_namelen; /* strlen(xe_name) + 1 */
@@ -59,9 +60,6 @@
*/
static void ll_xattr_cache_init(struct ll_inode_info *lli)
{
-
- LASSERT(lli != NULL);
-
INIT_LIST_HEAD(&lli->lli_xattrs);
lli->lli_flags |= LLIF_XATTR_CACHE;
}
@@ -83,8 +81,7 @@
list_for_each_entry(entry, cache, xe_list) {
/* xattr_name == NULL means look for any entry */
- if (xattr_name == NULL ||
- strcmp(xattr_name, entry->xe_name) == 0) {
+ if (!xattr_name || strcmp(xattr_name, entry->xe_name) == 0) {
*xattr = entry;
CDEBUG(D_CACHE, "find: [%s]=%.*s\n",
entry->xe_name, entry->xe_vallen,
@@ -118,7 +115,7 @@
}
xattr = kmem_cache_alloc(xattr_kmem, GFP_NOFS | __GFP_ZERO);
- if (xattr == NULL) {
+ if (!xattr) {
CDEBUG(D_CACHE, "failed to allocate xattr\n");
return -ENOMEM;
}
@@ -270,7 +267,7 @@
struct lookup_intent *oit,
struct ptlrpc_request **req)
{
- ldlm_mode_t mode;
+ enum ldlm_mode mode;
struct lustre_handle lockh = { 0 };
struct md_op_data *op_data;
struct ll_inode_info *lli = ll_i2info(inode);
@@ -284,7 +281,8 @@
mutex_lock(&lli->lli_xattrs_enq_lock);
/* inode may have been shrunk and recreated, so data is gone, match lock
- * only when data exists. */
+ * only when data exists.
+ */
if (ll_xattr_cache_valid(lli)) {
/* Try matching first. */
mode = ll_take_md_lock(inode, MDS_INODELOCK_XATTR, &lockh, 0,
@@ -359,7 +357,7 @@
}
/* Matched but no cache? Cancelled on error by a parallel refill. */
- if (unlikely(req == NULL)) {
+ if (unlikely(!req)) {
CDEBUG(D_CACHE, "cancelled by a parallel getxattr\n");
rc = -EIO;
goto out_maybe_drop;
@@ -376,7 +374,7 @@
}
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- if (body == NULL) {
+ if (!body) {
CERROR("no MDT BODY in the refill xattr reply\n");
rc = -EPROTO;
goto out_destroy;
@@ -388,7 +386,7 @@
body->aclsize);
xsizes = req_capsule_server_sized_get(&req->rq_pill, &RMF_EAVALS_LENS,
body->max_mdsize * sizeof(__u32));
- if (xdata == NULL || xval == NULL || xsizes == NULL) {
+ if (!xdata || !xval || !xsizes) {
CERROR("wrong setxattr reply\n");
rc = -EPROTO;
goto out_destroy;
@@ -404,7 +402,7 @@
for (i = 0; i < body->max_mdsize; i++) {
CDEBUG(D_CACHE, "caching [%s]=%.*s\n", xdata, *xsizes, xval);
/* Perform consistency checks: attr names and vals in pill */
- if (memchr(xdata, 0, xtail - xdata) == NULL) {
+ if (!memchr(xdata, 0, xtail - xdata)) {
CERROR("xattr protocol violation (names are broken)\n");
rc = -EPROTO;
} else if (xval + *xsizes > xvtail) {
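
Worth noting in the refill path above: nothing in the packed (names, values, lengths) arrays from the reply is trusted. Each name must be NUL-terminated within its region and each value must end before xvtail, or the whole cache refill is rejected with -EPROTO. The shape of that walk, with hypothetical names:

    #include <linux/errno.h>
    #include <linux/string.h>
    #include <linux/types.h>

    static int demo_validate(const char *name, const char *ntail,
                             const char *val, const char *vtail,
                             const __u32 *sizes, int count)
    {
        int i;

        for (i = 0; i < count; i++) {
            const char *nul = memchr(name, 0, ntail - name);

            if (!nul)
                return -EPROTO;    /* name array is broken */
            if (val + sizes[i] > vtail)
                return -EPROTO;    /* value overruns the reply */
            name = nul + 1;
            val += sizes[i];
        }
        return 0;
    }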
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_fld.c b/drivers/staging/lustre/lustre/lmv/lmv_fld.c
index ee23592..378691b 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_fld.c
+++ b/drivers/staging/lustre/lustre/lmv/lmv_fld.c
@@ -58,7 +58,8 @@
int rc;
/* FIXME: Currently ZFS still use local seq for ROOT unfortunately, and
- * this fid_is_local check should be removed once LU-2240 is fixed */
+ * this fid_is_local check should be removed once LU-2240 is fixed
+ */
LASSERTF((fid_seq_in_fldb(fid_seq(fid)) ||
fid_seq_is_local_file(fid_seq(fid))) &&
fid_is_sane(fid), DFID" is insane!\n", PFID(fid));
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_intent.c b/drivers/staging/lustre/lustre/lmv/lmv_intent.c
index 66de27f..259b211 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_intent.c
+++ b/drivers/staging/lustre/lustre/lmv/lmv_intent.c
@@ -69,7 +69,7 @@
int rc = 0;
body = req_capsule_server_get(&(*reqp)->rq_pill, &RMF_MDT_BODY);
- if (body == NULL)
+ if (!body)
return -EPROTO;
LASSERT((body->valid & OBD_MD_MDS));
@@ -107,14 +107,16 @@
op_data->op_fid1 = body->fid1;
/* Sent the parent FID to the remote MDT */
- if (parent_fid != NULL) {
+ if (parent_fid) {
/* The parent fid is only for remote open to
* check whether the open is from OBF,
- * see mdt_cross_open */
+ * see mdt_cross_open
+ */
LASSERT(it->it_op & IT_OPEN);
op_data->op_fid2 = *parent_fid;
/* Add object FID to op_fid3, in case it needs to check stale
- * (M_CHECK_STALE), see mdc_finish_intent_lock */
+ * (M_CHECK_STALE), see mdc_finish_intent_lock
+ */
op_data->op_fid3 = body->fid1;
}
@@ -173,7 +175,8 @@
return PTR_ERR(tgt);
/* If it is ready to open the file by FID, do not need
- * allocate FID at all, otherwise it will confuse MDT */
+ * allocate FID at all, otherwise it will confuse MDT
+ */
if ((it->it_op & IT_CREAT) &&
!(it->it_flags & MDS_OPEN_BY_FID)) {
/*
@@ -204,7 +207,7 @@
return rc;
body = req_capsule_server_get(&(*reqp)->rq_pill, &RMF_MDT_BODY);
- if (body == NULL)
+ if (!body)
return -EPROTO;
/*
* Not cross-ref case, just get out of here.
@@ -270,7 +273,7 @@
rc = md_intent_lock(tgt->ltd_exp, op_data, lmm, lmmsize, it,
flags, reqp, cb_blocking, extra_lock_flags);
- if (rc < 0 || *reqp == NULL)
+ if (rc < 0 || !*reqp)
return rc;
/*
@@ -278,7 +281,7 @@
* remote inode. Let's check this.
*/
body = req_capsule_server_get(&(*reqp)->rq_pill, &RMF_MDT_BODY);
- if (body == NULL)
+ if (!body)
return -EPROTO;
/* Not cross-ref case, just get out of here. */
if (likely(!(body->valid & OBD_MD_MDS)))
@@ -299,7 +302,6 @@
struct obd_device *obd = exp->exp_obd;
int rc;
- LASSERT(it != NULL);
LASSERT(fid_is_sane(&op_data->op_fid1));
CDEBUG(D_INODE, "INTENT LOCK '%s' for '%*s' on "DFID"\n",
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_internal.h b/drivers/staging/lustre/lustre/lmv/lmv_internal.h
index eb8e673..041d30f33 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_internal.h
+++ b/drivers/staging/lustre/lustre/lmv/lmv_internal.h
@@ -66,7 +66,7 @@
struct mdt_body *body;
struct lmv_stripe_md *mea;
- LASSERT(req != NULL);
+ LASSERT(req);
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
@@ -75,8 +75,6 @@
mea = req_capsule_server_sized_get(&req->rq_pill, &RMF_MDT_MD,
body->eadatasize);
- LASSERT(mea != NULL);
-
if (mea->mea_count == 0)
return NULL;
if (mea->mea_magic != MEA_MAGIC_LAST_CHAR &&
@@ -101,7 +99,7 @@
int i;
for (i = 0; i < count; i++) {
- if (lmv->tgts[i] == NULL)
+ if (!lmv->tgts[i])
continue;
if (lmv->tgts[i]->ltd_idx == mds)
diff --git a/drivers/staging/lustre/lustre/lmv/lmv_obd.c b/drivers/staging/lustre/lustre/lmv/lmv_obd.c
index 733222c..67746c9 100644
--- a/drivers/staging/lustre/lustre/lmv/lmv_obd.c
+++ b/drivers/staging/lustre/lustre/lmv/lmv_obd.c
@@ -88,7 +88,7 @@
spin_lock(&lmv->lmv_lock);
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
tgt = lmv->tgts[i];
- if (tgt == NULL || tgt->ltd_exp == NULL)
+ if (!tgt || !tgt->ltd_exp)
continue;
CDEBUG(D_INFO, "Target idx %d is %s conn %#llx\n", i,
@@ -104,7 +104,7 @@
}
obd = class_exp2obd(tgt->ltd_exp);
- if (obd == NULL) {
+ if (!obd) {
rc = -ENOTCONN;
goto out_lmv_lock;
}
@@ -262,7 +262,7 @@
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
tgt = lmv->tgts[i];
- if (tgt == NULL || tgt->ltd_exp == NULL || tgt->ltd_active == 0)
+ if (!tgt || !tgt->ltd_exp || tgt->ltd_active == 0)
continue;
obd_set_info_async(NULL, tgt->ltd_exp, sizeof(KEY_INTERMDS),
@@ -302,8 +302,7 @@
return 0;
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
- if (lmv->tgts[i] == NULL ||
- lmv->tgts[i]->ltd_exp == NULL ||
+ if (!lmv->tgts[i] || !lmv->tgts[i]->ltd_exp ||
lmv->tgts[i]->ltd_active == 0) {
CWARN("%s: NULL export for %d\n", obd->obd_name, i);
continue;
@@ -410,7 +409,7 @@
static void lmv_del_target(struct lmv_obd *lmv, int index)
{
- if (lmv->tgts[index] == NULL)
+ if (!lmv->tgts[index])
return;
kfree(lmv->tgts[index]);
@@ -442,7 +441,7 @@
}
}
- if ((index < lmv->tgts_size) && (lmv->tgts[index] != NULL)) {
+ if ((index < lmv->tgts_size) && lmv->tgts[index]) {
tgt = lmv->tgts[index];
CERROR("%s: UUID %s already assigned at LOV target index %d: rc = %d\n",
obd->obd_name,
@@ -460,7 +459,7 @@
while (newsize < index + 1)
newsize <<= 1;
newtgts = kcalloc(newsize, sizeof(*newtgts), GFP_NOFS);
- if (newtgts == NULL) {
+ if (!newtgts) {
lmv_init_unlock(lmv);
return -ENOMEM;
}
@@ -539,11 +538,9 @@
CDEBUG(D_CONFIG, "Time to connect %s to %s\n",
lmv->cluuid.uuid, obd->obd_name);
- LASSERT(lmv->tgts != NULL);
-
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
tgt = lmv->tgts[i];
- if (tgt == NULL)
+ if (!tgt)
continue;
rc = lmv_connect_mdc(obd, tgt);
if (rc)
@@ -563,7 +560,7 @@
int rc2;
tgt = lmv->tgts[i];
- if (tgt == NULL)
+ if (!tgt)
continue;
tgt->ltd_active = 0;
if (tgt->ltd_exp) {
@@ -586,9 +583,6 @@
struct obd_device *mdc_obd;
int rc;
- LASSERT(tgt != NULL);
- LASSERT(obd != NULL);
-
mdc_obd = class_exp2obd(tgt->ltd_exp);
if (mdc_obd) {
@@ -641,7 +635,7 @@
goto out_local;
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
- if (lmv->tgts[i] == NULL || lmv->tgts[i]->ltd_exp == NULL)
+ if (!lmv->tgts[i] || !lmv->tgts[i]->ltd_exp)
continue;
lmv_disconnect_mdc(obd, lmv->tgts[i]);
@@ -685,8 +679,9 @@
goto out_fid2path;
/* If remote_gf != NULL, it means just building the
- * path on the remote MDT, copy this path segment to gf */
- if (remote_gf != NULL) {
+ * path on the remote MDT, copy this path segment to gf
+ */
+ if (remote_gf) {
struct getinfo_fid2path *ori_gf;
char *ptr;
@@ -716,7 +711,7 @@
goto out_fid2path;
/* sigh, has to go to another MDT to do path building further */
- if (remote_gf == NULL) {
+ if (!remote_gf) {
remote_gf_size = sizeof(*remote_gf) + PATH_MAX;
remote_gf = kzalloc(remote_gf_size, GFP_NOFS);
if (!remote_gf) {
@@ -803,7 +798,8 @@
/* unregister request (call from llapi_hsm_copytool_fini) */
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
/* best effort: try to clean as much as possible
- * (continue on error) */
+ * (continue on error)
+ */
obd_iocontrol(cmd, lmv->tgts[i]->ltd_exp, len, lk, uarg);
}
@@ -827,7 +823,8 @@
/* All or nothing: try to register to all MDS.
* In case of failure, unregister from previous MDS,
- * except if it because of inactive target. */
+ * except if it is because of an inactive target.
+ */
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
err = obd_iocontrol(cmd, lmv->tgts[i]->ltd_exp,
len, lk, uarg);
@@ -847,8 +844,8 @@
return rc;
}
/* else: transient error.
- * kuc will register to the missing MDT
- * when it is back */
+ * kuc will register to the missing MDT when it is back
+ */
} else {
any_set = true;
}
@@ -901,8 +898,7 @@
if (index >= count)
return -ENODEV;
- if (lmv->tgts[index] == NULL ||
- lmv->tgts[index]->ltd_active == 0)
+ if (!lmv->tgts[index] || lmv->tgts[index]->ltd_active == 0)
return -ENODATA;
mdc_obd = class_exp2obd(lmv->tgts[index]->ltd_exp);
@@ -936,18 +932,18 @@
return -EINVAL;
tgt = lmv->tgts[qctl->qc_idx];
- if (tgt == NULL || tgt->ltd_exp == NULL)
+ if (!tgt || !tgt->ltd_exp)
return -EINVAL;
} else if (qctl->qc_valid == QC_UUID) {
for (i = 0; i < count; i++) {
tgt = lmv->tgts[i];
- if (tgt == NULL)
+ if (!tgt)
continue;
if (!obd_uuid_equals(&tgt->ltd_uuid,
&qctl->obd_uuid))
continue;
- if (tgt->ltd_exp == NULL)
+ if (!tgt->ltd_exp)
return -EINVAL;
break;
@@ -981,8 +977,8 @@
if (icc->icc_mdtindex >= count)
return -ENODEV;
- if (lmv->tgts[icc->icc_mdtindex] == NULL ||
- lmv->tgts[icc->icc_mdtindex]->ltd_exp == NULL ||
+ if (!lmv->tgts[icc->icc_mdtindex] ||
+ !lmv->tgts[icc->icc_mdtindex]->ltd_exp ||
lmv->tgts[icc->icc_mdtindex]->ltd_active == 0)
return -ENODEV;
rc = obd_iocontrol(cmd, lmv->tgts[icc->icc_mdtindex]->ltd_exp,
@@ -990,7 +986,7 @@
break;
}
case LL_IOC_GET_CONNECT_FLAGS: {
- if (lmv->tgts[0] == NULL)
+ if (!lmv->tgts[0])
return -ENODATA;
rc = obd_iocontrol(cmd, lmv->tgts[0]->ltd_exp, len, karg, uarg);
break;
@@ -1007,10 +1003,10 @@
tgt = lmv_find_target(lmv, &op_data->op_fid1);
if (IS_ERR(tgt))
- return PTR_ERR(tgt);
+ return PTR_ERR(tgt);
- if (tgt->ltd_exp == NULL)
- return -EINVAL;
+ if (!tgt->ltd_exp)
+ return -EINVAL;
rc = obd_iocontrol(cmd, tgt->ltd_exp, len, karg, uarg);
break;
@@ -1035,7 +1031,8 @@
/* if the request is about a single fid
* or if there is a single MDS, no need to split
- * the request. */
+ * the request.
+ */
if (reqcount == 1 || count == 1) {
tgt = lmv_find_target(lmv,
&hur->hur_user_item[0].hui_fid);
@@ -1058,7 +1055,7 @@
hur_user_item[nr])
+ hur->hur_request.hr_data_len;
req = libcfs_kvzalloc(reqlen, GFP_NOFS);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
lmv_hsm_req_build(lmv, hur, lmv->tgts[i], req);
@@ -1084,7 +1081,7 @@
if (IS_ERR(tgt2))
return PTR_ERR(tgt2);
- if ((tgt1->ltd_exp == NULL) || (tgt2->ltd_exp == NULL))
+ if (!tgt1->ltd_exp || !tgt2->ltd_exp)
return -EINVAL;
/* only files on same MDT can have their layouts swapped */
@@ -1108,11 +1105,11 @@
struct obd_device *mdc_obd;
int err;
- if (lmv->tgts[i] == NULL ||
- lmv->tgts[i]->ltd_exp == NULL)
+ if (!lmv->tgts[i] || !lmv->tgts[i]->ltd_exp)
continue;
/* ll_umount_begin() sets force flag but for lmv, not
- * mdc. Let's pass it through */
+ * mdc. Let's pass it through
+ */
mdc_obd = class_exp2obd(lmv->tgts[i]->ltd_exp);
mdc_obd->obd_force = obddev->obd_force;
err = obd_iocontrol(cmd, lmv->tgts[i]->ltd_exp, len,
@@ -1189,7 +1186,7 @@
{
struct lmv_obd *lmv = &obd->u.lmv;
- LASSERT(mds != NULL);
+ LASSERT(mds);
if (lmv->desc.ld_tgt_count == 1) {
*mds = 0;
@@ -1219,7 +1216,8 @@
}
/* Allocate new fid on target according to operation type and parent
- * home mds. */
+ * home mds.
+ */
*mds = op_data->op_mds;
return 0;
}
@@ -1239,7 +1237,7 @@
*/
mutex_lock(&tgt->ltd_fid_mutex);
- if (tgt->ltd_active == 0 || tgt->ltd_exp == NULL) {
+ if (tgt->ltd_active == 0 || !tgt->ltd_exp) {
rc = -ENODEV;
goto out;
}
@@ -1266,8 +1264,8 @@
u32 mds = 0;
int rc;
- LASSERT(op_data != NULL);
- LASSERT(fid != NULL);
+ LASSERT(op_data);
+ LASSERT(fid);
rc = lmv_placement_policy(obd, op_data, &mds);
if (rc) {
@@ -1305,7 +1303,7 @@
}
lmv->tgts = kcalloc(32, sizeof(*lmv->tgts), GFP_NOFS);
- if (lmv->tgts == NULL)
+ if (!lmv->tgts)
return -ENOMEM;
lmv->tgts_size = 32;
@@ -1346,11 +1344,11 @@
struct lmv_obd *lmv = &obd->u.lmv;
fld_client_fini(&lmv->lmv_fld);
- if (lmv->tgts != NULL) {
+ if (lmv->tgts) {
int i;
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
- if (lmv->tgts[i] == NULL)
+ if (!lmv->tgts[i])
continue;
lmv_del_target(lmv, i);
}
@@ -1371,7 +1369,8 @@
switch (lcfg->lcfg_command) {
case LCFG_ADD_MDC:
/* modify_mdc_tgts add 0:lustre-clilmv 1:lustre-MDT0000_UUID
- * 2:0 3:1 4:lustre-MDT0000-mdc_UUID */
+ * 2:0 3:1 4:lustre-MDT0000-mdc_UUID
+ */
if (LUSTRE_CFG_BUFLEN(lcfg, 1) > sizeof(obd_uuid.uuid)) {
rc = -EINVAL;
goto out;
@@ -1416,7 +1415,7 @@
return -ENOMEM;
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
- if (lmv->tgts[i] == NULL || lmv->tgts[i]->ltd_exp == NULL)
+ if (!lmv->tgts[i] || !lmv->tgts[i]->ltd_exp)
continue;
rc = obd_statfs(env, lmv->tgts[i]->ltd_exp, temp,
@@ -1435,7 +1434,8 @@
* i.e. mount does not need the merged osfs
* from all of MDT.
* And also clients can be mounted as long as
- * MDT0 is in service*/
+ * MDT0 is in service
+ */
if (flags & OBD_STATFS_FOR_MDT0)
goto out_free_temp;
} else {
@@ -1561,7 +1561,7 @@
* space of MDT storing inode.
*/
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
- if (lmv->tgts[i] == NULL || lmv->tgts[i]->ltd_exp == NULL)
+ if (!lmv->tgts[i] || !lmv->tgts[i]->ltd_exp)
continue;
md_null_inode(lmv->tgts[i]->ltd_exp, fid);
}
@@ -1589,7 +1589,7 @@
* space of MDT storing inode.
*/
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
- if (lmv->tgts[i] == NULL || lmv->tgts[i]->ltd_exp == NULL)
+ if (!lmv->tgts[i] || !lmv->tgts[i]->ltd_exp)
continue;
rc = md_find_cbdata(lmv->tgts[i]->ltd_exp, fid, it, data);
if (rc)
@@ -1669,7 +1669,7 @@
cap_effective, rdev, request);
if (rc == 0) {
- if (*request == NULL)
+ if (!*request)
return rc;
CDEBUG(D_INODE, "Created - "DFID"\n", PFID(&op_data->op_fid2));
}
@@ -1715,7 +1715,6 @@
int pmode;
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- LASSERT(body != NULL);
if (!(body->valid & OBD_MD_MDS))
return 0;
@@ -1822,7 +1821,6 @@
body = req_capsule_server_get(&(*request)->rq_pill,
&RMF_MDT_BODY);
- LASSERT(body != NULL);
if (body->valid & OBD_MD_MDS) {
struct lu_fid rid = body->fid1;
@@ -1856,7 +1854,8 @@
NULL)
static int lmv_early_cancel(struct obd_export *exp, struct md_op_data *op_data,
- int op_tgt, ldlm_mode_t mode, int bits, int flag)
+ int op_tgt, enum ldlm_mode mode, int bits,
+ int flag)
{
struct lu_fid *fid = md_op_data_fid(op_data, flag);
struct obd_device *obd = exp->exp_obd;
@@ -2111,7 +2110,7 @@
while (--nlupgs > 0) {
ent = lu_dirent_start(dp);
- for (end_dirent = ent; ent != NULL;
+ for (end_dirent = ent; ent;
end_dirent = ent, ent = lu_dirent_next(ent))
;
@@ -2131,7 +2130,8 @@
break;
/* Enlarge the end entry lde_reclen from 0 to
- * first entry of next lu_dirpage. */
+ * first entry of next lu_dirpage.
+ */
LASSERT(le16_to_cpu(end_dirent->lde_reclen) == 0);
end_dirent->lde_reclen =
cpu_to_le16((char *)(dp->ldp_entries) -
@@ -2241,7 +2241,7 @@
return rc;
body = req_capsule_server_get(&(*request)->rq_pill, &RMF_MDT_BODY);
- if (body == NULL)
+ if (!body)
return -EPROTO;
/* Not cross-ref case, just get out of here. */
@@ -2269,7 +2269,8 @@
* 4. Then A will resend unlink RPC to MDT0. (retry 2nd times).
*
* In theory, it might try unlimited time here, but it should
- * be very rare case. */
+ * be a very rare case.
+ */
op_data->op_fid2 = body->fid1;
ptlrpc_req_finished(*request);
*request = NULL;
@@ -2284,7 +2285,8 @@
switch (stage) {
case OBD_CLEANUP_EARLY:
/* XXX: here should be calling obd_precleanup() down to
- * stack. */
+ * stack.
+ */
break;
case OBD_CLEANUP_EXPORTS:
fld_client_debugfs_fini(&lmv->lmv_fld);
@@ -2305,7 +2307,7 @@
int rc = 0;
obd = class_exp2obd(exp);
- if (obd == NULL) {
+ if (!obd) {
CDEBUG(D_IOCTL, "Invalid client cookie %#llx\n",
exp->exp_handle.h_cookie);
return -EINVAL;
@@ -2326,7 +2328,7 @@
/*
* All tgts should be connected when this gets called.
*/
- if (tgt == NULL || tgt->ltd_exp == NULL)
+ if (!tgt || !tgt->ltd_exp)
continue;
if (!obd_get_info(env, tgt->ltd_exp, keylen, key,
@@ -2369,7 +2371,7 @@
int rc = 0;
obd = class_exp2obd(exp);
- if (obd == NULL) {
+ if (!obd) {
CDEBUG(D_IOCTL, "Invalid client cookie %#llx\n",
exp->exp_handle.h_cookie);
return -EINVAL;
@@ -2382,7 +2384,7 @@
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
tgt = lmv->tgts[i];
- if (tgt == NULL || tgt->ltd_exp == NULL)
+ if (!tgt || !tgt->ltd_exp)
continue;
err = obd_set_info_async(env, tgt->ltd_exp,
@@ -2417,9 +2419,9 @@
return 0;
}
- if (*lmmp == NULL) {
+ if (!*lmmp) {
*lmmp = libcfs_kvzalloc(mea_size, GFP_NOFS);
- if (*lmmp == NULL)
+ if (!*lmmp)
return -ENOMEM;
}
@@ -2457,10 +2459,10 @@
__u32 magic;
mea_size = lmv_get_easize(lmv);
- if (lsmp == NULL)
+ if (!lsmp)
return mea_size;
- if (*lsmp != NULL && lmm == NULL) {
+ if (*lsmp && !lmm) {
kvfree(*tmea);
*lsmp = NULL;
return 0;
@@ -2469,7 +2471,7 @@
LASSERT(mea_size == lmm_size);
*tmea = libcfs_kvzalloc(mea_size, GFP_NOFS);
- if (*tmea == NULL)
+ if (!*tmea)
return -ENOMEM;
if (!lmm)
@@ -2499,8 +2501,8 @@
}
static int lmv_cancel_unused(struct obd_export *exp, const struct lu_fid *fid,
- ldlm_policy_data_t *policy, ldlm_mode_t mode,
- ldlm_cancel_flags_t flags, void *opaque)
+ ldlm_policy_data_t *policy, enum ldlm_mode mode,
+ enum ldlm_cancel_flags flags, void *opaque)
{
struct obd_device *obd = exp->exp_obd;
struct lmv_obd *lmv = &obd->u.lmv;
@@ -2508,10 +2510,10 @@
int err;
int i;
- LASSERT(fid != NULL);
+ LASSERT(fid);
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
- if (lmv->tgts[i] == NULL || lmv->tgts[i]->ltd_exp == NULL ||
+ if (!lmv->tgts[i] || !lmv->tgts[i]->ltd_exp ||
lmv->tgts[i]->ltd_active == 0)
continue;
@@ -2533,14 +2535,16 @@
return rc;
}
-static ldlm_mode_t lmv_lock_match(struct obd_export *exp, __u64 flags,
- const struct lu_fid *fid, ldlm_type_t type,
- ldlm_policy_data_t *policy, ldlm_mode_t mode,
- struct lustre_handle *lockh)
+static enum ldlm_mode lmv_lock_match(struct obd_export *exp, __u64 flags,
+ const struct lu_fid *fid,
+ enum ldlm_type type,
+ ldlm_policy_data_t *policy,
+ enum ldlm_mode mode,
+ struct lustre_handle *lockh)
{
struct obd_device *obd = exp->exp_obd;
struct lmv_obd *lmv = &obd->u.lmv;
- ldlm_mode_t rc;
+ enum ldlm_mode rc;
int i;
CDEBUG(D_INODE, "Lock match for "DFID"\n", PFID(fid));
@@ -2552,8 +2556,7 @@
* one fid was created in.
*/
for (i = 0; i < lmv->desc.ld_tgt_count; i++) {
- if (lmv->tgts[i] == NULL ||
- lmv->tgts[i]->ltd_exp == NULL ||
+ if (!lmv->tgts[i] || !lmv->tgts[i]->ltd_exp ||
lmv->tgts[i]->ltd_active == 0)
continue;
@@ -2709,7 +2712,7 @@
tgt = lmv->tgts[i];
- if (tgt == NULL || tgt->ltd_exp == NULL || tgt->ltd_active == 0)
+ if (!tgt || !tgt->ltd_exp || tgt->ltd_active == 0)
continue;
if (!tgt->ltd_active) {
CDEBUG(D_HA, "mdt %d is inactive.\n", i);
@@ -2744,7 +2747,7 @@
int err;
tgt = lmv->tgts[i];
- if (tgt == NULL || tgt->ltd_exp == NULL || !tgt->ltd_active) {
+ if (!tgt || !tgt->ltd_exp || !tgt->ltd_active) {
CERROR("lmv idx %d inactive\n", i);
return -EIO;
}
diff --git a/drivers/staging/lustre/lustre/lmv/lproc_lmv.c b/drivers/staging/lustre/lustre/lmv/lproc_lmv.c
index 40cf4d9..b39e364 100644
--- a/drivers/staging/lustre/lustre/lmv/lproc_lmv.c
+++ b/drivers/staging/lustre/lustre/lmv/lproc_lmv.c
@@ -138,7 +138,7 @@
struct obd_device *dev = (struct obd_device *)m->private;
struct lmv_obd *lmv;
- LASSERT(dev != NULL);
+ LASSERT(dev);
lmv = &dev->u.lmv;
seq_printf(m, "%s\n", lmv->desc.ld_uuid.uuid);
return 0;
@@ -171,7 +171,7 @@
{
struct lmv_tgt_desc *tgt = v;
- if (tgt == NULL)
+ if (!tgt)
return 0;
seq_printf(p, "%d: %s %sACTIVE\n",
tgt->ltd_idx, tgt->ltd_uuid.uuid,
diff --git a/drivers/staging/lustre/lustre/lov/lov_cl_internal.h b/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
index 66a2492..b5fc159 100644
--- a/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
+++ b/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
@@ -651,7 +651,7 @@
struct lov_session *ses;
ses = lu_context_key_get(env->le_ses, &lov_session_key);
- LASSERT(ses != NULL);
+ LASSERT(ses);
return ses;
}
@@ -759,7 +759,7 @@
const struct cl_lock_slice *slice;
slice = cl_lock_at(lock, &lovsub_device_type);
- LASSERT(slice != NULL);
+ LASSERT(slice);
return cl2lovsub_lock(slice);
}
@@ -817,7 +817,7 @@
struct lov_thread_info *info;
info = lu_context_key_get(&env->le_ctx, &lov_key);
- LASSERT(info != NULL);
+ LASSERT(info);
return info;
}
diff --git a/drivers/staging/lustre/lustre/lov/lov_dev.c b/drivers/staging/lustre/lustre/lov/lov_dev.c
index 2c33cbc..ee093e0 100644
--- a/drivers/staging/lustre/lustre/lov/lov_dev.c
+++ b/drivers/staging/lustre/lustre/lov/lov_dev.c
@@ -143,7 +143,7 @@
struct lov_thread_info *info;
info = kmem_cache_alloc(lov_thread_kmem, GFP_NOFS | __GFP_ZERO);
- if (info != NULL)
+ if (info)
INIT_LIST_HEAD(&info->lti_closure.clc_list);
else
info = ERR_PTR(-ENOMEM);
@@ -171,7 +171,7 @@
struct lov_session *info;
info = kmem_cache_alloc(lov_session_kmem, GFP_NOFS | __GFP_ZERO);
- if (info == NULL)
+ if (!info)
info = ERR_PTR(-ENOMEM);
return info;
}
@@ -199,15 +199,15 @@
int i;
struct lov_device *ld = lu2lov_dev(d);
- LASSERT(ld->ld_lov != NULL);
- if (ld->ld_target == NULL)
+ LASSERT(ld->ld_lov);
+ if (!ld->ld_target)
return NULL;
lov_foreach_target(ld, i) {
struct lovsub_device *lsd;
lsd = ld->ld_target[i];
- if (lsd != NULL) {
+ if (lsd) {
cl_stack_fini(env, lovsub2cl_dev(lsd));
ld->ld_target[i] = NULL;
}
@@ -222,8 +222,8 @@
int i;
int rc = 0;
- LASSERT(d->ld_site != NULL);
- if (ld->ld_target == NULL)
+ LASSERT(d->ld_site);
+ if (!ld->ld_target)
return rc;
lov_foreach_target(ld, i) {
@@ -232,7 +232,7 @@
struct lov_tgt_desc *desc;
desc = ld->ld_lov->lov_tgts[i];
- if (desc == NULL)
+ if (!desc)
continue;
cl = cl_type_setup(env, d->ld_site, &lovsub_device_type,
@@ -262,7 +262,7 @@
int result;
lr = kmem_cache_alloc(lov_req_kmem, GFP_NOFS | __GFP_ZERO);
- if (lr != NULL) {
+ if (lr) {
cl_req_slice_add(req, &lr->lr_cl, dev, &lov_req_ops);
result = 0;
} else
@@ -282,9 +282,9 @@
struct lov_device_emerg *em;
em = emrg[i];
- if (em != NULL) {
+ if (em) {
LASSERT(em->emrg_page_list.pl_nr == 0);
- if (em->emrg_env != NULL)
+ if (em->emrg_env)
cl_env_put(em->emrg_env, &em->emrg_refcheck);
kfree(em);
}
@@ -300,7 +300,7 @@
cl_device_fini(lu2cl_dev(d));
kfree(ld->ld_target);
- if (ld->ld_emrg != NULL)
+ if (ld->ld_emrg)
lov_emerg_free(ld->ld_emrg, nr);
kfree(ld);
return NULL;
@@ -311,7 +311,7 @@
{
struct lov_device *ld = lu2lov_dev(dev);
- if (ld->ld_target[index] != NULL) {
+ if (ld->ld_target[index]) {
cl_stack_fini(env, lovsub2cl_dev(ld->ld_target[index]));
ld->ld_target[index] = NULL;
}
@@ -324,13 +324,13 @@
int result;
emerg = kcalloc(nr, sizeof(emerg[0]), GFP_NOFS);
- if (emerg == NULL)
+ if (!emerg)
return ERR_PTR(-ENOMEM);
for (result = i = 0; i < nr && result == 0; i++) {
struct lov_device_emerg *em;
em = kzalloc(sizeof(*em), GFP_NOFS);
- if (em != NULL) {
+ if (em) {
emerg[i] = em;
cl_page_list_init(&em->emrg_page_list);
em->emrg_env = cl_env_alloc(&em->emrg_refcheck,
@@ -370,7 +370,7 @@
return PTR_ERR(emerg);
newd = kcalloc(tgt_size, sz, GFP_NOFS);
- if (newd != NULL) {
+ if (newd) {
mutex_lock(&dev->ld_mutex);
if (sub_size > 0) {
memcpy(newd, dev->ld_target, sub_size * sz);
@@ -379,7 +379,7 @@
dev->ld_target = newd;
dev->ld_target_nr = tgt_size;
- if (dev->ld_emrg != NULL)
+ if (dev->ld_emrg)
lov_emerg_free(dev->ld_emrg, sub_size);
dev->ld_emrg = emerg;
mutex_unlock(&dev->ld_mutex);
@@ -404,8 +404,6 @@
obd_getref(obd);
tgt = obd->u.lov.lov_tgts[index];
- LASSERT(tgt != NULL);
- LASSERT(tgt->ltd_obd != NULL);
if (!tgt->ltd_obd->obd_set_up) {
CERROR("Target %s not set up\n", obd_uuid2str(&tgt->ltd_uuid));
@@ -414,7 +412,7 @@
rc = lov_expand_targets(env, ld);
if (rc == 0 && ld->ld_flags & LOV_DEV_INITIALIZED) {
- LASSERT(dev->ld_site != NULL);
+ LASSERT(dev->ld_site);
cl = cl_type_setup(env, dev->ld_site, &lovsub_device_type,
tgt->ltd_obd->obd_lu_dev);
@@ -492,7 +490,7 @@
/* setup the LOV OBD */
obd = class_name2obd(lustre_cfg_string(cfg, 0));
- LASSERT(obd != NULL);
+ LASSERT(obd);
rc = lov_setup(obd, cfg);
if (rc) {
lov_device_free(env, d);
diff --git a/drivers/staging/lustre/lustre/lov/lov_ea.c b/drivers/staging/lustre/lustre/lov/lov_ea.c
index b3c9c85..c27b884 100644
--- a/drivers/staging/lustre/lustre/lov/lov_ea.c
+++ b/drivers/staging/lustre/lustre/lov/lov_ea.c
@@ -101,7 +101,7 @@
for (i = 0; i < stripe_count; i++) {
loi = kmem_cache_alloc(lov_oinfo_slab, GFP_NOFS | __GFP_ZERO);
- if (loi == NULL)
+ if (!loi)
goto err;
lsm->lsm_oinfo[i] = loi;
}
@@ -162,12 +162,13 @@
}
/* Find minimum stripe maxbytes value. For inactive or
- * reconnecting targets use LUSTRE_STRIPE_MAXBYTES. */
+ * reconnecting targets use LUSTRE_STRIPE_MAXBYTES.
+ */
static void lov_tgt_maxbytes(struct lov_tgt_desc *tgt, __u64 *stripe_maxbytes)
{
struct obd_import *imp = tgt->ltd_obd->u.cli.cl_import;
- if (imp == NULL || !tgt->ltd_active) {
+ if (!imp || !tgt->ltd_active) {
*stripe_maxbytes = LUSTRE_STRIPE_MAXBYTES;
return;
}
diff --git a/drivers/staging/lustre/lustre/lov/lov_internal.h b/drivers/staging/lustre/lustre/lov/lov_internal.h
index f8e92fe..725ebf9 100644
--- a/drivers/staging/lustre/lustre/lov/lov_internal.h
+++ b/drivers/staging/lustre/lustre/lov/lov_internal.h
@@ -43,7 +43,8 @@
/* lov_do_div64(a, b) returns a % b, and a = a / b.
* The 32-bit code is LOV-specific due to knowing about stripe limits in
* order to reduce the divisor to a 32-bit number. If the divisor is
- * already a 32-bit value the compiler handles this directly. */
+ * already a 32-bit value, the compiler handles this directly.
+ */
#if BITS_PER_LONG == 64
# define lov_do_div64(n, base) ({ \
uint64_t __base = (base); \
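As the comment above says, lov_do_div64(a, b) yields a % b and replaces a
with a / b. A hedged usage sketch with made-up numbers, not taken from the
source:

    u64 off = 10485883;      /* 10 MiB + 123, hypothetical file offset */
    u64 swidth = 4194304;    /* 4 MiB, hypothetical stripe width */
    u64 rem;

    rem = lov_do_div64(off, swidth);
    /* now rem == 2097275 (2 MiB + 123, the remainder)
     * and off == 2 (the quotient), both from one operation */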
@@ -92,7 +93,8 @@
atomic_t set_refcount;
struct obd_export *set_exp;
/* XXX: There is @set_exp already, however obd_statfs gets obd_device
- only. */
+ * only.
+ */
struct obd_device *set_obd;
int set_count;
atomic_t set_completes;
@@ -114,7 +116,6 @@
static inline void lov_get_reqset(struct lov_request_set *set)
{
- LASSERT(set != NULL);
LASSERT(atomic_read(&set->set_refcount) > 0);
atomic_inc(&set->set_refcount);
}
diff --git a/drivers/staging/lustre/lustre/lov/lov_io.c b/drivers/staging/lustre/lustre/lov/lov_io.c
index 93fe69e..50954ce 100644
--- a/drivers/staging/lustre/lustre/lov/lov_io.c
+++ b/drivers/staging/lustre/lustre/lov/lov_io.c
@@ -60,7 +60,7 @@
static void lov_io_sub_fini(const struct lu_env *env, struct lov_io *lio,
struct lov_io_sub *sub)
{
- if (sub->sub_io != NULL) {
+ if (sub->sub_io) {
if (sub->sub_io_initialized) {
lov_sub_enter(sub);
cl_io_fini(sub->sub_env, sub->sub_io);
@@ -74,7 +74,7 @@
kfree(sub->sub_io);
sub->sub_io = NULL;
}
- if (sub->sub_env != NULL && !IS_ERR(sub->sub_env)) {
+ if (!IS_ERR_OR_NULL(sub->sub_env)) {
if (!sub->sub_borrowed)
cl_env_put(sub->sub_env, &sub->sub_refcheck);
sub->sub_env = NULL;
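The sub_env change above folds a NULL test and an IS_ERR() test into the
IS_ERR_OR_NULL() helper from <linux/err.h>. A sketch of the equivalence:

    /* before: two explicit tests */
    if (sub->sub_env != NULL && !IS_ERR(sub->sub_env))
        cl_env_put(sub->sub_env, &sub->sub_refcheck);

    /* after: IS_ERR_OR_NULL(p) is true for NULL and for ERR_PTR()
     * values, so its negation admits exactly the same pointers */
    if (!IS_ERR_OR_NULL(sub->sub_env))
        cl_env_put(sub->sub_env, &sub->sub_refcheck);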
@@ -143,11 +143,11 @@
int stripe = sub->sub_stripe;
int result;
- LASSERT(sub->sub_io == NULL);
- LASSERT(sub->sub_env == NULL);
+ LASSERT(!sub->sub_io);
+ LASSERT(!sub->sub_env);
LASSERT(sub->sub_stripe < lio->lis_stripe_count);
- if (unlikely(lov_r0(lov)->lo_sub[stripe] == NULL))
+ if (unlikely(!lov_r0(lov)->lo_sub[stripe]))
return -EIO;
result = 0;
@@ -252,7 +252,6 @@
subobj = lu2lovsub(
lu_object_locate(page->cp_child->cp_obj->co_lu.lo_header,
&lovsub_device_type));
- LASSERT(subobj != NULL);
return subobj->lso_index;
}
@@ -263,9 +262,9 @@
struct cl_page *page = slice->cpl_page;
int stripe;
- LASSERT(lio->lis_cl.cis_io != NULL);
+ LASSERT(lio->lis_cl.cis_io);
LASSERT(cl2lov(slice->cpl_obj) == lio->lis_object);
- LASSERT(lsm != NULL);
+ LASSERT(lsm);
LASSERT(lio->lis_nr_subios > 0);
stripe = lov_page_stripe(page);
@@ -278,7 +277,7 @@
struct lov_stripe_md *lsm = lio->lis_object->lo_lsm;
int result;
- LASSERT(lio->lis_object != NULL);
+ LASSERT(lio->lis_object);
/*
* Need to be optimized, we can't afford to allocate a piece of memory
@@ -288,7 +287,7 @@
libcfs_kvzalloc(lsm->lsm_stripe_count *
sizeof(lio->lis_subs[0]),
GFP_NOFS);
- if (lio->lis_subs != NULL) {
+ if (lio->lis_subs) {
lio->lis_nr_subios = lio->lis_stripe_count;
lio->lis_single_subio_index = -1;
lio->lis_active_subios = 0;
@@ -304,7 +303,6 @@
io->ci_result = 0;
lio->lis_object = obj;
- LASSERT(obj->lo_lsm != NULL);
lio->lis_stripe_count = obj->lo_lsm->lsm_stripe_count;
switch (io->ci_type) {
@@ -358,7 +356,7 @@
struct lov_object *lov = cl2lov(ios->cis_obj);
int i;
- if (lio->lis_subs != NULL) {
+ if (lio->lis_subs) {
for (i = 0; i < lio->lis_nr_subios; i++)
lov_io_sub_fini(env, lio, &lio->lis_subs[i]);
kvfree(lio->lis_subs);
@@ -395,7 +393,7 @@
endpos, &start, &end))
continue;
- if (unlikely(lov_r0(lio->lis_object)->lo_sub[stripe] == NULL)) {
+ if (unlikely(!lov_r0(lio->lis_object)->lo_sub[stripe])) {
if (ios->cis_io->ci_type == CIT_READ ||
ios->cis_io->ci_type == CIT_WRITE ||
ios->cis_io->ci_type == CIT_FAULT)
@@ -601,13 +599,13 @@
return rc;
}
- LASSERT(lio->lis_subs != NULL);
+ LASSERT(lio->lis_subs);
if (alloc) {
stripes_qin =
libcfs_kvzalloc(sizeof(*stripes_qin) *
lio->lis_nr_subios,
GFP_NOFS);
- if (stripes_qin == NULL)
+ if (!stripes_qin)
return -ENOMEM;
for (stripe = 0; stripe < lio->lis_nr_subios; stripe++)
@@ -955,7 +953,7 @@
struct lov_io *lio = lov_env_io(env);
int result;
- LASSERT(lov->lo_lsm != NULL);
+ LASSERT(lov->lo_lsm);
lio->lis_object = lov;
switch (io->ci_type) {
diff --git a/drivers/staging/lustre/lustre/lov/lov_lock.c b/drivers/staging/lustre/lustre/lov/lov_lock.c
index d866791..e0a6438 100644
--- a/drivers/staging/lustre/lustre/lov/lov_lock.c
+++ b/drivers/staging/lustre/lustre/lov/lov_lock.c
@@ -115,7 +115,7 @@
/*
* check that sub-lock doesn't have lock link to this top-lock.
*/
- LASSERT(lov_lock_link_find(env, lck, lsl) == NULL);
+ LASSERT(!lov_lock_link_find(env, lck, lsl));
LASSERT(idx < lck->lls_nr);
lck->lls_sub[idx].sub_lock = lsl;
@@ -145,7 +145,7 @@
LASSERT(idx < lck->lls_nr);
link = kmem_cache_alloc(lov_lock_link_kmem, GFP_NOFS | __GFP_ZERO);
- if (link != NULL) {
+ if (link) {
struct lov_sublock_env *subenv;
struct lov_lock_sub *lls;
struct cl_lock_descr *descr;
@@ -160,7 +160,8 @@
* to remember the subio. This is because lock is able
* to be cached, but this is not true for IO. This
* further means a sublock might be referenced in
- * different io context. -jay */
+ * different io context. -jay
+ */
sublock = cl_lock_hold(subenv->lse_env, subenv->lse_io,
descr, "lov-parent", parent);
@@ -220,7 +221,7 @@
LASSERT(!(lls->sub_flags & LSF_HELD));
link = lov_lock_link_find(env, lck, sublock);
- LASSERT(link != NULL);
+ LASSERT(link);
lov_lock_unlink(env, link, sublock);
lov_sublock_unlock(env, sublock, closure, NULL);
lck->lls_cancel_race = 1;
@@ -309,14 +310,14 @@
* XXX for wide striping smarter algorithm is desirable,
* breaking out of the loop, early.
*/
- if (likely(r0->lo_sub[i] != NULL) &&
+ if (likely(r0->lo_sub[i]) &&
lov_stripe_intersects(loo->lo_lsm, i,
file_start, file_end, &start, &end))
nr++;
}
LASSERT(nr > 0);
lck->lls_sub = libcfs_kvzalloc(nr * sizeof(lck->lls_sub[0]), GFP_NOFS);
- if (lck->lls_sub == NULL)
+ if (!lck->lls_sub)
return -ENOMEM;
lck->lls_nr = nr;
@@ -328,14 +329,14 @@
* top-lock.
*/
for (i = 0, nr = 0; i < r0->lo_nr; ++i) {
- if (likely(r0->lo_sub[i] != NULL) &&
+ if (likely(r0->lo_sub[i]) &&
lov_stripe_intersects(loo->lo_lsm, i,
file_start, file_end, &start, &end)) {
struct cl_lock_descr *descr;
descr = &lck->lls_sub[nr].sub_descr;
- LASSERT(descr->cld_obj == NULL);
+ LASSERT(!descr->cld_obj);
descr->cld_obj = lovsub2cl(r0->lo_sub[i]);
descr->cld_start = cl_index(descr->cld_obj, start);
descr->cld_end = cl_index(descr->cld_obj, end);
@@ -369,7 +370,6 @@
struct cl_lock *sublock;
int dying;
- LASSERT(lck->lls_sub[i].sub_lock != NULL);
sublock = lck->lls_sub[i].sub_lock->lss_cl.cls_lock;
LASSERT(cl_lock_is_mutexed(sublock));
@@ -413,7 +413,6 @@
if (!(lck->lls_sub[i].sub_flags & LSF_HELD)) {
struct cl_lock *sublock;
- LASSERT(lck->lls_sub[i].sub_lock != NULL);
sublock = lck->lls_sub[i].sub_lock->lss_cl.cls_lock;
LASSERT(cl_lock_is_mutexed(sublock));
LASSERT(sublock->cll_state != CLS_FREEING);
@@ -435,13 +434,13 @@
lck = cl2lov_lock(slice);
LASSERT(lck->lls_nr_filled == 0);
- if (lck->lls_sub != NULL) {
+ if (lck->lls_sub) {
for (i = 0; i < lck->lls_nr; ++i)
/*
* No sub-locks exists at this point, as sub-lock has
* a reference on its parent.
*/
- LASSERT(lck->lls_sub[i].sub_lock == NULL);
+ LASSERT(!lck->lls_sub[i].sub_lock);
kvfree(lck->lls_sub);
}
kmem_cache_free(lov_lock_kmem, lck);
@@ -479,7 +478,8 @@
result = cl_enqueue_try(env, sublock, io, enqflags);
if ((sublock->cll_state == CLS_ENQUEUED) && !(enqflags & CEF_AGL)) {
/* if it is enqueued, try to `wait' on it---maybe it's already
- * granted */
+ * granted
+ */
result = cl_wait_try(env, sublock);
if (result == CLO_REENQUEUED)
result = CLO_WAIT;
@@ -515,12 +515,13 @@
if (!IS_ERR(sublock)) {
cl_lock_get_trust(sublock);
if (parent->cll_state == CLS_QUEUING &&
- lck->lls_sub[idx].sub_lock == NULL) {
+ !lck->lls_sub[idx].sub_lock) {
lov_sublock_adopt(env, lck, sublock, idx, link);
} else {
kmem_cache_free(lov_lock_link_kmem, link);
/* other thread allocated sub-lock, or enqueue is no
- * longer going on */
+ * longer going on
+ */
cl_lock_mutex_put(env, parent);
cl_lock_unhold(env, sublock, "lov-parent", parent);
cl_lock_mutex_get(env, parent);
@@ -574,10 +575,11 @@
* Sub-lock might have been canceled, while top-lock was
* cached.
*/
- if (sub == NULL) {
+ if (!sub) {
result = lov_sublock_fill(env, lock, io, lck, i);
/* lov_sublock_fill() released @lock mutex,
- * restart. */
+ * restart.
+ */
break;
}
sublock = sub->lss_cl.cls_lock;
@@ -605,7 +607,8 @@
/* take recursive mutex of sublock */
cl_lock_mutex_get(env, sublock);
/* need to release all locks in closure
- * otherwise it may deadlock. LU-2683.*/
+ * otherwise it may deadlock. LU-2683.
+ */
lov_sublock_unlock(env, sub, closure,
subenv);
/* sublock and parent are held. */
@@ -620,7 +623,7 @@
break;
}
} else {
- LASSERT(sublock->cll_conflict == NULL);
+ LASSERT(!sublock->cll_conflict);
lov_sublock_unlock(env, sub, closure, subenv);
}
}
@@ -649,11 +652,12 @@
/* top-lock state cannot change concurrently, because single
* thread (one that released the last hold) carries unlocking
- * to the completion. */
+ * to the completion.
+ */
LASSERT(slice->cls_lock->cll_state == CLS_INTRANSIT);
lls = &lck->lls_sub[i];
sub = lls->sub_lock;
- if (sub == NULL)
+ if (!sub)
continue;
sublock = sub->lss_cl.cls_lock;
@@ -695,10 +699,11 @@
/* top-lock state cannot change concurrently, because single
* thread (one that released the last hold) carries unlocking
- * to the completion. */
+ * to the completion.
+ */
lls = &lck->lls_sub[i];
sub = lls->sub_lock;
- if (sub == NULL)
+ if (!sub)
continue;
sublock = sub->lss_cl.cls_lock;
@@ -757,7 +762,6 @@
lls = &lck->lls_sub[i];
sub = lls->sub_lock;
- LASSERT(sub != NULL);
sublock = sub->lss_cl.cls_lock;
rc = lov_sublock_lock(env, lck, lls, closure, &subenv);
if (rc == 0) {
@@ -776,8 +780,9 @@
if (result != 0)
break;
}
- /* Each sublock only can be reenqueued once, so will not loop for
- * ever. */
+ /* Each sublock can only be reenqueued once, so this will not loop
+ * forever.
+ */
if (result == 0 && reenqueued != 0)
goto again;
cl_lock_closure_fini(closure);
@@ -805,7 +810,7 @@
lls = &lck->lls_sub[i];
sub = lls->sub_lock;
- if (sub == NULL) {
+ if (!sub) {
/*
* Sub-lock might have been canceled, while top-lock was
* cached.
@@ -826,7 +831,8 @@
i, 1, rc);
} else if (sublock->cll_state == CLS_NEW) {
/* Sub-lock might have been canceled, while
- * top-lock was cached. */
+ * top-lock was cached.
+ */
result = -ESTALE;
lov_sublock_release(env, lck, i, 1, result);
}
@@ -852,45 +858,6 @@
return result;
}
-#if 0
-static int lock_lock_multi_match()
-{
- struct cl_lock *lock = slice->cls_lock;
- struct cl_lock_descr *subneed = &lov_env_info(env)->lti_ldescr;
- struct lov_object *loo = cl2lov(lov->lls_cl.cls_obj);
- struct lov_layout_raid0 *r0 = lov_r0(loo);
- struct lov_lock_sub *sub;
- struct cl_object *subobj;
- u64 fstart;
- u64 fend;
- u64 start;
- u64 end;
- int i;
-
- fstart = cl_offset(need->cld_obj, need->cld_start);
- fend = cl_offset(need->cld_obj, need->cld_end + 1) - 1;
- subneed->cld_mode = need->cld_mode;
- cl_lock_mutex_get(env, lock);
- for (i = 0; i < lov->lls_nr; ++i) {
- sub = &lov->lls_sub[i];
- if (sub->sub_lock == NULL)
- continue;
- subobj = sub->sub_descr.cld_obj;
- if (!lov_stripe_intersects(loo->lo_lsm, sub->sub_stripe,
- fstart, fend, &start, &end))
- continue;
- subneed->cld_start = cl_index(subobj, start);
- subneed->cld_end = cl_index(subobj, end);
- subneed->cld_obj = subobj;
- if (!cl_lock_ext_match(&sub->sub_got, subneed)) {
- result = 0;
- break;
- }
- }
- cl_lock_mutex_put(env, lock);
-}
-#endif
-
/**
* Check if the extent region \a descr is covered by \a child against the
* specific \a stripe.
@@ -922,10 +889,10 @@
idx = lov_stripe_number(lsm, start);
if (idx == stripe ||
- unlikely(lov_r0(lov)->lo_sub[idx] == NULL)) {
+ unlikely(!lov_r0(lov)->lo_sub[idx])) {
idx = lov_stripe_number(lsm, end);
if (idx == stripe ||
- unlikely(lov_r0(lov)->lo_sub[idx] == NULL))
+ unlikely(!lov_r0(lov)->lo_sub[idx]))
result = 1;
}
}
@@ -970,7 +937,8 @@
LASSERT(lov->lls_nr > 0);
/* for top lock, it's necessary to match enq flags otherwise it will
- * run into problem if a sublock is missing and reenqueue. */
+ * run into problems if a sublock is missing and is reenqueued.
+ */
if (need->cld_enq_flags != lov->lls_orig.cld_enq_flags)
return 0;
@@ -1074,7 +1042,7 @@
struct lov_lock_sub *lls = &lck->lls_sub[i];
struct lovsub_lock *lsl = lls->sub_lock;
- if (lsl == NULL) /* already removed */
+ if (!lsl) /* already removed */
continue;
rc = lov_sublock_lock(env, lck, lls, closure, NULL);
@@ -1090,9 +1058,9 @@
lov_sublock_release(env, lck, i, 1, 0);
link = lov_lock_link_find(env, lck, lsl);
- LASSERT(link != NULL);
+ LASSERT(link);
lov_lock_unlink(env, link, lsl);
- LASSERT(lck->lls_sub[i].sub_lock == NULL);
+ LASSERT(!lck->lls_sub[i].sub_lock);
lov_sublock_unlock(env, lsl, closure, NULL);
}
@@ -1112,7 +1080,7 @@
sub = &lck->lls_sub[i];
(*p)(env, cookie, " %d %x: ", i, sub->sub_flags);
- if (sub->sub_lock != NULL)
+ if (sub->sub_lock)
cl_lock_print(env, cookie, p,
sub->sub_lock->lss_cl.cls_lock);
else
@@ -1140,7 +1108,7 @@
int result;
lck = kmem_cache_alloc(lov_lock_kmem, GFP_NOFS | __GFP_ZERO);
- if (lck != NULL) {
+ if (lck) {
cl_lock_slice_add(lock, &lck->lls_cl, obj, &lov_lock_ops);
result = lov_lock_sub_init(env, lck, io);
} else
@@ -1176,7 +1144,7 @@
int result = -ENOMEM;
lck = kmem_cache_alloc(lov_lock_kmem, GFP_NOFS | __GFP_ZERO);
- if (lck != NULL) {
+ if (lck) {
cl_lock_slice_add(lock, &lck->lls_cl, obj, &lov_empty_lock_ops);
lck->lls_orig = lock->cll_descr;
result = 0;
diff --git a/drivers/staging/lustre/lustre/lov/lov_merge.c b/drivers/staging/lustre/lustre/lov/lov_merge.c
index 97115be..029cd4d 100644
--- a/drivers/staging/lustre/lustre/lov/lov_merge.c
+++ b/drivers/staging/lustre/lustre/lov/lov_merge.c
@@ -129,7 +129,8 @@
"stripe %d KMS %sing %llu->%llu\n",
stripe, kms > loi->loi_kms ? "increase":"shrink",
loi->loi_kms, kms);
- loi_kms_set(loi, loi->loi_lvb.lvb_size = kms);
+ loi->loi_lvb.lvb_size = kms;
+ loi_kms_set(loi, loi->loi_lvb.lvb_size);
}
return 0;
}
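The lov_merge.c hunk above splits an assignment out of a function-call
argument list. Whether loi_kms_set() is a macro or an inline, burying
"loi->loi_lvb.lvb_size = kms" inside the call made the side effect easy to
miss. The general pattern, with hypothetical names:

    /* discouraged: side effect hidden in an argument */
    set_cached_size(obj, obj->size = new_size);

    /* preferred: the assignment stands on its own line */
    obj->size = new_size;
    set_cached_size(obj, obj->size);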
diff --git a/drivers/staging/lustre/lustre/lov/lov_obd.c b/drivers/staging/lustre/lustre/lov/lov_obd.c
index 65077b7..7ba670a 100644
--- a/drivers/staging/lustre/lustre/lov/lov_obd.c
+++ b/drivers/staging/lustre/lustre/lov/lov_obd.c
@@ -61,7 +61,8 @@
#include "lov_internal.h"
/* Keep a refcount of lov->tgt usage to prevent racing with addition/deletion.
- Any function that expects lov_tgts to remain stationary must take a ref. */
+ * Any function that expects lov_tgts to remain stationary must take a ref.
+ */
static void lov_getref(struct obd_device *obd)
{
struct lov_obd *lov = &obd->u.lov;
@@ -96,7 +97,8 @@
list_add(&tgt->ltd_kill, &kill);
/* XXX - right now there is a dependency on ld_tgt_count
* being the maximum tgt index for computing the
- * mds_max_easize. So we can't shrink it. */
+ * mds_max_easize. So we can't shrink it.
+ */
lov_ost_pool_remove(&lov->lov_packed, i);
lov->lov_tgts[i] = NULL;
lov->lov_death_row--;
@@ -158,7 +160,8 @@
if (activate) {
tgt_obd->obd_no_recov = 0;
/* FIXME this is probably supposed to be
- ptlrpc_set_import_active. Horrible naming. */
+ * ptlrpc_set_import_active. Horrible naming.
+ */
ptlrpc_activate_import(imp);
}
@@ -315,7 +318,8 @@
}
/* Let's hold another reference so lov_del_obd doesn't spin through
- putref every time */
+ * putref every time
+ */
obd_getref(obd);
for (i = 0; i < lov->desc.ld_tgt_count; i++) {
@@ -358,7 +362,7 @@
* LU-642, initially inactive OSC could miss the obd_connect,
* we make up for it here.
*/
- if (ev == OBD_NOTIFY_ACTIVATE && tgt->ltd_exp == NULL &&
+ if (ev == OBD_NOTIFY_ACTIVATE && !tgt->ltd_exp &&
obd_uuid_equals(uuid, &tgt->ltd_uuid)) {
struct obd_uuid lov_osc_uuid = {"LOV_OSC_UUID"};
@@ -399,10 +403,9 @@
CDEBUG(D_INFO, "OSC %s already %sactive!\n",
uuid->uuid, active ? "" : "in");
goto out;
- } else {
- CDEBUG(D_CONFIG, "Marking OSC %s %sactive\n",
- obd_uuid2str(uuid), active ? "" : "in");
}
+ CDEBUG(D_CONFIG, "Marking OSC %s %sactive\n",
+ obd_uuid2str(uuid), active ? "" : "in");
lov->lov_tgts[index]->ltd_active = active;
if (active) {
@@ -481,7 +484,8 @@
continue;
/* don't send sync event if target not
- * connected/activated */
+ * connected/activated
+ */
if (is_sync && !lov->lov_tgts[i]->ltd_active)
continue;
@@ -521,12 +525,12 @@
tgt_obd = class_find_client_obd(uuidp, LUSTRE_OSC_NAME,
&obd->obd_uuid);
- if (tgt_obd == NULL)
+ if (!tgt_obd)
return -EINVAL;
mutex_lock(&lov->lov_lock);
- if ((index < lov->lov_tgt_size) && (lov->lov_tgts[index] != NULL)) {
+ if ((index < lov->lov_tgt_size) && lov->lov_tgts[index]) {
tgt = lov->lov_tgts[index];
CERROR("UUID %s already assigned at LOV target index %d\n",
obd_uuid2str(&tgt->ltd_uuid), index);
@@ -543,7 +547,7 @@
while (newsize < index + 1)
newsize <<= 1;
newtgts = kcalloc(newsize, sizeof(*newtgts), GFP_NOFS);
- if (newtgts == NULL) {
+ if (!newtgts) {
mutex_unlock(&lov->lov_lock);
return -ENOMEM;
}
@@ -596,8 +600,9 @@
if (lov->lov_connects == 0) {
/* lov_connect hasn't been called yet. We'll do the
- lov_connect_obd on this target when that fn first runs,
- because we don't know the connect flags yet. */
+ * lov_connect_obd on this target when that fn first runs,
+ * because we don't know the connect flags yet.
+ */
return 0;
}
@@ -613,7 +618,7 @@
goto out;
}
- if (lov->lov_cache != NULL) {
+ if (lov->lov_cache) {
rc = obd_set_info_async(NULL, tgt->ltd_exp,
sizeof(KEY_CACHE_SET), KEY_CACHE_SET,
sizeof(struct cl_client_cache), lov->lov_cache,
@@ -702,8 +707,9 @@
kfree(tgt);
/* Manual cleanup - no cleanup logs to clean up the osc's. We must
- do it ourselves. And we can't do it from lov_cleanup,
- because we just lost our only reference to it. */
+ * do it ourselves. And we can't do it from lov_cleanup,
+ * because we just lost our only reference to it.
+ */
if (osc_obd)
class_manual_cleanup(osc_obd);
}
@@ -773,9 +779,9 @@
if (desc->ld_magic != LOV_DESC_MAGIC) {
if (desc->ld_magic == __swab32(LOV_DESC_MAGIC)) {
- CDEBUG(D_OTHER, "%s: Swabbing lov desc %p\n",
- obd->obd_name, desc);
- lustre_swab_lov_desc(desc);
+ CDEBUG(D_OTHER, "%s: Swabbing lov desc %p\n",
+ obd->obd_name, desc);
+ lustre_swab_lov_desc(desc);
} else {
CERROR("%s: Bad lov desc magic: %#x\n",
obd->obd_name, desc->ld_magic);
@@ -859,7 +865,8 @@
/* free pool structs */
CDEBUG(D_INFO, "delete pool %p\n", pool);
/* In the function below, .hs_keycmp resolves to
- * pool_hashkey_keycmp() */
+ * pool_hashkey_keycmp()
+ */
/* coverity[overrun-buffer-val] */
lov_pool_del(obd, pool->pool_name);
}
@@ -879,8 +886,9 @@
if (lov->lov_tgts[i]->ltd_active ||
atomic_read(&lov->lov_refcount))
/* We should never get here - these
- should have been removed in the
- disconnect. */
+ * should have been removed in the
+ * disconnect.
+ */
CERROR("lov tgt %d not cleaned! deathrow=%d, lovrc=%d\n",
i, lov->lov_death_row,
atomic_read(&lov->lov_refcount));
@@ -981,7 +989,7 @@
ost_idx = src_oa->o_nlink;
lsm = *ea;
- if (lsm == NULL) {
+ if (!lsm) {
rc = -EINVAL;
goto out;
}
@@ -1025,8 +1033,8 @@
struct lov_obd *lov;
int rc = 0;
- LASSERT(ea != NULL);
- if (exp == NULL)
+ LASSERT(ea);
+ if (!exp)
return -EINVAL;
if ((src_oa->o_valid & OBD_MD_FLFLAGS) &&
@@ -1043,7 +1051,7 @@
/* Recreate a specific object id at the given OST index */
if ((src_oa->o_valid & OBD_MD_FLFLAGS) &&
(src_oa->o_flags & OBD_FL_RECREATE_OBJS)) {
- rc = lov_recreate(exp, src_oa, ea, oti);
+ rc = lov_recreate(exp, src_oa, ea, oti);
}
obd_putref(exp->exp_obd);
@@ -1052,7 +1060,7 @@
#define ASSERT_LSM_MAGIC(lsmp) \
do { \
- LASSERT((lsmp) != NULL); \
+ LASSERT((lsmp)); \
LASSERTF(((lsmp)->lsm_magic == LOV_MAGIC_V1 || \
(lsmp)->lsm_magic == LOV_MAGIC_V3), \
"%p->lsm_magic=%x\n", (lsmp), (lsmp)->lsm_magic); \
@@ -1065,7 +1073,6 @@
struct lov_request_set *set;
struct obd_info oinfo;
struct lov_request *req;
- struct list_head *pos;
struct lov_obd *lov;
int rc = 0, err = 0;
@@ -1085,9 +1092,7 @@
if (rc)
goto out;
- list_for_each(pos, &set->set_list) {
- req = list_entry(pos, struct lov_request, rq_link);
-
+ list_for_each_entry(req, &set->set_list, rq_link) {
if (oa->o_valid & OBD_MD_FLCOOKIE)
oti->oti_logcookies = set->set_cookies + req->rq_stripe;
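This file converts several loops from list_for_each() plus an explicit
list_entry() to list_for_each_entry(), which performs the container_of()
step itself and removes the scratch "struct list_head *pos" cursor. The
shape of the conversion, with a hypothetical handle() body:

    /* before: manual cursor, entry recovered in the body */
    struct list_head *pos;
    struct lov_request *req;

    list_for_each(pos, &set->set_list) {
        req = list_entry(pos, struct lov_request, rq_link);
        handle(req);
    }

    /* after: the iterator yields the entries directly */
    list_for_each_entry(req, &set->set_list, rq_link)
        handle(req);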
@@ -1105,10 +1110,9 @@
}
}
- if (rc == 0) {
- LASSERT(lsm_op_find(lsm->lsm_magic) != NULL);
+ if (rc == 0)
rc = lsm_op_find(lsm->lsm_magic)->lsm_destroy(lsm, oa, md_exp);
- }
+
err = lov_fini_destroy_set(set);
out:
obd_putref(exp->exp_obd);
@@ -1133,7 +1137,6 @@
{
struct lov_request_set *lovset;
struct lov_obd *lov;
- struct list_head *pos;
struct lov_request *req;
int rc = 0, err;
@@ -1153,9 +1156,7 @@
POSTID(&oinfo->oi_md->lsm_oi), oinfo->oi_md->lsm_stripe_count,
oinfo->oi_md->lsm_stripe_size);
- list_for_each(pos, &lovset->set_list) {
- req = list_entry(pos, struct lov_request, rq_link);
-
+ list_for_each_entry(req, &lovset->set_list, rq_link) {
CDEBUG(D_INFO, "objid " DOSTID "[%d] has subobj " DOSTID " at idx%u\n",
POSTID(&oinfo->oi_oa->o_oi), req->rq_stripe,
POSTID(&req->rq_oi.oi_oa->o_oi), req->rq_idx);
@@ -1174,7 +1175,7 @@
if (!list_empty(&rqset->set_requests)) {
LASSERT(rc == 0);
- LASSERT(rqset->set_interpret == NULL);
+ LASSERT(!rqset->set_interpret);
rqset->set_interpret = lov_getattr_interpret;
rqset->set_arg = (void *)lovset;
return rc;
@@ -1199,14 +1200,14 @@
}
/* If @oti is given, the request goes from MDS and responses from OSTs are not
- needed. Otherwise, a client is waiting for responses. */
+ * needed. Otherwise, a client is waiting for responses.
+ */
static int lov_setattr_async(struct obd_export *exp, struct obd_info *oinfo,
struct obd_trans_info *oti,
struct ptlrpc_request_set *rqset)
{
struct lov_request_set *set;
struct lov_request *req;
- struct list_head *pos;
struct lov_obd *lov;
int rc = 0;
@@ -1230,9 +1231,7 @@
oinfo->oi_md->lsm_stripe_count,
oinfo->oi_md->lsm_stripe_size);
- list_for_each(pos, &set->set_list) {
- req = list_entry(pos, struct lov_request, rq_link);
-
+ list_for_each_entry(req, &set->set_list, rq_link) {
if (oinfo->oi_oa->o_valid & OBD_MD_FLCOOKIE)
oti->oti_logcookies = set->set_cookies + req->rq_stripe;
@@ -1262,7 +1261,7 @@
return rc ? rc : err;
}
- LASSERT(rqset->set_interpret == NULL);
+ LASSERT(!rqset->set_interpret);
rqset->set_interpret = lov_setattr_interpret;
rqset->set_arg = (void *)set;
@@ -1272,7 +1271,8 @@
/* find any ldlm lock of the inode in lov
* return 0 not find
* 1 find one
- * < 0 error */
+ * < 0 error
+ */
static int lov_find_cbdata(struct obd_export *exp,
struct lov_stripe_md *lsm, ldlm_iterator_t it,
void *data)
@@ -1326,20 +1326,17 @@
struct obd_device *obd = class_exp2obd(exp);
struct lov_request_set *set;
struct lov_request *req;
- struct list_head *pos;
struct lov_obd *lov;
int rc = 0;
- LASSERT(oinfo != NULL);
- LASSERT(oinfo->oi_osfs != NULL);
+ LASSERT(oinfo->oi_osfs);
lov = &obd->u.lov;
rc = lov_prep_statfs_set(obd, oinfo, &set);
if (rc)
return rc;
- list_for_each(pos, &set->set_list) {
- req = list_entry(pos, struct lov_request, rq_link);
+ list_for_each_entry(req, &set->set_list, rq_link) {
rc = obd_statfs_async(lov->lov_tgts[req->rq_idx]->ltd_exp,
&req->rq_oi, max_age, rqset);
if (rc)
@@ -1355,7 +1352,7 @@
return rc ? rc : err;
}
- LASSERT(rqset->set_interpret == NULL);
+ LASSERT(!rqset->set_interpret);
rqset->set_interpret = lov_statfs_interpret;
rqset->set_arg = (void *)set;
return 0;
@@ -1369,9 +1366,10 @@
int rc = 0;
/* for obdclass we forbid using obd_statfs_rqset, but prefer using async
- * statfs requests */
+ * statfs requests
+ */
set = ptlrpc_prep_set();
- if (set == NULL)
+ if (!set)
return -ENOMEM;
oinfo.oi_osfs = osfs;
@@ -1503,7 +1501,7 @@
&qctl->obd_uuid))
continue;
- if (tgt->ltd_exp == NULL)
+ if (!tgt->ltd_exp)
return -EINVAL;
break;
@@ -1545,14 +1543,15 @@
continue;
/* ll_umount_begin() sets force flag but for lov, not
- * osc. Let's pass it through */
+ * osc. Let's pass it through
+ */
osc_obd = class_exp2obd(lov->lov_tgts[i]->ltd_exp);
osc_obd->obd_force = obddev->obd_force;
err = obd_iocontrol(cmd, lov->lov_tgts[i]->ltd_exp,
len, karg, uarg);
- if (err == -ENODATA && cmd == OBD_IOC_POLL_QUOTACHECK) {
+ if (err == -ENODATA && cmd == OBD_IOC_POLL_QUOTACHECK)
return err;
- } else if (err) {
+ if (err) {
+ if (err) {
if (lov->lov_tgts[i]->ltd_active) {
CDEBUG(err == -ENOTTY ?
D_IOCTL : D_WARNING,
@@ -1622,7 +1621,8 @@
return -EINVAL;
/* If we have finished mapping on previous device, shift logical
- * offset to start of next device */
+ * offset to start of next device
+ */
if ((lov_stripe_intersects(lsm, stripe_no, fm_start, fm_end,
&lun_start, &lun_end)) != 0 &&
local_end < lun_end) {
@@ -1630,7 +1630,8 @@
*start_stripe = stripe_no;
} else {
/* This is a special value to indicate that caller should
- * calculate offset in next stripe. */
+ * calculate offset in next stripe.
+ */
fm_end_offset = 0;
*start_stripe = (stripe_no + 1) % lsm->lsm_stripe_count;
}
@@ -1741,7 +1742,7 @@
buffer_size = fiemap_count_to_size(fm_key->fiemap.fm_extent_count);
fm_local = libcfs_kvzalloc(buffer_size, GFP_NOFS);
- if (fm_local == NULL) {
+ if (!fm_local) {
rc = -ENOMEM;
goto out;
}
@@ -1798,7 +1799,8 @@
/* If this is a continuation FIEMAP call and we are on
* starting stripe then lun_start needs to be set to
- * fm_end_offset */
+ * fm_end_offset
+ */
if (fm_end_offset != 0 && cur_stripe == start_stripe)
lun_start = fm_end_offset;
@@ -1820,7 +1822,8 @@
len_mapped_single_call = 0;
/* If the output buffer is very large and the objects have many
- * extents we may need to loop on a single OST repeatedly */
+ * extents we may need to loop on a single OST repeatedly
+ */
ost_eof = 0;
ost_done = 0;
do {
@@ -1876,7 +1879,8 @@
if (ext_count == 0) {
ost_done = 1;
/* If last stripe has hole at the end,
- * then we need to return */
+ * then we need to return
+ */
if (cur_stripe_wrap == last_stripe) {
fiemap->fm_mapped_extents = 0;
goto finish;
@@ -1898,7 +1902,8 @@
ost_done = 1;
/* Clear the EXTENT_LAST flag which can be present on
- * last extent */
+ * last extent
+ */
if (lcl_fm_ext[ext_count-1].fe_flags & FIEMAP_EXTENT_LAST)
lcl_fm_ext[ext_count - 1].fe_flags &=
~FIEMAP_EXTENT_LAST;
@@ -1927,7 +1932,8 @@
finish:
/* Indicate that we are returning device offsets unless file just has
- * single stripe */
+ * single stripe
+ */
if (lsm->lsm_stripe_count > 1)
fiemap->fm_flags |= FIEMAP_FLAG_DEVICE_ORDER;
@@ -1935,7 +1941,8 @@
goto skip_last_device_calc;
/* Check if we have reached the last stripe and whether mapping for that
- * stripe is done. */
+ * stripe is done.
+ */
if (cur_stripe_wrap == last_stripe) {
if (ost_done || ost_eof)
fiemap->fm_extents[current_extent - 1].fe_flags |=
@@ -1980,10 +1987,12 @@
/* XXX This is another one of those bits that will need to
* change if we ever actually support nested LOVs. It uses
- * the lock's export to find out which stripe it is. */
+ * the lock's export to find out which stripe it is.
+ */
/* XXX - it's assumed all the locks for deleted OSTs have
* been cancelled. Also, the export for deleted OSTs will
- * be NULL and won't match the lock's export. */
+ * be NULL and won't match the lock's export.
+ */
for (i = 0; i < lsm->lsm_stripe_count; i++) {
loi = lsm->lsm_oinfo[i];
if (lov_oinfo_is_dummy(loi))
@@ -2072,7 +2081,7 @@
unsigned next_id = 0, mds_con = 0;
incr = check_uuid = do_inactive = no_set = 0;
- if (set == NULL) {
+ if (!set) {
no_set = 1;
set = ptlrpc_prep_set();
if (!set)
@@ -2095,7 +2104,7 @@
} else if (KEY_IS(KEY_MDS_CONN)) {
mds_con = 1;
} else if (KEY_IS(KEY_CACHE_SET)) {
- LASSERT(lov->lov_cache == NULL);
+ LASSERT(!lov->lov_cache);
lov->lov_cache = val;
do_inactive = 1;
}
@@ -2319,7 +2328,8 @@
/* print an address of _any_ initialized kernel symbol from this
* module, to allow debugging with gdb that doesn't support data
- * symbols from modules.*/
+ * symbols from modules.
+ */
CDEBUG(D_INFO, "Lustre LOV module (%p).\n", &lov_caches);
rc = lu_kmem_init(lov_caches);
@@ -2329,7 +2339,7 @@
lov_oinfo_slab = kmem_cache_create("lov_oinfo",
sizeof(struct lov_oinfo),
0, SLAB_HWCACHE_ALIGN, NULL);
- if (lov_oinfo_slab == NULL) {
+ if (!lov_oinfo_slab) {
lu_kmem_fini(lov_caches);
return -ENOMEM;
}
diff --git a/drivers/staging/lustre/lustre/lov/lov_object.c b/drivers/staging/lustre/lustre/lov/lov_object.c
index 3b79ebc..4101696 100644
--- a/drivers/staging/lustre/lustre/lov/lov_object.c
+++ b/drivers/staging/lustre/lustre/lov/lov_object.c
@@ -135,7 +135,8 @@
* Do not leave the object in cache to avoid accessing
* freed memory. This is because osc_object is referring to
* lov_oinfo of lsm_stripe_data which will be freed due to
- * this failure. */
+ * this failure.
+ */
cl_object_kill(env, stripe);
cl_object_put(env, stripe);
return -EIO;
@@ -154,7 +155,7 @@
/* reuse ->coh_attr_guard to protect coh_parent change */
spin_lock(&subhdr->coh_attr_guard);
parent = subhdr->coh_parent;
- if (parent == NULL) {
+ if (!parent) {
subhdr->coh_parent = hdr;
spin_unlock(&subhdr->coh_attr_guard);
subhdr->coh_nesting = hdr->coh_nesting + 1;
@@ -170,11 +171,12 @@
spin_unlock(&subhdr->coh_attr_guard);
old_obj = lu_object_locate(&parent->coh_lu, &lov_device_type);
- LASSERT(old_obj != NULL);
+ LASSERT(old_obj);
old_lov = cl2lov(lu2cl(old_obj));
if (old_lov->lo_layout_invalid) {
/* the object's layout has already changed but isn't
- * refreshed */
+ * refreshed
+ */
lu_object_unhash(env, &stripe->co_lu);
result = -EAGAIN;
} else {
@@ -212,14 +214,14 @@
LOV_MAGIC_V1, LOV_MAGIC_V3, lsm->lsm_magic);
}
- LASSERT(lov->lo_lsm == NULL);
+ LASSERT(!lov->lo_lsm);
lov->lo_lsm = lsm_addref(lsm);
r0->lo_nr = lsm->lsm_stripe_count;
LASSERT(r0->lo_nr <= lov_targets_nr(dev));
r0->lo_sub = libcfs_kvzalloc(r0->lo_nr * sizeof(r0->lo_sub[0]),
GFP_NOFS);
- if (r0->lo_sub != NULL) {
+ if (r0->lo_sub) {
result = 0;
subconf->coc_inode = conf->coc_inode;
spin_lock_init(&r0->lo_sub_lock);
@@ -241,9 +243,10 @@
subdev = lovsub2cl_dev(dev->ld_target[ost_idx]);
subconf->u.coc_oinfo = oinfo;
- LASSERTF(subdev != NULL, "not init ost %d\n", ost_idx);
+ LASSERTF(subdev, "not init ost %d\n", ost_idx);
/* In the function below, .hs_keycmp resolves to
- * lu_obj_hop_keycmp() */
+ * lu_obj_hop_keycmp()
+ */
/* coverity[overrun-buffer-val] */
stripe = lov_sub_find(env, subdev, ofid, subconf);
if (!IS_ERR(stripe)) {
@@ -269,9 +272,9 @@
{
struct lov_stripe_md *lsm = conf->u.coc_md->lsm;
- LASSERT(lsm != NULL);
+ LASSERT(lsm);
LASSERT(lsm_is_released(lsm));
- LASSERT(lov->lo_lsm == NULL);
+ LASSERT(!lov->lo_lsm);
lov->lo_lsm = lsm_addref(lsm);
return 0;
@@ -310,7 +313,8 @@
cl_object_put(env, sub);
/* ... wait until it is actually destroyed---sub-object clears its
- * ->lo_sub[] slot in lovsub_object_fini() */
+ * ->lo_sub[] slot in lovsub_object_fini()
+ */
if (r0->lo_sub[idx] == los) {
waiter = &lov_env_info(env)->lti_waiter;
init_waitqueue_entry(waiter, current);
@@ -318,7 +322,8 @@
set_current_state(TASK_UNINTERRUPTIBLE);
while (1) {
/* this wait-queue is signaled at the end of
- * lu_object_free(). */
+ * lu_object_free().
+ */
set_current_state(TASK_UNINTERRUPTIBLE);
spin_lock(&r0->lo_sub_lock);
if (r0->lo_sub[idx] == los) {
@@ -332,7 +337,7 @@
}
remove_wait_queue(&bkt->lsb_marche_funebre, waiter);
}
- LASSERT(r0->lo_sub[idx] == NULL);
+ LASSERT(!r0->lo_sub[idx]);
}
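The two hunks above sit inside the classic open-coded wait loop: put a
waiter on the queue, mark the task TASK_UNINTERRUPTIBLE, re-check the
condition under the spinlock, and schedule() until lu_object_free()
signals the queue. A condensed sketch of the pattern, assuming a
waitq/lock/condition triple like the one used here:

    DECLARE_WAITQUEUE(waiter, current);

    add_wait_queue(&waitq, &waiter);
    for (;;) {
        set_current_state(TASK_UNINTERRUPTIBLE);
        spin_lock(&lock);
        if (!condition) {        /* e.g. the slot was cleared */
            spin_unlock(&lock);
            break;
        }
        spin_unlock(&lock);
        schedule();              /* woken when the object is freed */
    }
    __set_current_state(TASK_RUNNING);
    remove_wait_queue(&waitq, &waiter);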
static int lov_delete_raid0(const struct lu_env *env, struct lov_object *lov,
@@ -345,11 +350,11 @@
dump_lsm(D_INODE, lsm);
lov_layout_wait(env, lov);
- if (r0->lo_sub != NULL) {
+ if (r0->lo_sub) {
for (i = 0; i < r0->lo_nr; ++i) {
struct lovsub_object *los = r0->lo_sub[i];
- if (los != NULL) {
+ if (los) {
cl_locks_prune(env, &los->lso_cl, 1);
/*
* If top-level object is to be evicted from
@@ -374,7 +379,7 @@
{
struct lov_layout_raid0 *r0 = &state->raid0;
- if (r0->lo_sub != NULL) {
+ if (r0->lo_sub) {
kvfree(r0->lo_sub);
r0->lo_sub = NULL;
}
@@ -412,7 +417,7 @@
for (i = 0; i < r0->lo_nr; ++i) {
struct lu_object *sub;
- if (r0->lo_sub[i] != NULL) {
+ if (r0->lo_sub[i]) {
sub = lovsub2lu(r0->lo_sub[i]);
lu_object_print(env, cookie, p, sub);
} else {
@@ -465,7 +470,8 @@
* context, and this function is called in ccc_lock_state(), it will
* hit this assertion.
* Anyway, it's still okay to call attr_get w/o type guard as layout
- * can't go if locks exist. */
+ * can't go if locks exist.
+ */
/* LASSERT(atomic_read(&lsm->lsm_refc) > 1); */
if (!r0->lo_attr_valid) {
@@ -475,7 +481,8 @@
memset(lvb, 0, sizeof(*lvb));
/* XXX: timestamps can be negative by sanity:test_39m,
- * how can it be? */
+ * how can it be?
+ */
lvb->lvb_atime = LLONG_MIN;
lvb->lvb_ctime = LLONG_MIN;
lvb->lvb_mtime = LLONG_MIN;
@@ -569,7 +576,7 @@
*/
static enum lov_layout_type lov_type(struct lov_stripe_md *lsm)
{
- if (lsm == NULL)
+ if (!lsm)
return LLT_EMPTY;
if (lsm_is_released(lsm))
return LLT_RELEASED;
@@ -624,7 +631,7 @@
{
LASSERT(lov->lo_owner != current);
down_write(&lov->lo_type_guard);
- LASSERT(lov->lo_owner == NULL);
+ LASSERT(!lov->lo_owner);
lov->lo_owner = current;
}
@@ -666,7 +673,7 @@
LASSERT(0 <= lov->lo_type && lov->lo_type < ARRAY_SIZE(lov_dispatch));
- if (conf->u.coc_md != NULL)
+ if (conf->u.coc_md)
llt = lov_type(conf->u.coc_md->lsm);
LASSERT(0 <= llt && llt < ARRAY_SIZE(lov_dispatch));
@@ -689,7 +696,7 @@
old_ops->llo_fini(env, lov, &lov->u);
LASSERT(atomic_read(&lov->lo_active_ios) == 0);
- LASSERT(hdr->coh_tree.rnode == NULL);
+ LASSERT(!hdr->coh_tree.rnode);
LASSERT(hdr->coh_pages == 0);
lov->lo_type = LLT_EMPTY;
@@ -767,10 +774,10 @@
LASSERT(conf->coc_opc == OBJECT_CONF_SET);
- if (conf->u.coc_md != NULL)
+ if (conf->u.coc_md)
lsm = conf->u.coc_md->lsm;
- if ((lsm == NULL && lov->lo_lsm == NULL) ||
- ((lsm != NULL && lov->lo_lsm != NULL) &&
+ if ((!lsm && !lov->lo_lsm) ||
+ ((lsm && lov->lo_lsm) &&
(lov->lo_lsm->lsm_layout_gen == lsm->lsm_layout_gen) &&
(lov->lo_lsm->lsm_pattern == lsm->lsm_pattern))) {
/* same version of layout */
@@ -845,7 +852,8 @@
struct cl_attr *attr)
{
/* do not take lock, as this function is called under a
- * spin-lock. Layout is protected from changing by ongoing IO. */
+ * spin-lock. Layout is protected from changing by ongoing IO.
+ */
return LOV_2DISPATCH_NOLOCK(cl2lov(obj), llo_getattr, env, obj, attr);
}
@@ -892,7 +900,7 @@
struct lu_object *obj;
lov = kmem_cache_alloc(lov_object_kmem, GFP_NOFS | __GFP_ZERO);
- if (lov != NULL) {
+ if (lov) {
obj = lov2lu(lov);
lu_object_init(obj, NULL, dev);
lov->lo_cl.co_ops = &lov_ops;
@@ -913,7 +921,7 @@
struct lov_stripe_md *lsm = NULL;
lov_conf_freeze(lov);
- if (lov->lo_lsm != NULL) {
+ if (lov->lo_lsm) {
lsm = lsm_addref(lov->lo_lsm);
CDEBUG(D_INODE, "lsm %p addref %d/%d by %p.\n",
lsm, atomic_read(&lsm->lsm_refc),
@@ -928,12 +936,12 @@
struct lu_object *luobj;
struct lov_stripe_md *lsm = NULL;
- if (clobj == NULL)
+ if (!clobj)
return NULL;
luobj = lu_object_locate(&cl_object_header(clobj)->coh_lu,
&lov_device_type);
- if (luobj != NULL)
+ if (luobj)
lsm = lov_lsm_addref(lu2lov(luobj));
return lsm;
}
@@ -941,7 +949,7 @@
void lov_lsm_put(struct cl_object *unused, struct lov_stripe_md *lsm)
{
- if (lsm != NULL)
+ if (lsm)
lov_free_memmd(&lsm);
}
EXPORT_SYMBOL(lov_lsm_put);
@@ -953,7 +961,7 @@
luobj = lu_object_locate(&cl_object_header(clob)->coh_lu,
&lov_device_type);
- if (luobj != NULL) {
+ if (luobj) {
struct lov_object *lov = lu2lov(luobj);
lov_conf_freeze(lov);
@@ -963,7 +971,6 @@
int i;
lsm = lov->lo_lsm;
- LASSERT(lsm != NULL);
for (i = 0; i < lsm->lsm_stripe_count; i++) {
struct lov_oinfo *loi = lsm->lsm_oinfo[i];
diff --git a/drivers/staging/lustre/lustre/lov/lov_offset.c b/drivers/staging/lustre/lustre/lov/lov_offset.c
index aa520aa..1ceea3d 100644
--- a/drivers/staging/lustre/lustre/lov/lov_offset.c
+++ b/drivers/staging/lustre/lustre/lov/lov_offset.c
@@ -55,7 +55,6 @@
if (ost_size == 0)
return 0;
- LASSERT(lsm_op_find(magic) != NULL);
lsm_op_find(magic)->lsm_stripe_by_index(lsm, &stripeno, NULL, &swidth);
/* lov_do_div64(a, b) returns a % b, and a = a / b */
@@ -115,7 +114,8 @@
* this function returns < 0 when the offset was "before" the stripe and
* was moved forward to the start of the stripe in question; 0 when it
* falls in the stripe and no shifting was done; > 0 when the offset
- * was outside the stripe and was pulled back to its final byte. */
+ * was outside the stripe and was pulled back to its final byte.
+ */
int lov_stripe_offset(struct lov_stripe_md *lsm, u64 lov_off,
int stripeno, u64 *obdoff)
{
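A worked illustration of the mapping lov_stripe_offset() implements, with
made-up numbers rather than anything from the source: take a stripe size
of 1 MiB and 3 stripes, so the stripe width is 3 MiB. A file (lov) offset
then splits into a stripe index and an object-local (obd) offset roughly
as:

    u64 stripe_size = 1048576;                /* 1 MiB, hypothetical */
    u64 swidth = 3 * stripe_size;             /* 3 MiB stripe width */
    u64 lov_off = 7 * 1048576 + 42;           /* 7 MiB + 42 */

    u64 in_width = lov_off % swidth;          /* 1 MiB + 42 */
    int stripeno = in_width / stripe_size;    /* stripe 1 */
    u64 obd_off = (lov_off / swidth) * stripe_size  /* 2 full widths */
                + in_width % stripe_size;     /* obd offset 2 MiB + 42 */

The real helper additionally handles the "before"/"after" shifting
described in the comment above and the 32-bit division constraints
handled by lov_do_div64().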
@@ -129,8 +129,6 @@
return 0;
}
- LASSERT(lsm_op_find(magic) != NULL);
-
lsm_op_find(magic)->lsm_stripe_by_index(lsm, &stripeno, &lov_off,
&swidth);
@@ -183,7 +181,6 @@
if (file_size == OBD_OBJECT_EOF)
return OBD_OBJECT_EOF;
- LASSERT(lsm_op_find(magic) != NULL);
lsm_op_find(magic)->lsm_stripe_by_index(lsm, &stripeno, &file_size,
&swidth);
@@ -213,7 +210,8 @@
/* given an extent in an lov and a stripe, calculate the extent of the stripe
* that is contained within the lov extent. this returns true if the given
- * stripe does intersect with the lov extent. */
+ * stripe does intersect with the lov extent.
+ */
int lov_stripe_intersects(struct lov_stripe_md *lsm, int stripeno,
u64 start, u64 end, u64 *obd_start, u64 *obd_end)
{
@@ -227,7 +225,8 @@
/* this stripe doesn't intersect the file extent when neither
* start or the end intersected the stripe and obd_start and
- * obd_end got rounded up to the save value. */
+ * obd_end got rounded up to the same value.
+ */
if (start_side != 0 && end_side != 0 && *obd_start == *obd_end)
return 0;
@@ -238,7 +237,8 @@
* in the wrong direction and touch it up.
* interestingly, this can't underflow since end must be > start
* if we passed through the previous check.
- * (should we assert for that somewhere?) */
+ * (should we assert for that somewhere?)
+ */
if (end_side != 0)
(*obd_end)--;
@@ -252,7 +252,6 @@
u64 stripe_off, swidth;
int magic = lsm->lsm_magic;
- LASSERT(lsm_op_find(magic) != NULL);
lsm_op_find(magic)->lsm_stripe_by_offset(lsm, NULL, &lov_off, &swidth);
stripe_off = lov_do_div64(lov_off, swidth);
diff --git a/drivers/staging/lustre/lustre/lov/lov_pack.c b/drivers/staging/lustre/lustre/lov/lov_pack.c
index a78211f..9235d7d 100644
--- a/drivers/staging/lustre/lustre/lov/lov_pack.c
+++ b/drivers/staging/lustre/lustre/lov/lov_pack.c
@@ -141,7 +141,8 @@
if (lsm) {
/* If we are just sizing the EA, limit the stripe count
- * to the actual number of OSTs in this filesystem. */
+ * to the actual number of OSTs in this filesystem.
+ */
if (!lmmp) {
stripe_count = lov_get_stripecnt(lov, lmm_magic,
lsm->lsm_stripe_count);
@@ -155,7 +156,8 @@
/* No need to allocate more than maximum supported stripes.
* Anyway, this is pretty inaccurate since ld_tgt_count now
* represents max index and we should rely on the actual number
- * of OSTs instead */
+ * of OSTs instead
+ */
stripe_count = lov_mds_md_max_stripe_count(
lov->lov_ocd.ocd_max_easize, lmm_magic);
@@ -183,7 +185,7 @@
return -ENOMEM;
}
- CDEBUG(D_INFO, "lov_packmd: LOV_MAGIC 0x%08X, lmm_size = %d \n",
+ CDEBUG(D_INFO, "lov_packmd: LOV_MAGIC 0x%08X, lmm_size = %d\n",
lmm_magic, lmm_size);
lmmv1 = *lmmp;
@@ -241,7 +243,8 @@
stripe_count = 1;
/* stripe count is based on whether ldiskfs can handle
- * larger EA sizes */
+ * larger EA sizes
+ */
if (lov->lov_ocd.ocd_connect_flags & OBD_CONNECT_MAX_EASIZE &&
lov->lov_ocd.ocd_max_easize)
max_stripes = lov_mds_md_max_stripe_count(
@@ -257,7 +260,7 @@
{
int rc;
- if (lsm_op_find(le32_to_cpu(*(__u32 *)lmm)) == NULL) {
+ if (!lsm_op_find(le32_to_cpu(*(__u32 *)lmm))) {
CERROR("bad disk LOV MAGIC: 0x%08X; dumping LMM (size=%d):\n",
le32_to_cpu(*(__u32 *)lmm), lmm_bytes);
CERROR("%*phN\n", lmm_bytes, lmm);
@@ -306,10 +309,9 @@
*lsmp = NULL;
LASSERT(atomic_read(&lsm->lsm_refc) > 0);
refc = atomic_dec_return(&lsm->lsm_refc);
- if (refc == 0) {
- LASSERT(lsm_op_find(lsm->lsm_magic) != NULL);
+ if (refc == 0)
lsm_op_find(lsm->lsm_magic)->lsm_free(lsm);
- }
+
return refc;
}
@@ -359,7 +361,6 @@
if (!lmm)
return lsm_size;
- LASSERT(lsm_op_find(magic) != NULL);
rc = lsm_op_find(magic)->lsm_unpackmd(lov, *lsmp, lmm);
if (rc) {
lov_free_memmd(lsmp);
@@ -399,13 +400,15 @@
set_fs(KERNEL_DS);
/* we only need the header part from user space to get lmm_magic and
- * lmm_stripe_count, (the header part is common to v1 and v3) */
+ * lmm_stripe_count (the header part is common to v1 and v3)
+ */
lum_size = sizeof(struct lov_user_md_v1);
if (copy_from_user(&lum, lump, lum_size)) {
rc = -EFAULT;
goto out_set;
- } else if ((lum.lmm_magic != LOV_USER_MAGIC) &&
- (lum.lmm_magic != LOV_USER_MAGIC_V3)) {
+ }
+ if ((lum.lmm_magic != LOV_USER_MAGIC) &&
+ (lum.lmm_magic != LOV_USER_MAGIC_V3)) {
rc = -EINVAL;
goto out_set;
}
diff --git a/drivers/staging/lustre/lustre/lov/lov_page.c b/drivers/staging/lustre/lustre/lov/lov_page.c
index 037ae91..17ae450 100644
--- a/drivers/staging/lustre/lustre/lov/lov_page.c
+++ b/drivers/staging/lustre/lustre/lov/lov_page.c
@@ -57,7 +57,7 @@
const struct cl_page *page = slice->cpl_page;
const struct cl_page *sub = lov_sub_page(slice);
- return ergo(sub != NULL,
+ return ergo(sub,
page->cp_child == sub &&
sub->cp_parent == page &&
page->cp_state == sub->cp_state);
@@ -70,7 +70,7 @@
LINVRNT(lov_page_invariant(slice));
- if (sub != NULL) {
+ if (sub) {
LASSERT(sub->cp_state == CPS_FREEING);
lu_ref_del(&sub->cp_reference, "lov", sub->cp_parent);
sub->cp_parent = NULL;
@@ -151,7 +151,7 @@
static void lov_empty_page_fini(const struct lu_env *env,
struct cl_page_slice *slice)
{
- LASSERT(slice->cpl_page->cp_child == NULL);
+ LASSERT(!slice->cpl_page->cp_child);
}
int lov_page_init_raid0(const struct lu_env *env, struct cl_object *obj,
diff --git a/drivers/staging/lustre/lustre/lov/lov_pool.c b/drivers/staging/lustre/lustre/lov/lov_pool.c
index 3ee1d40..210304f 100644
--- a/drivers/staging/lustre/lustre/lov/lov_pool.c
+++ b/drivers/staging/lustre/lustre/lov/lov_pool.c
@@ -64,7 +64,7 @@
if (atomic_dec_and_test(&pool->pool_refcount)) {
LASSERT(hlist_unhashed(&pool->pool_hash));
LASSERT(list_empty(&pool->pool_list));
- LASSERT(pool->pool_debugfs_entry == NULL);
+ LASSERT(!pool->pool_debugfs_entry);
lov_ost_pool_free(&(pool->pool_rr.lqr_pool));
lov_ost_pool_free(&(pool->pool_obds));
kfree(pool);
@@ -154,7 +154,7 @@
/* ifdef needed for liblustre support */
/*
- * pool /proc seq_file methods
+ * pool debugfs seq_file methods
*/
/*
* iterator is used to go through the target pool entries
@@ -204,7 +204,8 @@
if ((pool_tgt_count(pool) == 0) ||
(*pos >= pool_tgt_count(pool))) {
/* iter is not created, so stop() has no way to
- * find pool to dec ref */
+ * find pool to dec ref
+ */
lov_pool_putref(pool);
return NULL;
}
@@ -217,7 +218,8 @@
iter->idx = 0;
/* we use seq_file private field to memorized iterator so
- * we can free it at stop() */
+ * we can free it at stop()
+ */
/* /!\ do not forget to restore it to pool before freeing it */
s->private = iter;
if (*pos > 0) {
@@ -226,8 +228,8 @@
i = 0;
do {
- ptr = pool_proc_next(s, &iter, &i);
- } while ((i < *pos) && (ptr != NULL));
+ ptr = pool_proc_next(s, &iter, &i);
+ } while ((i < *pos) && ptr);
return ptr;
}
return iter;
@@ -239,10 +241,12 @@
/* in some cases stop() method is called 2 times, without
* calling start() method (see seq_read() from fs/seq_file.c)
- * we have to free only if s->private is an iterator */
+ * we have to free only if s->private is an iterator
+ */
if ((iter) && (iter->magic == POOL_IT_MAGIC)) {
/* we restore s->private so next call to pool_proc_start()
- * will work */
+ * will work
+ */
s->private = iter->pool;
lov_pool_putref(iter->pool);
kfree(iter);
@@ -255,7 +259,7 @@
struct lov_tgt_desc *tgt;
LASSERTF(iter->magic == POOL_IT_MAGIC, "%08X", iter->magic);
- LASSERT(iter->pool != NULL);
+ LASSERT(iter->pool);
LASSERT(iter->idx <= pool_tgt_count(iter->pool));
down_read(&pool_tgt_rw_sem(iter->pool));
@@ -304,7 +308,7 @@
init_rwsem(&op->op_rw_sem);
op->op_size = count;
op->op_array = kcalloc(op->op_size, sizeof(op->op_array[0]), GFP_NOFS);
- if (op->op_array == NULL) {
+ if (!op->op_array) {
op->op_size = 0;
return -ENOMEM;
}
@@ -324,7 +328,7 @@
new_size = max(min_count, 2 * op->op_size);
new = kcalloc(new_size, sizeof(op->op_array[0]), GFP_NOFS);
- if (new == NULL)
+ if (!new)
return -ENOMEM;
/* copy old array to new one */
@@ -429,7 +433,7 @@
INIT_HLIST_NODE(&new_pool->pool_hash);
/* we need this assert seq_file is not implemented for liblustre */
- /* get ref for /proc file */
+ /* get ref for debugfs file */
lov_pool_getref(new_pool);
new_pool->pool_debugfs_entry = ldebugfs_add_simple(
lov->lov_pool_debugfs_entry,
@@ -486,7 +490,7 @@
/* lookup and kill hash reference */
pool = cfs_hash_del_key(lov->lov_pools_hash_body, poolname);
- if (pool == NULL)
+ if (!pool)
return -ENOENT;
if (!IS_ERR_OR_NULL(pool->pool_debugfs_entry)) {
@@ -517,7 +521,7 @@
lov = &(obd->u.lov);
pool = cfs_hash_lookup(lov->lov_pools_hash_body, poolname);
- if (pool == NULL)
+ if (!pool)
return -ENOENT;
obd_str2uuid(&ost_uuid, ostname);
@@ -563,7 +567,7 @@
lov = &(obd->u.lov);
pool = cfs_hash_lookup(lov->lov_pools_hash_body, poolname);
- if (pool == NULL)
+ if (!pool)
return -ENOENT;
obd_str2uuid(&ost_uuid, ostname);
@@ -631,10 +635,10 @@
pool = NULL;
if (poolname[0] != '\0') {
pool = cfs_hash_lookup(lov->lov_pools_hash_body, poolname);
- if (pool == NULL)
+ if (!pool)
CWARN("Request for an unknown pool ("LOV_POOLNAMEF")\n",
poolname);
- if ((pool != NULL) && (pool_tgt_count(pool) == 0)) {
+ if (pool && (pool_tgt_count(pool) == 0)) {
CWARN("Request for an empty pool ("LOV_POOLNAMEF")\n",
poolname);
/* pool is ignored, so we remove ref on it */
diff --git a/drivers/staging/lustre/lustre/lov/lov_request.c b/drivers/staging/lustre/lustre/lov/lov_request.c
index 42deda7..42660f2 100644
--- a/drivers/staging/lustre/lustre/lov/lov_request.c
+++ b/drivers/staging/lustre/lustre/lov/lov_request.c
@@ -156,7 +156,7 @@
tgt = lov->lov_tgts[ost_idx];
- if (unlikely(tgt == NULL)) {
+ if (unlikely(!tgt)) {
rc = 0;
goto out;
}
@@ -178,7 +178,7 @@
cfs_time_seconds(1), NULL, NULL);
rc = l_wait_event(waitq, lov_check_set(lov, ost_idx), &lwi);
- if (tgt != NULL && tgt->ltd_active)
+ if (tgt && tgt->ltd_active)
return 1;
return 0;
@@ -190,28 +190,23 @@
static int common_attr_done(struct lov_request_set *set)
{
- struct list_head *pos;
struct lov_request *req;
struct obdo *tmp_oa;
int rc = 0, attrset = 0;
- LASSERT(set->set_oi != NULL);
-
- if (set->set_oi->oi_oa == NULL)
+ if (!set->set_oi->oi_oa)
return 0;
if (!atomic_read(&set->set_success))
return -EIO;
tmp_oa = kmem_cache_alloc(obdo_cachep, GFP_NOFS | __GFP_ZERO);
- if (tmp_oa == NULL) {
+ if (!tmp_oa) {
rc = -ENOMEM;
goto out;
}
- list_for_each(pos, &set->set_list) {
- req = list_entry(pos, struct lov_request, rq_link);
-
+ list_for_each_entry(req, &set->set_list, rq_link) {
if (!req->rq_complete || req->rq_rc)
continue;
if (req->rq_oi.oi_oa->o_valid == 0) /* inactive stripe */
@@ -227,7 +222,8 @@
if ((set->set_oi->oi_oa->o_valid & OBD_MD_FLEPOCH) &&
(set->set_oi->oi_md->lsm_stripe_count != attrset)) {
/* When we take attributes of some epoch, we require all the
- * ost to be active. */
+ * ost to be active.
+ */
CERROR("Not all the stripes had valid attrs\n");
rc = -EIO;
goto out;
@@ -246,7 +242,7 @@
{
int rc = 0;
- if (set == NULL)
+ if (!set)
return 0;
LASSERT(set->set_exp);
if (atomic_read(&set->set_completes))
@@ -258,7 +254,8 @@
}
/* The callback for osc_getattr_async that finalizes a request info when a
- * response is received. */
+ * response is received.
+ */
static int cb_getattr_update(void *cookie, int rc)
{
struct obd_info *oinfo = cookie;
@@ -312,7 +309,7 @@
req->rq_oi.oi_oa = kmem_cache_alloc(obdo_cachep,
GFP_NOFS | __GFP_ZERO);
- if (req->rq_oi.oi_oa == NULL) {
+ if (!req->rq_oi.oi_oa) {
kfree(req);
rc = -ENOMEM;
goto out_set;
@@ -337,7 +334,7 @@
int lov_fini_destroy_set(struct lov_request_set *set)
{
- if (set == NULL)
+ if (!set)
return 0;
LASSERT(set->set_exp);
if (atomic_read(&set->set_completes)) {
@@ -368,7 +365,7 @@
set->set_oi->oi_md = lsm;
set->set_oi->oi_oa = src_oa;
set->set_oti = oti;
- if (oti != NULL && src_oa->o_valid & OBD_MD_FLCOOKIE)
+ if (oti && src_oa->o_valid & OBD_MD_FLCOOKIE)
set->set_cookies = oti->oti_logcookies;
for (i = 0; i < lsm->lsm_stripe_count; i++) {
@@ -395,7 +392,7 @@
req->rq_oi.oi_oa = kmem_cache_alloc(obdo_cachep,
GFP_NOFS | __GFP_ZERO);
- if (req->rq_oi.oi_oa == NULL) {
+ if (!req->rq_oi.oi_oa) {
kfree(req);
rc = -ENOMEM;
goto out_set;
@@ -419,7 +416,7 @@
{
int rc = 0;
- if (set == NULL)
+ if (!set)
return 0;
LASSERT(set->set_exp);
if (atomic_read(&set->set_completes)) {
@@ -460,7 +457,8 @@
}
/* The callback for osc_setattr_async that finalizes a request info when a
- * response is received. */
+ * response is received.
+ */
static int cb_setattr_update(void *cookie, int rc)
{
struct obd_info *oinfo = cookie;
@@ -486,7 +484,7 @@
set->set_exp = exp;
set->set_oti = oti;
set->set_oi = oinfo;
- if (oti != NULL && oinfo->oi_oa->o_valid & OBD_MD_FLCOOKIE)
+ if (oti && oinfo->oi_oa->o_valid & OBD_MD_FLCOOKIE)
set->set_cookies = oti->oti_logcookies;
for (i = 0; i < oinfo->oi_md->lsm_stripe_count; i++) {
@@ -511,7 +509,7 @@
req->rq_oi.oi_oa = kmem_cache_alloc(obdo_cachep,
GFP_NOFS | __GFP_ZERO);
- if (req->rq_oi.oi_oa == NULL) {
+ if (!req->rq_oi.oi_oa) {
kfree(req);
rc = -ENOMEM;
goto out_set;
@@ -581,7 +579,7 @@
{
int rc = 0;
- if (set == NULL)
+ if (!set)
return 0;
if (atomic_read(&set->set_completes)) {
@@ -648,7 +646,8 @@
}
/* The callback for osc_statfs_async that finalizes a request info when a
- * response is received. */
+ * response is received.
+ */
static int cb_statfs_update(void *cookie, int rc)
{
struct obd_info *oinfo = cookie;
@@ -668,7 +667,8 @@
lov_sfs = oinfo->oi_osfs;
success = atomic_read(&set->set_success);
/* XXX: the same is done in lov_update_common_set, however
- lovset->set_exp is not initialized. */
+ * lovset->set_exp is not initialized.
+ */
lov_update_set(set, lovreq, rc);
if (rc)
goto out;
@@ -718,7 +718,7 @@
for (i = 0; i < lov->desc.ld_tgt_count; i++) {
struct lov_request *req;
- if (lov->lov_tgts[i] == NULL ||
+ if (!lov->lov_tgts[i] ||
(!lov_check_and_wait_active(lov, i) &&
(oinfo->oi_flags & OBD_STATFS_NODELAY))) {
CDEBUG(D_HA, "lov idx %d inactive\n", i);
@@ -726,7 +726,8 @@
}
/* skip targets that have been explicitly disabled by the
- * administrator */
+ * administrator
+ */
if (!lov->lov_tgts[i]->ltd_exp) {
CDEBUG(D_HA, "lov idx %d administratively disabled\n", i);
continue;
diff --git a/drivers/staging/lustre/lustre/lov/lovsub_dev.c b/drivers/staging/lustre/lustre/lov/lovsub_dev.c
index f1795c3..233740c 100644
--- a/drivers/staging/lustre/lustre/lov/lovsub_dev.c
+++ b/drivers/staging/lustre/lustre/lov/lovsub_dev.c
@@ -101,7 +101,6 @@
next->ld_site = d->ld_site;
ldt = next->ld_type;
- LASSERT(ldt != NULL);
rc = ldt->ldt_ops->ldto_device_init(env, next, ldt->ldt_name, NULL);
if (rc) {
next->ld_site = NULL;
@@ -149,7 +148,7 @@
int result;
lsr = kmem_cache_alloc(lovsub_req_kmem, GFP_NOFS | __GFP_ZERO);
- if (lsr != NULL) {
+ if (lsr) {
cl_req_slice_add(req, &lsr->lsrq_cl, dev, &lovsub_req_ops);
result = 0;
} else
@@ -175,7 +174,7 @@
struct lovsub_device *lsd;
lsd = kzalloc(sizeof(*lsd), GFP_NOFS);
- if (lsd != NULL) {
+ if (lsd) {
int result;
result = cl_device_init(&lsd->acid_cl, t);
diff --git a/drivers/staging/lustre/lustre/lov/lovsub_lock.c b/drivers/staging/lustre/lustre/lov/lovsub_lock.c
index 1a3e30a..0046812 100644
--- a/drivers/staging/lustre/lustre/lov/lovsub_lock.c
+++ b/drivers/staging/lustre/lustre/lov/lovsub_lock.c
@@ -148,7 +148,8 @@
{
pgoff_t size; /* stripe size in pages */
pgoff_t skip; /* how many pages in every stripe are occupied by
- * "other" stripes */
+ * "other" stripes
+ */
pgoff_t start;
pgoff_t end;
@@ -284,7 +285,8 @@
switch (parent->cll_state) {
case CLS_ENQUEUED:
/* See LU-1355 for the case that a glimpse lock is
- * interrupted by signal */
+ * interrupted by signal
+ */
LASSERT(parent->cll_flags & CLF_CANCELLED);
break;
case CLS_QUEUING:
@@ -429,7 +431,7 @@
list_for_each_entry(scan, &sub->lss_parents, lll_list) {
lov = scan->lll_super;
(*p)(env, cookie, "[%d %p ", scan->lll_idx, lov);
- if (lov != NULL)
+ if (lov)
cl_lock_descr_print(env, cookie, p,
&lov->lls_cl.cls_lock->cll_descr);
(*p)(env, cookie, "] ");
@@ -454,7 +456,7 @@
int result;
lsk = kmem_cache_alloc(lovsub_lock_kmem, GFP_NOFS | __GFP_ZERO);
- if (lsk != NULL) {
+ if (lsk) {
INIT_LIST_HEAD(&lsk->lss_parents);
cl_lock_slice_add(lock, &lsk->lss_cl, obj, &lovsub_lock_ops);
result = 0;
diff --git a/drivers/staging/lustre/lustre/lov/lovsub_object.c b/drivers/staging/lustre/lustre/lov/lovsub_object.c
index 5ba5ee1..0a9bf51 100644
--- a/drivers/staging/lustre/lustre/lov/lovsub_object.c
+++ b/drivers/staging/lustre/lustre/lov/lovsub_object.c
@@ -63,7 +63,7 @@
under = &dev->acid_next->cd_lu_dev;
below = under->ld_ops->ldo_object_alloc(env, obj->lo_header, under);
- if (below != NULL) {
+ if (below) {
lu_object_add(obj, below);
cl_object_page_init(lu2cl(obj), sizeof(struct lovsub_page));
result = 0;
@@ -144,7 +144,7 @@
struct lu_object *obj;
los = kmem_cache_alloc(lovsub_object_kmem, GFP_NOFS | __GFP_ZERO);
- if (los != NULL) {
+ if (los) {
struct cl_object_header *hdr;
obj = lovsub2lu(los);
diff --git a/drivers/staging/lustre/lustre/lov/lproc_lov.c b/drivers/staging/lustre/lustre/lov/lproc_lov.c
index 337241d..bb81d04 100644
--- a/drivers/staging/lustre/lustre/lov/lproc_lov.c
+++ b/drivers/staging/lustre/lustre/lov/lproc_lov.c
@@ -46,7 +46,7 @@
struct obd_device *dev = (struct obd_device *)m->private;
struct lov_desc *desc;
- LASSERT(dev != NULL);
+ LASSERT(dev);
desc = &dev->u.lov.desc;
seq_printf(m, "%llu\n", desc->ld_default_stripe_size);
return 0;
@@ -61,7 +61,7 @@
__u64 val;
int rc;
- LASSERT(dev != NULL);
+ LASSERT(dev);
desc = &dev->u.lov.desc;
rc = lprocfs_write_u64_helper(buffer, count, &val);
if (rc)
@@ -79,7 +79,7 @@
struct obd_device *dev = (struct obd_device *)m->private;
struct lov_desc *desc;
- LASSERT(dev != NULL);
+ LASSERT(dev);
desc = &dev->u.lov.desc;
seq_printf(m, "%llu\n", desc->ld_default_stripe_offset);
return 0;
@@ -94,7 +94,7 @@
__u64 val;
int rc;
- LASSERT(dev != NULL);
+ LASSERT(dev);
desc = &dev->u.lov.desc;
rc = lprocfs_write_u64_helper(buffer, count, &val);
if (rc)
@@ -111,7 +111,7 @@
struct obd_device *dev = (struct obd_device *)m->private;
struct lov_desc *desc;
- LASSERT(dev != NULL);
+ LASSERT(dev);
desc = &dev->u.lov.desc;
seq_printf(m, "%u\n", desc->ld_pattern);
return 0;
@@ -125,7 +125,7 @@
struct lov_desc *desc;
int val, rc;
- LASSERT(dev != NULL);
+ LASSERT(dev);
desc = &dev->u.lov.desc;
rc = lprocfs_write_helper(buffer, count, &val);
if (rc)
@@ -143,7 +143,7 @@
struct obd_device *dev = (struct obd_device *)m->private;
struct lov_desc *desc;
- LASSERT(dev != NULL);
+ LASSERT(dev);
desc = &dev->u.lov.desc;
seq_printf(m, "%d\n", (__s16)(desc->ld_default_stripe_count + 1) - 1);
return 0;
@@ -157,7 +157,7 @@
struct lov_desc *desc;
int val, rc;
- LASSERT(dev != NULL);
+ LASSERT(dev);
desc = &dev->u.lov.desc;
rc = lprocfs_write_helper(buffer, count, &val);
if (rc)
@@ -199,7 +199,7 @@
struct obd_device *dev = (struct obd_device *)m->private;
struct lov_obd *lov;
- LASSERT(dev != NULL);
+ LASSERT(dev);
lov = &dev->u.lov;
seq_printf(m, "%s\n", lov->desc.ld_uuid.uuid);
return 0;
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_internal.h b/drivers/staging/lustre/lustre/mdc/mdc_internal.h
index 3d2997a..54f4ca6 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_internal.h
+++ b/drivers/staging/lustre/lustre/mdc/mdc_internal.h
@@ -90,7 +90,7 @@
struct ptlrpc_request **req, __u64 extra_lock_flags);
int mdc_resource_get_unused(struct obd_export *exp, const struct lu_fid *fid,
- struct list_head *cancels, ldlm_mode_t mode,
+ struct list_head *cancels, enum ldlm_mode mode,
__u64 bits);
/* mdc/mdc_request.c */
int mdc_fid_alloc(struct obd_export *exp, struct lu_fid *fid,
@@ -119,8 +119,8 @@
int mdc_unlink(struct obd_export *exp, struct md_op_data *op_data,
struct ptlrpc_request **request);
int mdc_cancel_unused(struct obd_export *exp, const struct lu_fid *fid,
- ldlm_policy_data_t *policy, ldlm_mode_t mode,
- ldlm_cancel_flags_t flags, void *opaque);
+ ldlm_policy_data_t *policy, enum ldlm_mode mode,
+ enum ldlm_cancel_flags flags, void *opaque);
int mdc_revalidate_lock(struct obd_export *exp, struct lookup_intent *it,
struct lu_fid *fid, __u64 *bits);
@@ -129,10 +129,10 @@
struct md_enqueue_info *minfo,
struct ldlm_enqueue_info *einfo);
-ldlm_mode_t mdc_lock_match(struct obd_export *exp, __u64 flags,
- const struct lu_fid *fid, ldlm_type_t type,
- ldlm_policy_data_t *policy, ldlm_mode_t mode,
- struct lustre_handle *lockh);
+enum ldlm_mode mdc_lock_match(struct obd_export *exp, __u64 flags,
+ const struct lu_fid *fid, enum ldlm_type type,
+ ldlm_policy_data_t *policy, enum ldlm_mode mode,
+ struct lustre_handle *lockh);
static inline int mdc_prep_elc_req(struct obd_export *exp,
struct ptlrpc_request *req, int opc,
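The mdc changes above replace the ldlm_mode_t, ldlm_type_t and
ldlm_cancel_flags_t typedefs with their underlying enum types, per the
kernel coding-style advice against typedefs that merely hide an enum.
Schematically, with a hypothetical pick_mode():

    /* before: the typedef hides that the value is an enum */
    typedef enum ldlm_mode ldlm_mode_t;
    static ldlm_mode_t pick_mode(void);

    /* after: the enum keyword stays visible at every declaration */
    static enum ldlm_mode pick_mode(void);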
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_lib.c b/drivers/staging/lustre/lustre/mdc/mdc_lib.c
index 7218532..b3bfdcb 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_lib.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_lib.c
@@ -41,8 +41,6 @@
static void __mdc_pack_body(struct mdt_body *b, __u32 suppgid)
{
- LASSERT(b != NULL);
-
b->suppgid = suppgid;
b->uid = from_kuid(&init_user_ns, current_uid());
b->gid = from_kgid(&init_user_ns, current_gid());
@@ -83,7 +81,6 @@
{
struct mdt_body *b = req_capsule_client_get(&req->rq_pill,
&RMF_MDT_BODY);
- LASSERT(b != NULL);
b->valid = valid;
b->eadatasize = ea_size;
b->flags = flags;
@@ -323,7 +320,7 @@
return;
lum = req_capsule_client_get(&req->rq_pill, &RMF_EADATA);
- if (ea == NULL) { /* Remove LOV EA */
+ if (!ea) { /* Remove LOV EA */
lum->lmm_magic = LOV_USER_MAGIC_V1;
lum->lmm_stripe_size = 0;
lum->lmm_stripe_count = 0;
@@ -346,7 +343,6 @@
CLASSERT(sizeof(struct mdt_rec_reint) == sizeof(struct mdt_rec_unlink));
rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
- LASSERT(rec != NULL);
rec->ul_opcode = op_data->op_cli_flags & CLI_RM_ENTRY ?
REINT_RMENTRY : REINT_UNLINK;
@@ -362,7 +358,7 @@
rec->ul_bias = op_data->op_bias;
tmp = req_capsule_client_get(&req->rq_pill, &RMF_NAME);
- LASSERT(tmp != NULL);
+ LASSERT(tmp);
LOGL0(op_data->op_name, op_data->op_namelen, tmp);
}
@@ -373,7 +369,6 @@
CLASSERT(sizeof(struct mdt_rec_reint) == sizeof(struct mdt_rec_link));
rec = req_capsule_client_get(&req->rq_pill, &RMF_REC_REINT);
- LASSERT(rec != NULL);
rec->lk_opcode = REINT_LINK;
rec->lk_fsuid = op_data->op_fsuid; /* current->fsuid; */
@@ -456,10 +451,9 @@
struct ldlm_lock *lock;
data = req_capsule_client_get(&req->rq_pill, &RMF_CLOSE_DATA);
- LASSERT(data != NULL);
lock = ldlm_handle2lock(&op_data->op_lease_handle);
- if (lock != NULL) {
+ if (lock) {
data->cd_handle = lock->l_remote_handle;
ldlm_lock_put(lock);
}
@@ -495,7 +489,8 @@
/* We record requests in flight in cli->cl_r_in_flight here.
* There is only one write rpc possible in mdc anyway. If this to change
- * in the future - the code may need to be revisited. */
+ * in the future - the code may need to be revisited.
+ */
int mdc_enter_request(struct client_obd *cli)
{
int rc = 0;
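
The comment rewrites in mdc_lib.c (repeated throughout the series) apply the block-comment shape checkpatch wants for staging: the text may share the opening /* line, but the closing */ moves to a line of its own. For reference, the target form:

/* Text may start on the opening line, continuation lines are
 * aligned on " *", and the closing delimiter stands alone.
 */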
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_locks.c b/drivers/staging/lustre/lustre/mdc/mdc_locks.c
index ef9a1e1..6dae574 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_locks.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_locks.c
@@ -129,7 +129,7 @@
lock = ldlm_handle2lock((struct lustre_handle *)lockh);
- LASSERT(lock != NULL);
+ LASSERT(lock);
lock_res_and_lock(lock);
if (lock->l_resource->lr_lvb_inode &&
lock->l_resource->lr_lvb_inode != data) {
@@ -151,13 +151,13 @@
return 0;
}
-ldlm_mode_t mdc_lock_match(struct obd_export *exp, __u64 flags,
- const struct lu_fid *fid, ldlm_type_t type,
- ldlm_policy_data_t *policy, ldlm_mode_t mode,
- struct lustre_handle *lockh)
+enum ldlm_mode mdc_lock_match(struct obd_export *exp, __u64 flags,
+ const struct lu_fid *fid, enum ldlm_type type,
+ ldlm_policy_data_t *policy, enum ldlm_mode mode,
+ struct lustre_handle *lockh)
{
struct ldlm_res_id res_id;
- ldlm_mode_t rc;
+ enum ldlm_mode rc;
fid_build_reg_res_name(fid, &res_id);
/* LU-4405: Clear bits not supported by server */
@@ -170,8 +170,8 @@
int mdc_cancel_unused(struct obd_export *exp,
const struct lu_fid *fid,
ldlm_policy_data_t *policy,
- ldlm_mode_t mode,
- ldlm_cancel_flags_t flags,
+ enum ldlm_mode mode,
+ enum ldlm_cancel_flags flags,
void *opaque)
{
struct ldlm_res_id res_id;
@@ -191,12 +191,12 @@
struct ldlm_resource *res;
struct ldlm_namespace *ns = class_exp2obd(exp)->obd_namespace;
- LASSERTF(ns != NULL, "no namespace passed\n");
+ LASSERTF(ns, "no namespace passed\n");
fid_build_reg_res_name(fid, &res_id);
res = ldlm_resource_get(ns, NULL, &res_id, 0, 0);
- if (res == NULL)
+ if (!res)
return 0;
lock_res(res);
@@ -210,7 +210,8 @@
/* find any ldlm lock of the inode in mdc
* return 0 not find
* 1 find one
- * < 0 error */
+ * < 0 error
+ */
int mdc_find_cbdata(struct obd_export *exp,
const struct lu_fid *fid,
ldlm_iterator_t it, void *data)
@@ -252,7 +253,8 @@
* OOM here may cause recovery failure if lmm is needed (only for the
* original open if the MDS crashed just when this client also OOM'd)
* but this is incredibly unlikely, and questionable whether the client
- * could do MDS recovery under OOM anyways... */
+ * could do MDS recovery under OOM anyways...
+ */
static void mdc_realloc_openmsg(struct ptlrpc_request *req,
struct mdt_body *body)
{
@@ -317,7 +319,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_LDLM_INTENT_OPEN);
- if (req == NULL) {
+ if (!req) {
ldlm_lock_list_put(&cancels, l_bl_ast, count);
return ERR_PTR(-ENOMEM);
}
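
The "req == NULL" → "!req" conversions above sit on a common pattern: ptlrpc_request_alloc() reports failure as a NULL pointer, while this caller propagates it upward as an encoded error pointer. A minimal sketch of that convention — demo_req and demo_req_create() are invented for illustration; only kzalloc()/ERR_PTR() are real kernel APIs:

#include <linux/err.h>
#include <linux/slab.h>

struct demo_req { int payload; };

static struct demo_req *demo_req_create(gfp_t gfp)
{
	struct demo_req *req = kzalloc(sizeof(*req), gfp);

	if (!req)			/* preferred over "req == NULL" */
		return ERR_PTR(-ENOMEM);
	return req;
}

/* callers then test with IS_ERR() and unwrap with PTR_ERR() */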
@@ -365,7 +367,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_LDLM_INTENT_GETXATTR);
- if (req == NULL)
+ if (!req)
return ERR_PTR(-ENOMEM);
rc = ldlm_prep_enqueue_req(exp, req, &cancels, count);
@@ -409,7 +411,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_LDLM_INTENT_UNLINK);
- if (req == NULL)
+ if (!req)
return ERR_PTR(-ENOMEM);
req_capsule_set_size(&req->rq_pill, &RMF_NAME, RCL_CLIENT,
@@ -453,7 +455,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_LDLM_INTENT_GETATTR);
- if (req == NULL)
+ if (!req)
return ERR_PTR(-ENOMEM);
req_capsule_set_size(&req->rq_pill, &RMF_NAME, RCL_CLIENT,
@@ -497,7 +499,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_LDLM_INTENT_LAYOUT);
- if (req == NULL)
+ if (!req)
return ERR_PTR(-ENOMEM);
req_capsule_set_size(&req->rq_pill, &RMF_EADATA, RCL_CLIENT, 0);
@@ -514,7 +516,8 @@
/* pack the layout intent request */
layout = req_capsule_client_get(&req->rq_pill, &RMF_LAYOUT_INTENT);
/* LAYOUT_INTENT_ACCESS is generic, specific operation will be
- * set for replication */
+ * set for replication
+ */
layout->li_opc = LAYOUT_INTENT_ACCESS;
req_capsule_set_size(&req->rq_pill, &RMF_DLM_LVB, RCL_SERVER,
@@ -530,7 +533,7 @@
int rc;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_LDLM_ENQUEUE);
- if (req == NULL)
+ if (!req)
return ERR_PTR(-ENOMEM);
rc = ldlm_prep_enqueue_req(exp, req, NULL, 0);
@@ -561,7 +564,8 @@
LASSERT(rc >= 0);
/* Similarly, if we're going to replay this request, we don't want to
- * actually get a lock, just perform the intent. */
+ * actually get a lock, just perform the intent.
+ */
if (req->rq_transno || req->rq_replay) {
lockreq = req_capsule_client_get(pill, &RMF_DLM_REQ);
lockreq->lock_flags |= ldlm_flags_to_wire(LDLM_FL_INTENT_ONLY);
@@ -573,10 +577,10 @@
rc = 0;
} else { /* rc = 0 */
lock = ldlm_handle2lock(lockh);
- LASSERT(lock != NULL);
/* If the server gave us back a different lock mode, we should
- * fix up our variables. */
+ * fix up our variables.
+ */
if (lock->l_req_mode != einfo->ei_mode) {
ldlm_lock_addref(lockh, lock->l_req_mode);
ldlm_lock_decref(lockh, einfo->ei_mode);
@@ -586,7 +590,6 @@
}
lockrep = req_capsule_server_get(pill, &RMF_DLM_REP);
- LASSERT(lockrep != NULL); /* checked by ldlm_cli_enqueue() */
intent->it_disposition = (int)lockrep->lock_policy_res1;
intent->it_status = (int)lockrep->lock_policy_res2;
@@ -595,7 +598,8 @@
intent->it_data = req;
/* Technically speaking rq_transno must already be zero if
- * it_status is in error, so the check is a bit redundant */
+ * it_status is in error, so the check is a bit redundant
+ */
if ((!req->rq_transno || intent->it_status < 0) && req->rq_replay)
mdc_clear_replay_flag(req, intent->it_status);
@@ -605,7 +609,8 @@
*
* It's important that we do this first! Otherwise we might exit the
* function without doing so, and try to replay a failed create
- * (bug 3440) */
+ * (bug 3440)
+ */
if (it->it_op & IT_OPEN && req->rq_replay &&
(!it_disposition(it, DISP_OPEN_OPEN) || intent->it_status != 0))
mdc_clear_replay_flag(req, intent->it_status);
@@ -618,7 +623,7 @@
struct mdt_body *body;
body = req_capsule_server_get(pill, &RMF_MDT_BODY);
- if (body == NULL) {
+ if (!body) {
CERROR("Can't swab mdt_body\n");
return -EPROTO;
}
@@ -645,11 +650,12 @@
*/
eadata = req_capsule_server_sized_get(pill, &RMF_MDT_MD,
body->eadatasize);
- if (eadata == NULL)
+ if (!eadata)
return -EPROTO;
/* save lvb data and length in case this is for layout
- * lock */
+ * lock
+ */
lvb_data = eadata;
lvb_len = body->eadatasize;
@@ -690,31 +696,32 @@
LASSERT(client_is_remote(exp));
perm = req_capsule_server_swab_get(pill, &RMF_ACL,
lustre_swab_mdt_remote_perm);
- if (perm == NULL)
+ if (!perm)
return -EPROTO;
}
} else if (it->it_op & IT_LAYOUT) {
/* maybe the lock was granted right away and layout
- * is packed into RMF_DLM_LVB of req */
+ * is packed into RMF_DLM_LVB of req
+ */
lvb_len = req_capsule_get_size(pill, &RMF_DLM_LVB, RCL_SERVER);
if (lvb_len > 0) {
lvb_data = req_capsule_server_sized_get(pill,
&RMF_DLM_LVB, lvb_len);
- if (lvb_data == NULL)
+ if (!lvb_data)
return -EPROTO;
}
}
/* fill in stripe data for layout lock */
lock = ldlm_handle2lock(lockh);
- if (lock != NULL && ldlm_has_layout(lock) && lvb_data != NULL) {
+ if (lock && ldlm_has_layout(lock) && lvb_data) {
void *lmm;
LDLM_DEBUG(lock, "layout lock returned by: %s, lvb_len: %d\n",
ldlm_it2str(it->it_op), lvb_len);
lmm = libcfs_kvzalloc(lvb_len, GFP_NOFS);
- if (lmm == NULL) {
+ if (!lmm) {
LDLM_LOCK_PUT(lock);
return -ENOMEM;
}
@@ -722,24 +729,25 @@
/* install lvb_data */
lock_res_and_lock(lock);
- if (lock->l_lvb_data == NULL) {
+ if (!lock->l_lvb_data) {
lock->l_lvb_type = LVB_T_LAYOUT;
lock->l_lvb_data = lmm;
lock->l_lvb_len = lvb_len;
lmm = NULL;
}
unlock_res_and_lock(lock);
- if (lmm != NULL)
+ if (lmm)
kvfree(lmm);
}
- if (lock != NULL)
+ if (lock)
LDLM_LOCK_PUT(lock);
return rc;
}
/* We always reserve enough space in the reply packet for a stripe MD, because
- * we don't know in advance the file type. */
+ * we don't know in advance the file type.
+ */
int mdc_enqueue(struct obd_export *exp, struct ldlm_enqueue_info *einfo,
struct lookup_intent *it, struct md_op_data *op_data,
struct lustre_handle *lockh, void *lmm, int lmmsize,
@@ -782,14 +790,15 @@
policy = &getxattr_policy;
}
- LASSERT(reqp == NULL);
+ LASSERT(!reqp);
generation = obddev->u.cli.cl_import->imp_generation;
resend:
flags = saved_flags;
if (!it) {
/* The only way right now is FLOCK, in this case we hide flock
- policy as lmm, but lmmsize is 0 */
+ * policy as lmm, but lmmsize is 0
+ */
LASSERT(lmm && lmmsize == 0);
LASSERTF(einfo->ei_type == LDLM_FLOCK, "lock type %d\n",
einfo->ei_type);
@@ -823,9 +832,10 @@
if (IS_ERR(req))
return PTR_ERR(req);
- if (req != NULL && it && it->it_op & IT_CREAT)
+ if (req && it && it->it_op & IT_CREAT)
/* ask ptlrpc not to resend on EINPROGRESS since we have our own
- * retry logic */
+ * retry logic
+ */
req->rq_no_retry_einprogress = 1;
if (resends) {
@@ -836,7 +846,8 @@
/* It is important to obtain rpc_lock first (if applicable), so that
* threads that are serialised with rpc_lock are not polluting our
- * rpcs in flight counter. We do not do flock request limiting, though*/
+ * rpcs in flight counter. We do not do flock request limiting, though
+ */
if (it) {
mdc_get_rpc_lock(obddev->u.cli.cl_rpc_lock, it);
rc = mdc_enter_request(&obddev->u.cli);
@@ -852,13 +863,14 @@
0, lvb_type, lockh, 0);
if (!it) {
/* For flock requests we immediately return without further
- delay and let caller deal with the rest, since rest of
- this function metadata processing makes no sense for flock
- requests anyway. But in case of problem during comms with
- Server (ETIMEDOUT) or any signal/kill attempt (EINTR), we
- can not rely on caller and this mainly for F_UNLCKs
- (explicits or automatically generated by Kernel to clean
- current FLocks upon exit) that can't be trashed */
+ * delay and let the caller deal with the rest, since the rest of
+ * this function's metadata processing makes no sense for flock
+ * requests anyway. But in case of a problem during comms with
+ * the server (ETIMEDOUT) or any signal/kill attempt (EINTR), we
+ * cannot rely on the caller; this is mainly for F_UNLCKs
+ * (explicit or automatically generated by the kernel to clean
+ * current FLocks upon exit) that can't be trashed
+ */
if ((rc == -EINTR) || (rc == -ETIMEDOUT))
goto resend;
return rc;
@@ -878,13 +890,13 @@
}
lockrep = req_capsule_server_get(&req->rq_pill, &RMF_DLM_REP);
- LASSERT(lockrep != NULL);
lockrep->lock_policy_res2 =
ptlrpc_status_ntoh(lockrep->lock_policy_res2);
/* Retry the create infinitely when we get -EINPROGRESS from
- * server. This is required by the new quota design. */
+ * server. This is required by the new quota design.
+ */
if (it->it_op & IT_CREAT &&
(int)lockrep->lock_policy_res2 == -EINPROGRESS) {
mdc_clear_replay_flag(req, rc);
@@ -930,13 +942,13 @@
struct ldlm_lock *lock;
int rc;
- LASSERT(request != NULL);
LASSERT(request != LP_POISON);
LASSERT(request->rq_repmsg != LP_POISON);
if (!it_disposition(it, DISP_IT_EXECD)) {
/* The server failed before it even started executing the
- * intent, i.e. because it couldn't unpack the request. */
+ * intent, i.e. because it couldn't unpack the request.
+ */
LASSERT(it->d.lustre.it_status != 0);
return it->d.lustre.it_status;
}
@@ -945,10 +957,11 @@
return rc;
mdt_body = req_capsule_server_get(&request->rq_pill, &RMF_MDT_BODY);
- LASSERT(mdt_body != NULL); /* mdc_enqueue checked */
+ LASSERT(mdt_body); /* mdc_enqueue checked */
/* If we were revalidating a fid/name pair, mark the intent in
- * case we fail and get called again from lookup */
+ * case we fail and get called again from lookup
+ */
if (fid_is_sane(&op_data->op_fid2) &&
it->it_create_mode & M_CHECK_STALE &&
it->it_op != IT_GETATTR) {
@@ -957,7 +970,8 @@
/* server can return one of two fids:
* op_fid2 - new allocated fid - if file is created.
* op_fid3 - existent fid - if file only open.
- * op_fid3 is saved in lmv_intent_open */
+ * op_fid3 is saved in lmv_intent_open
+ */
if ((!lu_fid_eq(&op_data->op_fid2, &mdt_body->fid1)) &&
(!lu_fid_eq(&op_data->op_fid3, &mdt_body->fid1))) {
CDEBUG(D_DENTRY, "Found stale data "DFID"("DFID")/"DFID
@@ -1001,7 +1015,8 @@
* one. We have to set the data here instead of in
* mdc_enqueue, because we need to use the child's inode as
* the l_ast_data to match, and that's not available until
- * intent_finish has performed the iget().) */
+ * intent_finish has performed the iget().)
+ */
lock = ldlm_handle2lock(lockh);
if (lock) {
ldlm_policy_data_t policy = lock->l_policy_data;
@@ -1036,11 +1051,12 @@
{
/* We could just return 1 immediately, but since we should only
* be called in revalidate_it if we already have a lock, let's
- * verify that. */
+ * verify that.
+ */
struct ldlm_res_id res_id;
struct lustre_handle lockh;
ldlm_policy_data_t policy;
- ldlm_mode_t mode;
+ enum ldlm_mode mode;
if (it->d.lustre.it_lock_handle) {
lockh.cookie = it->d.lustre.it_lock_handle;
@@ -1059,10 +1075,12 @@
* Unfortunately, if the bits are split across multiple
* locks, there's no easy way to match all of them here,
* so an extra RPC would be performed to fetch all
- * of those bits at once for now. */
+ * of those bits at once for now.
+ */
/* For new MDTs(> 2.4), UPDATE|PERM should be enough,
* but for old MDTs (< 2.4), permission is covered
- * by LOOKUP lock, so it needs to match all bits here.*/
+ * by LOOKUP lock, so it needs to match all bits here.
+ */
policy.l_inodebits.bits = MDS_INODELOCK_UPDATE |
MDS_INODELOCK_LOOKUP |
MDS_INODELOCK_PERM;
@@ -1147,11 +1165,13 @@
(it->it_op & (IT_LOOKUP | IT_GETATTR))) {
/* We could just return 1 immediately, but since we should only
* be called in revalidate_it if we already have a lock, let's
- * verify that. */
+ * verify that.
+ */
it->d.lustre.it_lock_handle = 0;
rc = mdc_revalidate_lock(exp, it, &op_data->op_fid2, NULL);
/* Only return failure if it was not GETATTR by cfid
- (from inode_revalidate) */
+ * (from inode_revalidate)
+ */
if (rc || op_data->op_namelen != 0)
return rc;
}
@@ -1206,7 +1226,6 @@
}
lockrep = req_capsule_server_get(&req->rq_pill, &RMF_DLM_REP);
- LASSERT(lockrep != NULL);
lockrep->lock_policy_res2 =
ptlrpc_status_ntoh(lockrep->lock_policy_res2);
@@ -1235,7 +1254,8 @@
struct ldlm_res_id res_id;
/*XXX: Both MDS_INODELOCK_LOOKUP and MDS_INODELOCK_UPDATE are needed
* for statahead currently. Consider CMD in future, such two bits
- * maybe managed by different MDS, should be adjusted then. */
+ * may be managed by different MDSs and should be adjusted then.
+ */
ldlm_policy_data_t policy = {
.l_inodebits = { MDS_INODELOCK_LOOKUP |
MDS_INODELOCK_UPDATE }
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_reint.c b/drivers/staging/lustre/lustre/mdc/mdc_reint.c
index ac7695a..019b2373 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_reint.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_reint.c
@@ -65,9 +65,10 @@
/* Find and cancel locally locks matched by inode @bits & @mode in the resource
* found by @fid. Found locks are added into @cancel list. Returns the amount of
- * locks added to @cancels list. */
+ * locks added to @cancels list.
+ */
int mdc_resource_get_unused(struct obd_export *exp, const struct lu_fid *fid,
- struct list_head *cancels, ldlm_mode_t mode,
+ struct list_head *cancels, enum ldlm_mode mode,
__u64 bits)
{
struct ldlm_namespace *ns = exp->exp_obd->obd_namespace;
@@ -81,14 +82,15 @@
*
* This distinguishes from a case when ELC is not supported originally,
* when we still want to cancel locks in advance and just cancel them
- * locally, without sending any RPC. */
+ * locally, without sending any RPC.
+ */
if (exp_connect_cancelset(exp) && !ns_connect_cancelset(ns))
return 0;
fid_build_reg_res_name(fid, &res_id);
res = ldlm_resource_get(exp->exp_obd->obd_namespace,
NULL, &res_id, 0, 0);
- if (res == NULL)
+ if (!res)
return 0;
LDLM_RESOURCE_ADDREF(res);
/* Initialize ibits lock policy. */
@@ -111,8 +113,6 @@
int count = 0, rc;
__u64 bits;
- LASSERT(op_data != NULL);
-
bits = MDS_INODELOCK_UPDATE;
if (op_data->op_attr.ia_valid & (ATTR_MODE|ATTR_UID|ATTR_GID))
bits |= MDS_INODELOCK_LOOKUP;
@@ -123,7 +123,7 @@
&cancels, LCK_EX, bits);
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_REINT_SETATTR);
- if (req == NULL) {
+ if (!req) {
ldlm_lock_list_put(&cancels, l_bl_ast, count);
return -ENOMEM;
}
@@ -151,10 +151,10 @@
ptlrpc_request_set_replen(req);
if (mod && (op_data->op_flags & MF_EPOCH_OPEN) &&
req->rq_import->imp_replayable) {
- LASSERT(*mod == NULL);
+ LASSERT(!*mod);
*mod = obd_mod_alloc();
- if (*mod == NULL) {
+ if (!*mod) {
DEBUG_REQ(D_ERROR, req, "Can't allocate md_open_data");
} else {
req->rq_replay = 1;
@@ -181,8 +181,6 @@
epoch = req_capsule_client_get(&req->rq_pill, &RMF_MDT_EPOCH);
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- LASSERT(epoch != NULL);
- LASSERT(body != NULL);
epoch->handle = body->handle;
epoch->ioepoch = body->ioepoch;
req->rq_replay_cb = mdc_replay_open;
@@ -195,7 +193,7 @@
*request = req;
if (rc && req->rq_commit_cb) {
/* Put an extra reference on \var mod on error case. */
- if (mod != NULL && *mod != NULL)
+ if (mod && *mod)
obd_mod_put(*mod);
req->rq_commit_cb(req);
}
@@ -237,7 +235,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_REINT_CREATE_RMT_ACL);
- if (req == NULL) {
+ if (!req) {
ldlm_lock_list_put(&cancels, l_bl_ast, count);
return -ENOMEM;
}
@@ -262,7 +260,8 @@
ptlrpc_request_set_replen(req);
/* ask ptlrpc not to resend on EINPROGRESS since we have our own retry
- * logic here */
+ * logic here
+ */
req->rq_no_retry_einprogress = 1;
if (resends) {
@@ -280,7 +279,8 @@
goto resend;
} else if (rc == -EINPROGRESS) {
/* Retry create infinitely until succeed or get other
- * error code. */
+ * error code.
+ */
ptlrpc_req_finished(req);
resends++;
@@ -308,7 +308,7 @@
struct ptlrpc_request *req = *request;
int count = 0, rc;
- LASSERT(req == NULL);
+ LASSERT(!req);
if ((op_data->op_flags & MF_MDC_CANCEL_FID1) &&
(fid_is_sane(&op_data->op_fid1)) &&
@@ -324,7 +324,7 @@
MDS_INODELOCK_FULL);
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_REINT_UNLINK);
- if (req == NULL) {
+ if (!req) {
ldlm_lock_list_put(&cancels, l_bl_ast, count);
return -ENOMEM;
}
@@ -373,7 +373,7 @@
MDS_INODELOCK_UPDATE);
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_MDS_REINT_LINK);
- if (req == NULL) {
+ if (!req) {
ldlm_lock_list_put(&cancels, l_bl_ast, count);
return -ENOMEM;
}
@@ -429,7 +429,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_REINT_RENAME);
- if (req == NULL) {
+ if (!req) {
ldlm_lock_list_put(&cancels, l_bl_ast, count);
return -ENOMEM;
}
diff --git a/drivers/staging/lustre/lustre/mdc/mdc_request.c b/drivers/staging/lustre/lustre/mdc/mdc_request.c
index 4348127..c7a0fa1 100644
--- a/drivers/staging/lustre/lustre/mdc/mdc_request.c
+++ b/drivers/staging/lustre/lustre/mdc/mdc_request.c
@@ -63,7 +63,8 @@
/* mdc_enter_request() ensures that this client has no more
* than cl_max_rpcs_in_flight RPCs simultaneously in flight
- * against an MDT. */
+ * against an MDT.
+ */
rc = mdc_enter_request(cli);
if (rc != 0)
return rc;
@@ -83,7 +84,7 @@
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_MDS_GETSTATUS,
LUSTRE_MDS_VERSION, MDS_GETSTATUS);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
mdc_pack_body(req, NULL, 0, 0, -1, 0);
@@ -96,7 +97,7 @@
goto out;
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EPROTO;
goto out;
}
@@ -136,7 +137,7 @@
/* sanity check for the reply */
body = req_capsule_server_get(pill, &RMF_MDT_BODY);
- if (body == NULL)
+ if (!body)
return -EPROTO;
CDEBUG(D_NET, "mode: %o\n", body->mode);
@@ -146,7 +147,7 @@
eadata = req_capsule_server_sized_get(pill, &RMF_MDT_MD,
body->eadatasize);
- if (eadata == NULL)
+ if (!eadata)
return -EPROTO;
}
@@ -156,7 +157,7 @@
LASSERT(client_is_remote(exp));
perm = req_capsule_server_swab_get(pill, &RMF_ACL,
lustre_swab_mdt_remote_perm);
- if (perm == NULL)
+ if (!perm)
return -EPROTO;
}
@@ -176,7 +177,7 @@
}
*request = NULL;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_MDS_GETATTR);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_GETATTR);
@@ -214,7 +215,7 @@
*request = NULL;
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_GETATTR_NAME);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req_capsule_set_size(&req->rq_pill, &RMF_NAME, RCL_CLIENT,
@@ -261,7 +262,7 @@
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_MDS_IS_SUBDIR, LUSTRE_MDS_VERSION,
MDS_IS_SUBDIR);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
mdc_is_subdir_pack(req, pfid, cfid, 0);
@@ -290,7 +291,7 @@
*request = NULL;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), fmt);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
if (xattr_name) {
@@ -425,7 +426,7 @@
return -EPROTO;
acl = posix_acl_from_xattr(&init_user_ns, buf, body->aclsize);
- if (acl == NULL)
+ if (!acl)
return 0;
if (IS_ERR(acl)) {
@@ -461,7 +462,6 @@
memset(md, 0, sizeof(*md));
md->body = req_capsule_server_get(pill, &RMF_MDT_BODY);
- LASSERT(md->body != NULL);
if (md->body->valid & OBD_MD_FLEASIZE) {
int lmmsize;
@@ -593,17 +593,16 @@
struct lustre_handle old;
struct mdt_body *body;
- if (mod == NULL) {
+ if (!mod) {
DEBUG_REQ(D_ERROR, req,
"Can't properly replay without open data.");
return;
}
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- LASSERT(body != NULL);
och = mod->mod_och;
- if (och != NULL) {
+ if (och) {
struct lustre_handle *file_fh;
LASSERT(och->och_magic == OBD_CLIENT_HANDLE_MAGIC);
@@ -615,7 +614,7 @@
*file_fh = body->handle;
}
close_req = mod->mod_close_req;
- if (close_req != NULL) {
+ if (close_req) {
__u32 opc = lustre_msg_get_opc(close_req->rq_reqmsg);
struct mdt_ioepoch *epoch;
@@ -624,7 +623,7 @@
&RMF_MDT_EPOCH);
LASSERT(epoch);
- if (och != NULL)
+ if (och)
LASSERT(!memcmp(&old, &epoch->handle, sizeof(old)));
DEBUG_REQ(D_HA, close_req, "updating close body with new fh");
epoch->handle = body->handle;
@@ -635,7 +634,7 @@
{
struct md_open_data *mod = req->rq_cb_data;
- if (mod == NULL)
+ if (!mod)
return;
/**
@@ -675,15 +674,15 @@
rec = req_capsule_client_get(&open_req->rq_pill, &RMF_REC_REINT);
body = req_capsule_server_get(&open_req->rq_pill, &RMF_MDT_BODY);
- LASSERT(rec != NULL);
+ LASSERT(rec);
/* Incoming message in my byte order (it's been swabbed). */
/* Outgoing messages always in my byte order. */
- LASSERT(body != NULL);
+ LASSERT(body);
/* Only if the import is replayable, we set replay_open data */
if (och && imp->imp_replayable) {
mod = obd_mod_alloc();
- if (mod == NULL) {
+ if (!mod) {
DEBUG_REQ(D_ERROR, open_req,
"Can't allocate md_open_data");
return 0;
@@ -749,11 +748,11 @@
* It is possible to not have \var mod in a case of eviction between
* lookup and ll_file_open().
**/
- if (mod == NULL)
+ if (!mod)
return 0;
LASSERT(mod != LP_POISON);
- LASSERT(mod->mod_open_req != NULL);
+ LASSERT(mod->mod_open_req);
mdc_free_open(mod);
mod->mod_och = NULL;
@@ -804,7 +803,7 @@
*request = NULL;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), req_fmt);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_CLOSE);
@@ -815,13 +814,14 @@
/* To avoid a livelock (bug 7034), we need to send CLOSE RPCs to a
* portal whose threads are not taking any DLM locks and are therefore
- * always progressing */
+ * always progressing
+ */
req->rq_request_portal = MDS_READPAGE_PORTAL;
ptlrpc_at_set_req_timeout(req);
/* Ensure that this close's handle is fixed up during replay. */
- if (likely(mod != NULL)) {
- LASSERTF(mod->mod_open_req != NULL &&
+ if (likely(mod)) {
+ LASSERTF(mod->mod_open_req &&
mod->mod_open_req->rq_type != LI_POISON,
"POISONED open %p!\n", mod->mod_open_req);
@@ -829,7 +829,8 @@
DEBUG_REQ(D_HA, mod->mod_open_req, "matched open");
/* We no longer want to preserve this open for replay even
- * though the open was committed. b=3632, b=3633 */
+ * though the open was committed. b=3632, b=3633
+ */
spin_lock(&mod->mod_open_req->rq_lock);
mod->mod_open_req->rq_replay = 0;
spin_unlock(&mod->mod_open_req->rq_lock);
@@ -851,7 +852,7 @@
rc = ptlrpc_queue_wait(req);
mdc_put_rpc_lock(obd->u.cli.cl_close_lock, NULL);
- if (req->rq_repmsg == NULL) {
+ if (!req->rq_repmsg) {
CDEBUG(D_RPCTRACE, "request failed to send: %p, %d\n", req,
req->rq_status);
if (rc == 0)
@@ -867,7 +868,7 @@
rc = -rc;
}
body = req_capsule_server_get(&req->rq_pill, &RMF_MDT_BODY);
- if (body == NULL)
+ if (!body)
rc = -EPROTO;
} else if (rc == -ESTALE) {
/**
@@ -877,7 +878,6 @@
*/
if (mod) {
DEBUG_REQ(D_HA, req, "Reset ESTALE = %d", rc);
- LASSERT(mod->mod_open_req != NULL);
if (mod->mod_open_req->rq_committed)
rc = 0;
}
@@ -887,7 +887,8 @@
if (rc != 0)
mod->mod_close_req = NULL;
/* Since now, mod is accessed through open_req only,
- * thus close req does not keep a reference on mod anymore. */
+ * thus close req does not keep a reference on mod anymore.
+ */
obd_mod_put(mod);
}
*request = req;
@@ -904,7 +905,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_DONE_WRITING);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_DONE_WRITING);
@@ -913,15 +914,16 @@
return rc;
}
- if (mod != NULL) {
- LASSERTF(mod->mod_open_req != NULL &&
+ if (mod) {
+ LASSERTF(mod->mod_open_req &&
mod->mod_open_req->rq_type != LI_POISON,
"POISONED setattr %p!\n", mod->mod_open_req);
mod->mod_close_req = req;
DEBUG_REQ(D_HA, mod->mod_open_req, "matched setattr");
/* We no longer want to preserve this setattr for replay even
- * though the open was committed. b=3632, b=3633 */
+ * though the open was committed. b=3632, b=3633
+ */
spin_lock(&mod->mod_open_req->rq_lock);
mod->mod_open_req->rq_replay = 0;
spin_unlock(&mod->mod_open_req->rq_lock);
@@ -941,7 +943,6 @@
* Let's check if mod exists and return no error in that case
*/
if (mod) {
- LASSERT(mod->mod_open_req != NULL);
if (mod->mod_open_req->rq_committed)
rc = 0;
}
@@ -950,11 +951,12 @@
if (mod) {
if (rc != 0)
mod->mod_close_req = NULL;
- LASSERT(mod->mod_open_req != NULL);
+ LASSERT(mod->mod_open_req);
mdc_free_open(mod);
/* Since now, mod is accessed through setattr req only,
- * thus DW req does not keep a reference on mod anymore. */
+ * thus DW req does not keep a reference on mod anymore.
+ */
obd_mod_put(mod);
}
@@ -979,7 +981,7 @@
restart_bulk:
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_MDS_READPAGE);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_READPAGE);
@@ -993,7 +995,7 @@
desc = ptlrpc_prep_bulk_imp(req, op_data->op_npages, 1, BULK_PUT_SINK,
MDS_BULK_PORTAL);
- if (desc == NULL) {
+ if (!desc) {
ptlrpc_request_free(req);
return -ENOMEM;
}
@@ -1067,7 +1069,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_MDS_STATFS,
LUSTRE_MDS_VERSION, MDS_STATFS);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto output;
}
@@ -1089,7 +1091,7 @@
}
msfs = req_capsule_server_get(&req->rq_pill, &RMF_OBD_STATFS);
- if (msfs == NULL) {
+ if (!msfs) {
rc = -EPROTO;
goto out;
}
@@ -1162,7 +1164,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_MDS_HSM_PROGRESS,
LUSTRE_MDS_VERSION, MDS_HSM_PROGRESS);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto out;
}
@@ -1171,7 +1173,7 @@
/* Copy hsm_progress struct */
req_hpk = req_capsule_client_get(&req->rq_pill, &RMF_MDS_HSM_PROGRESS);
- if (req_hpk == NULL) {
+ if (!req_hpk) {
rc = -EPROTO;
goto out;
}
@@ -1196,7 +1198,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_MDS_HSM_CT_REGISTER,
LUSTRE_MDS_VERSION,
MDS_HSM_CT_REGISTER);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto out;
}
@@ -1206,7 +1208,7 @@
/* Copy hsm_progress struct */
archive_mask = req_capsule_client_get(&req->rq_pill,
&RMF_MDS_HSM_ARCHIVE);
- if (archive_mask == NULL) {
+ if (!archive_mask) {
rc = -EPROTO;
goto out;
}
@@ -1231,7 +1233,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_HSM_ACTION);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_HSM_ACTION);
@@ -1251,7 +1253,7 @@
req_hca = req_capsule_server_get(&req->rq_pill,
&RMF_MDS_HSM_CURRENT_ACTION);
- if (req_hca == NULL) {
+ if (!req_hca) {
rc = -EPROTO;
goto out;
}
@@ -1271,7 +1273,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_MDS_HSM_CT_UNREGISTER,
LUSTRE_MDS_VERSION,
MDS_HSM_CT_UNREGISTER);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto out;
}
@@ -1296,7 +1298,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_HSM_STATE_GET);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_HSM_STATE_GET);
@@ -1315,7 +1317,7 @@
goto out;
req_hus = req_capsule_server_get(&req->rq_pill, &RMF_HSM_USER_STATE);
- if (req_hus == NULL) {
+ if (!req_hus) {
rc = -EPROTO;
goto out;
}
@@ -1337,7 +1339,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_HSM_STATE_SET);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_HSM_STATE_SET);
@@ -1351,7 +1353,7 @@
/* Copy states */
req_hss = req_capsule_client_get(&req->rq_pill, &RMF_HSM_STATE_SET);
- if (req_hss == NULL) {
+ if (!req_hss) {
rc = -EPROTO;
goto out;
}
@@ -1376,7 +1378,7 @@
int rc;
req = ptlrpc_request_alloc(imp, &RQF_MDS_HSM_REQUEST);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto out;
}
@@ -1397,7 +1399,7 @@
/* Copy hsm_request struct */
req_hr = req_capsule_client_get(&req->rq_pill, &RMF_MDS_HSM_REQUEST);
- if (req_hr == NULL) {
+ if (!req_hr) {
rc = -EPROTO;
goto out;
}
@@ -1405,7 +1407,7 @@
/* Copy hsm_user_item structs */
req_hui = req_capsule_client_get(&req->rq_pill, &RMF_MDS_HSM_USER_ITEM);
- if (req_hui == NULL) {
+ if (!req_hui) {
rc = -EPROTO;
goto out;
}
@@ -1414,7 +1416,7 @@
/* Copy opaque field */
req_opaque = req_capsule_client_get(&req->rq_pill, &RMF_GENERIC_DATA);
- if (req_opaque == NULL) {
+ if (!req_opaque) {
rc = -EPROTO;
goto out;
}
@@ -1513,7 +1515,7 @@
/* Set up the remote catalog handle */
ctxt = llog_get_context(cs->cs_obd, LLOG_CHANGELOG_REPL_CTXT);
- if (ctxt == NULL) {
+ if (!ctxt) {
rc = -ENOENT;
goto out;
}
@@ -1554,6 +1556,7 @@
struct ioc_changelog *icc)
{
struct changelog_show *cs;
+ struct task_struct *task;
int rc;
/* Freed in mdc_changelog_send_thread */
@@ -1571,15 +1574,20 @@
* New thread because we should return to user app before
* writing into our pipe
*/
- rc = PTR_ERR(kthread_run(mdc_changelog_send_thread, cs,
- "mdc_clg_send_thread"));
- if (!IS_ERR_VALUE(rc)) {
- CDEBUG(D_CHANGELOG, "start changelog thread\n");
- return 0;
+ task = kthread_run(mdc_changelog_send_thread, cs,
+ "mdc_clg_send_thread");
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
+ CERROR("%s: can't start changelog thread: rc = %d\n",
+ obd->obd_name, rc);
+ kfree(cs);
+ } else {
+ rc = 0;
+ CDEBUG(D_CHANGELOG, "%s: started changelog thread\n",
+ obd->obd_name);
}
CERROR("Failed to start changelog thread: %d\n", rc);
- kfree(cs);
return rc;
}
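
This hunk is more than style: the old code pushed the task_struct pointer returned by kthread_run() straight through PTR_ERR() and then tested the result with IS_ERR_VALUE(), which cannot reliably tell a valid pointer from an errno, so thread-start failures could be misreported. A minimal sketch of the corrected pattern (demo_* names invented for illustration):

#include <linux/err.h>
#include <linux/kthread.h>

static int demo_thread_fn(void *data)
{
	return 0;
}

static int demo_start_thread(void *data)
{
	struct task_struct *task;

	/* keep the pointer; only convert it to an errno once
	 * IS_ERR() says it really is one
	 */
	task = kthread_run(demo_thread_fn, data, "demo_thread");
	if (IS_ERR(task))
		return PTR_ERR(task);
	return 0;
}

The mgc_setup() hunk further down applies the same fix to the ll_cfg_requeue thread.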
@@ -1597,7 +1605,7 @@
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_MDS_QUOTACHECK, LUSTRE_MDS_VERSION,
MDS_QUOTACHECK);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
body = req_capsule_client_get(&req->rq_pill, &RMF_OBD_QUOTACTL);
@@ -1606,7 +1614,8 @@
ptlrpc_request_set_replen(req);
/* the next poll will find -ENODATA, that means quotacheck is
- * going on */
+ * going on
+ */
cli->cl_qchk_stat = -ENODATA;
rc = ptlrpc_queue_wait(req);
if (rc)
@@ -1641,7 +1650,7 @@
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_MDS_QUOTACTL, LUSTRE_MDS_VERSION,
MDS_QUOTACTL);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
oqc = req_capsule_client_get(&req->rq_pill, &RMF_OBD_QUOTACTL);
@@ -1695,7 +1704,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_MDS_SWAP_LAYOUTS);
- if (req == NULL) {
+ if (!req) {
ldlm_lock_list_put(&cancels, l_bl_ast, count);
return -ENOMEM;
}
@@ -1881,7 +1890,7 @@
int rc = -EINVAL;
req = ptlrpc_request_alloc(imp, &RQF_MDS_GET_INFO);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req_capsule_set_size(&req->rq_pill, &RMF_GETINFO_KEY,
@@ -1906,7 +1915,8 @@
rc = ptlrpc_queue_wait(req);
/* -EREMOTE means the get_info result is partial, and it needs to
- * continue on another MDT, see fid2path part in lmv_iocontrol */
+ * continue on another MDT, see fid2path part in lmv_iocontrol
+ */
if (rc == 0 || rc == -EREMOTE) {
tmp = req_capsule_server_get(&req->rq_pill, &RMF_GETINFO_VAL);
memcpy(val, tmp, vallen);
@@ -2140,7 +2150,7 @@
*request = NULL;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_MDS_SYNC);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_SYNC);
@@ -2182,7 +2192,7 @@
* Flush current sequence to make client obtain new one
* from server in case of disconnect/reconnect.
*/
- if (cli->cl_seq != NULL)
+ if (cli->cl_seq)
seq_client_flush(cli->cl_seq);
rc = obd_notify_observer(obd, obd, OBD_NOTIFY_INACTIVE, NULL);
@@ -2245,7 +2255,8 @@
/* FIXME: if we ever get into a situation where there are too many
* opened files with open locks on a single node, then we really
- * should replay these open locks to reget it */
+ * should replay these open locks to re-acquire them
+ */
if (lock->l_policy_data.l_inodebits.bits & MDS_INODELOCK_OPEN)
return 0;
@@ -2429,7 +2440,7 @@
*request = NULL;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_MDS_GETATTR);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_MDS_VERSION, MDS_GETATTR);
diff --git a/drivers/staging/lustre/lustre/mgc/mgc_request.c b/drivers/staging/lustre/lustre/mgc/mgc_request.c
index ab4800c..6fc8225 100644
--- a/drivers/staging/lustre/lustre/mgc/mgc_request.c
+++ b/drivers/staging/lustre/lustre/mgc/mgc_request.c
@@ -90,7 +90,8 @@
int mgc_fsname2resid(char *fsname, struct ldlm_res_id *res_id, int type)
{
/* fsname is at most 8 chars long, may contain "-".
- * e.g. "lustre", "SUN-000" */
+ * e.g. "lustre", "SUN-000"
+ */
return mgc_name2resid(fsname, strlen(fsname), res_id, type);
}
EXPORT_SYMBOL(mgc_fsname2resid);
@@ -102,7 +103,8 @@
/* logname consists of "fsname-nodetype".
* e.g. "lustre-MDT0001", "SUN-000-client"
- * there is an exception: llog "params" */
+ * there is an exception: llog "params"
+ */
name_end = strrchr(logname, '-');
if (!name_end)
len = strlen(logname);
@@ -125,7 +127,8 @@
}
/* Drop a reference to a config log. When no longer referenced,
- we can free the config log data */
+ * we can free the config log data
+ */
static void config_log_put(struct config_llog_data *cld)
{
CDEBUG(D_INFO, "log %s refs %d\n", cld->cld_logname,
@@ -162,7 +165,7 @@
struct config_llog_data *found = NULL;
void *instance;
- LASSERT(logname != NULL);
+ LASSERT(logname);
instance = cfg ? cfg->cfg_instance : NULL;
spin_lock(&config_list_lock);
@@ -252,7 +255,8 @@
char logname[32];
/* we have to use different llog for clients and mdts for cmd
- * where only clients are notified if one of cmd server restarts */
+ * where only clients are notified if one of the cmd servers restarts
+ */
LASSERT(strlen(fsname) < sizeof(logname) / 2);
strcpy(logname, fsname);
LASSERT(lcfg.cfg_instance);
@@ -300,7 +304,7 @@
* <fsname>-sptlrpc. multiple regular logs may share one sptlrpc log.
*/
ptr = strrchr(logname, '-');
- if (ptr == NULL || ptr - logname > 8) {
+ if (!ptr || ptr - logname > 8) {
CERROR("logname %s is too long\n", logname);
return -EINVAL;
}
@@ -309,7 +313,7 @@
strcpy(seclogname + (ptr - logname), "-sptlrpc");
sptlrpc_cld = config_log_find(seclogname, NULL);
- if (sptlrpc_cld == NULL) {
+ if (!sptlrpc_cld) {
sptlrpc_cld = do_config_log_add(obd, seclogname,
CONFIG_T_SPTLRPC, NULL, NULL);
if (IS_ERR(sptlrpc_cld)) {
@@ -376,7 +380,7 @@
int rc = 0;
cld = config_log_find(logname, cfg);
- if (cld == NULL)
+ if (!cld)
return -ENOENT;
mutex_lock(&cld->cld_lock);
@@ -455,7 +459,7 @@
spin_lock(&config_list_lock);
list_for_each_entry(cld, &config_llog_list, cld_list_chain) {
- if (cld->cld_recover == NULL)
+ if (!cld->cld_recover)
continue;
seq_printf(m, " - { client: %s, nidtbl_version: %u }\n",
cld->cld_logname,
@@ -483,8 +487,9 @@
LASSERT(atomic_read(&cld->cld_refcount) > 0);
/* Do not run mgc_process_log on a disconnected export or an
- export which is being disconnected. Take the client
- semaphore to make the check non-racy. */
+ * export which is being disconnected. Take the client
+ * semaphore to make the check non-racy.
+ */
down_read(&cld->cld_mgcexp->exp_obd->u.cli.cl_sem);
if (cld->cld_mgcexp->exp_obd->u.cli.cl_conn_count != 0) {
CDEBUG(D_MGC, "updating log %s\n", cld->cld_logname);
@@ -529,8 +534,9 @@
}
/* Always wait a few seconds to allow the server who
- caused the lock revocation to finish its setup, plus some
- random so everyone doesn't try to reconnect at once. */
+ * caused the lock revocation to finish its setup, plus some
+ * randomness so everyone doesn't try to reconnect at once.
+ */
to = MGC_TIMEOUT_MIN_SECONDS * HZ;
to += rand * HZ / 100; /* rand is centi-seconds */
lwi = LWI_TIMEOUT(to, NULL, NULL);
@@ -559,7 +565,8 @@
LASSERT(atomic_read(&cld->cld_refcount) > 0);
/* Whether we enqueued again or not in mgc_process_log,
- * we're done with the ref from the old enqueue */
+ * we're done with the ref from the old enqueue
+ */
if (cld_prev)
config_log_put(cld_prev);
cld_prev = cld;
@@ -575,7 +582,8 @@
config_log_put(cld_prev);
/* break after scanning the list so that we can drop
- * refcount to losing lock clds */
+ * refcount to losing lock clds
+ */
if (unlikely(stopped)) {
spin_lock(&config_list_lock);
break;
@@ -598,7 +606,8 @@
}
/* Add a cld to the list to requeue. Start the requeue thread if needed.
- We are responsible for dropping the config log reference from here on out. */
+ * We are responsible for dropping the config log reference from here on out.
+ */
static void mgc_requeue_add(struct config_llog_data *cld)
{
CDEBUG(D_INFO, "log %s: requeue (r=%d sp=%d st=%x)\n",
@@ -635,7 +644,8 @@
int rc;
/* setup only remote ctxt, the local disk context is switched per each
- * filesystem during mgc_fs_setup() */
+ * filesystem during mgc_fs_setup()
+ */
rc = llog_setup(env, obd, &obd->obd_olg, LLOG_CONFIG_REPL_CTXT, obd,
&llog_client_ops);
if (rc)
@@ -697,7 +707,8 @@
static int mgc_cleanup(struct obd_device *obd)
{
/* COMPAT_146 - old config logs may have added profiles we don't
- know about */
+ * know about
+ */
if (obd->obd_type->typ_refcnt <= 1)
/* Only for the last mgc */
class_del_profiles();
@@ -711,6 +722,7 @@
static int mgc_setup(struct obd_device *obd, struct lustre_cfg *lcfg)
{
struct lprocfs_static_vars lvars = { NULL };
+ struct task_struct *task;
int rc;
ptlrpcd_addref();
@@ -734,10 +746,10 @@
init_waitqueue_head(&rq_waitq);
/* start requeue thread */
- rc = PTR_ERR(kthread_run(mgc_requeue_thread, NULL,
- "ll_cfg_requeue"));
- if (IS_ERR_VALUE(rc)) {
- CERROR("%s: Cannot start requeue thread (%d),no more log updates!\n",
+ task = kthread_run(mgc_requeue_thread, NULL, "ll_cfg_requeue");
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
+ CERROR("%s: cannot start requeue thread: rc = %d; no more log updates\n",
obd->obd_name, rc);
goto err_cleanup;
}
@@ -793,7 +805,8 @@
break;
}
/* Make sure not to re-enqueue when the mgc is stopping
- (we get called from client_disconnect_export) */
+ * (we get called from client_disconnect_export)
+ */
if (!lock->l_conn_export ||
!lock->l_conn_export->exp_obd->u.cli.cl_conn_count) {
CDEBUG(D_MGC, "log %.8s: disconnecting, won't requeue\n",
@@ -815,7 +828,8 @@
/* Not sure where this should go... */
/* This is the timeout value for MGS_CONNECT request plus a ping interval, such
- * that we can have a chance to try the secondary MGS if any. */
+ * that we can have a chance to try the secondary MGS if any.
+ */
#define MGC_ENQUEUE_LIMIT (INITIAL_CONNECT_TIMEOUT + (AT_OFF ? 0 : at_min) \
+ PING_INTERVAL)
#define MGC_TARGET_REG_LIMIT 10
@@ -879,11 +893,12 @@
cld->cld_resid.name[0]);
/* We need a callback for every lockholder, so don't try to
- ldlm_lock_match (see rev 1.1.2.11.2.47) */
+ * ldlm_lock_match (see rev 1.1.2.11.2.47)
+ */
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_LDLM_ENQUEUE, LUSTRE_DLM_VERSION,
LDLM_ENQUEUE);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req_capsule_set_size(&req->rq_pill, &RMF_DLM_LVB, RCL_SERVER, 0);
@@ -894,7 +909,8 @@
rc = ldlm_cli_enqueue(exp, &req, &einfo, &cld->cld_resid, NULL, flags,
NULL, 0, LVB_T_NONE, lockh, 0);
/* A failed enqueue should still call the mgc_blocking_ast,
- where it will be requeued if needed ("grant failed"). */
+ * where it will be requeued if needed ("grant failed").
+ */
ptlrpc_req_finished(req);
return rc;
}
@@ -921,7 +937,7 @@
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_MGS_TARGET_REG, LUSTRE_MGS_VERSION,
MGS_TARGET_REG);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req_mti = req_capsule_client_get(&req->rq_pill, &RMF_MGS_TARGET_INFO);
@@ -1109,7 +1125,7 @@
int rc = 0;
int off = 0;
- LASSERT(cfg->cfg_instance != NULL);
+ LASSERT(cfg->cfg_instance);
LASSERT(cfg->cfg_sb == cfg->cfg_instance);
inst = kzalloc(PAGE_CACHE_SIZE, GFP_KERNEL);
@@ -1195,7 +1211,7 @@
/* lustre-OST0001-osc-<instance #> */
strcpy(obdname, cld->cld_logname);
cname = strrchr(obdname, '-');
- if (cname == NULL) {
+ if (!cname) {
CERROR("mgc %s: invalid logname %s\n",
mgc->obd_name, obdname);
break;
@@ -1212,7 +1228,7 @@
/* find the obd by obdname */
obd = class_name2obd(obdname);
- if (obd == NULL) {
+ if (!obd) {
CDEBUG(D_INFO, "mgc %s: cannot find obdname %s\n",
mgc->obd_name, obdname);
rc = 0;
@@ -1227,7 +1243,7 @@
uuid = buf + pos;
down_read(&obd->u.cli.cl_sem);
- if (obd->u.cli.cl_import == NULL) {
+ if (!obd->u.cli.cl_import) {
/* client does not connect to the OST yet */
up_read(&obd->u.cli.cl_sem);
rc = 0;
@@ -1257,7 +1273,7 @@
rc = -ENOMEM;
lcfg = lustre_cfg_new(LCFG_PARAM, &bufs);
- if (lcfg == NULL) {
+ if (!lcfg) {
CERROR("mgc: cannot allocate memory\n");
break;
}
@@ -1309,14 +1325,14 @@
nrpages = CONFIG_READ_NRPAGES_INIT;
pages = kcalloc(nrpages, sizeof(*pages), GFP_KERNEL);
- if (pages == NULL) {
+ if (!pages) {
rc = -ENOMEM;
goto out;
}
for (i = 0; i < nrpages; i++) {
pages[i] = alloc_page(GFP_KERNEL);
- if (pages[i] == NULL) {
+ if (!pages[i]) {
rc = -ENOMEM;
goto out;
}
@@ -1327,7 +1343,7 @@
LASSERT(mutex_is_locked(&cld->cld_lock));
req = ptlrpc_request_alloc(class_exp2cliimp(cld->cld_mgcexp),
&RQF_MGS_CONFIG_READ);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto out;
}
@@ -1338,7 +1354,6 @@
/* pack request */
body = req_capsule_client_get(&req->rq_pill, &RMF_MGS_CONFIG_BODY);
- LASSERT(body != NULL);
LASSERT(sizeof(body->mcb_name) > strlen(cld->cld_logname));
if (strlcpy(body->mcb_name, cld->cld_logname, sizeof(body->mcb_name))
>= sizeof(body->mcb_name)) {
@@ -1353,7 +1368,7 @@
/* allocate bulk transfer descriptor */
desc = ptlrpc_prep_bulk_imp(req, nrpages, 1, BULK_PUT_SINK,
MGS_BULK_PORTAL);
- if (desc == NULL) {
+ if (!desc) {
rc = -ENOMEM;
goto out;
}
@@ -1373,7 +1388,8 @@
}
/* always update the index even though it might have errors with
- * handling the recover logs */
+ * handling the recover logs
+ */
cfg->cfg_last_idx = res->mcr_offset;
eof = res->mcr_offset == res->mcr_size;
@@ -1400,7 +1416,8 @@
mne_swab = !!ptlrpc_rep_need_swab(req);
#if LUSTRE_VERSION_CODE < OBD_OCD_VERSION(3, 2, 50, 0)
/* This import flag means the server did an extra swab of IR MNE
- * records (fixed in LU-1252), reverse it here if needed. LU-1644 */
+ * records (fixed in LU-1252), reverse it here if needed. LU-1644
+ */
if (unlikely(req->rq_import->imp_need_mne_swab))
mne_swab = !mne_swab;
#else
@@ -1434,7 +1451,7 @@
if (pages) {
for (i = 0; i < nrpages; i++) {
- if (pages[i] == NULL)
+ if (!pages[i])
break;
__free_page(pages[i]);
}
@@ -1489,7 +1506,8 @@
/* logname and instance info should be the same, so use our
* copy of the instance for the update. The cfg_last_idx will
- * be updated here. */
+ * be updated here.
+ */
rc = class_config_parse_llog(env, ctxt, cld->cld_logname,
&cld->cld_cfg);
@@ -1529,9 +1547,10 @@
LASSERT(cld);
/* I don't want multiple processes running process_log at once --
- sounds like badness. It actually might be fine, as long as
- we're not trying to update from the same log
- simultaneously (in which case we should use a per-log sem.) */
+ * sounds like badness. It actually might be fine, as long as
+ * we're not trying to update from the same log
+ * simultaneously (in which case we should use a per-log sem.)
+ */
mutex_lock(&cld->cld_lock);
if (cld->cld_stopping) {
mutex_unlock(&cld->cld_lock);
@@ -1556,7 +1575,8 @@
CDEBUG(D_MGC, "Can't get cfg lock: %d\n", rcl);
/* mark cld_lostlock so that it will requeue
- * after MGC becomes available. */
+ * after MGC becomes available.
+ */
cld->cld_lostlock = 1;
/* Get extra reference, it will be put in requeue thread */
config_log_get(cld);
@@ -1635,18 +1655,19 @@
if (rc)
break;
cld = config_log_find(logname, cfg);
- if (cld == NULL) {
+ if (!cld) {
rc = -ENOENT;
break;
}
/* COMPAT_146 */
/* FIXME only set this for old logs! Right now this forces
- us to always skip the "inside markers" check */
+ * us to always skip the "inside markers" check
+ */
cld->cld_cfg.cfg_flags |= CFG_F_COMPAT146;
rc = mgc_process_log(obd, cld);
- if (rc == 0 && cld->cld_recover != NULL) {
+ if (rc == 0 && cld->cld_recover) {
if (OCD_HAS_FLAG(&obd->u.cli.cl_import->
imp_connect_data, IMP_RECOV)) {
rc = mgc_process_log(obd, cld->cld_recover);
@@ -1660,7 +1681,7 @@
CERROR("Cannot process recover llog %d\n", rc);
}
- if (rc == 0 && cld->cld_params != NULL) {
+ if (rc == 0 && cld->cld_params) {
rc = mgc_process_log(obd, cld->cld_params);
if (rc == -ENOENT) {
CDEBUG(D_MGC,
diff --git a/drivers/staging/lustre/lustre/obdclass/acl.c b/drivers/staging/lustre/lustre/obdclass/acl.c
index 49ba885..0e02ae9 100644
--- a/drivers/staging/lustre/lustre/obdclass/acl.c
+++ b/drivers/staging/lustre/lustre/obdclass/acl.c
@@ -104,7 +104,7 @@
return old_size;
new = kmemdup(*header, new_size, GFP_NOFS);
- if (unlikely(new == NULL))
+ if (unlikely(!new))
return -ENOMEM;
kfree(*header);
@@ -124,7 +124,7 @@
return 0;
new = kmemdup(*header, ext_size, GFP_NOFS);
- if (unlikely(new == NULL))
+ if (unlikely(!new))
return -ENOMEM;
kfree(*header);
@@ -149,7 +149,7 @@
count = CFS_ACL_XATTR_COUNT(size, posix_acl_xattr);
esize = CFS_ACL_XATTR_SIZE(count, ext_acl_xattr);
new = kzalloc(esize, GFP_NOFS);
- if (unlikely(new == NULL))
+ if (unlikely(!new))
return ERR_PTR(-ENOMEM);
new->a_count = cpu_to_le32(count);
@@ -180,7 +180,7 @@
return -EINVAL;
new = kzalloc(size, GFP_NOFS);
- if (unlikely(new == NULL))
+ if (unlikely(!new))
return -ENOMEM;
new->a_version = cpu_to_le32(CFS_ACL_XATTR_VERSION);
@@ -300,7 +300,7 @@
ext_size = CFS_ACL_XATTR_SIZE(ext_count, ext_acl_xattr);
new = kzalloc(ext_size, GFP_NOFS);
- if (unlikely(new == NULL))
+ if (unlikely(!new))
return ERR_PTR(-ENOMEM);
for (i = 0, j = 0; i < posix_count; i++) {
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_io.c b/drivers/staging/lustre/lustre/obdclass/cl_io.c
index 63246ba..191ec43 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_io.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_io.c
@@ -44,6 +44,7 @@
#include "../include/obd_support.h"
#include "../include/lustre_fid.h"
#include <linux/list.h>
+#include <linux/sched.h>
#include "../include/cl_object.h"
#include "cl_internal.h"
@@ -93,7 +94,7 @@
* CIS_IO_GOING.
*/
ergo(io->ci_owned_nr > 0, io->ci_state == CIS_IO_GOING ||
- (io->ci_state == CIS_LOCKED && up != NULL));
+ (io->ci_state == CIS_LOCKED && up));
}
/**
@@ -111,7 +112,7 @@
slice = container_of(io->ci_layers.prev, struct cl_io_slice,
cis_linkage);
list_del_init(&slice->cis_linkage);
- if (slice->cis_iop->op[io->ci_type].cio_fini != NULL)
+ if (slice->cis_iop->op[io->ci_type].cio_fini)
slice->cis_iop->op[io->ci_type].cio_fini(env, slice);
/*
* Invalidate slice to catch use after free. This assumes that
@@ -164,7 +165,7 @@
result = 0;
cl_object_for_each(scan, obj) {
- if (scan->co_ops->coo_io_init != NULL) {
+ if (scan->co_ops->coo_io_init) {
result = scan->co_ops->coo_io_init(env, scan, io);
if (result != 0)
break;
@@ -186,7 +187,7 @@
struct cl_thread_info *info = cl_env_info(env);
LASSERT(obj != cl_object_top(obj));
- if (info->clt_current_io == NULL)
+ if (!info->clt_current_io)
info->clt_current_io = io;
return cl_io_init0(env, io, iot, obj);
}
@@ -208,7 +209,7 @@
struct cl_thread_info *info = cl_env_info(env);
LASSERT(obj == cl_object_top(obj));
- LASSERT(info->clt_current_io == NULL);
+ LASSERT(!info->clt_current_io);
info->clt_current_io = io;
return cl_io_init0(env, io, iot, obj);
@@ -224,7 +225,7 @@
enum cl_io_type iot, loff_t pos, size_t count)
{
LINVRNT(iot == CIT_READ || iot == CIT_WRITE);
- LINVRNT(io->ci_obj != NULL);
+ LINVRNT(io->ci_obj);
LU_OBJECT_HEADER(D_VFSTRACE, env, &io->ci_obj->co_lu,
"io range: %u [%llu, %llu) %u %u\n",
@@ -292,7 +293,7 @@
list_for_each_entry_safe(curr, temp,
&io->ci_lockset.cls_todo,
cill_linkage) {
- if (prev != NULL) {
+ if (prev) {
switch (cl_lock_descr_sort(&prev->cill_descr,
&curr->cill_descr)) {
case 0:
@@ -308,7 +309,8 @@
&prev->cill_linkage);
done = 0;
continue; /* don't change prev: it's
- * still "previous" */
+ * still "previous"
+ */
case -1: /* already in order */
break;
}
@@ -327,32 +329,31 @@
int cl_queue_match(const struct list_head *queue,
const struct cl_lock_descr *need)
{
- struct cl_io_lock_link *scan;
+ struct cl_io_lock_link *scan;
- list_for_each_entry(scan, queue, cill_linkage) {
- if (cl_lock_descr_match(&scan->cill_descr, need))
- return 1;
- }
- return 0;
+ list_for_each_entry(scan, queue, cill_linkage) {
+ if (cl_lock_descr_match(&scan->cill_descr, need))
+ return 1;
+ }
+ return 0;
}
EXPORT_SYMBOL(cl_queue_match);
static int cl_queue_merge(const struct list_head *queue,
const struct cl_lock_descr *need)
{
- struct cl_io_lock_link *scan;
+ struct cl_io_lock_link *scan;
- list_for_each_entry(scan, queue, cill_linkage) {
- if (cl_lock_descr_cmp(&scan->cill_descr, need))
- continue;
- cl_lock_descr_merge(&scan->cill_descr, need);
- CDEBUG(D_VFSTRACE, "lock: %d: [%lu, %lu]\n",
- scan->cill_descr.cld_mode, scan->cill_descr.cld_start,
- scan->cill_descr.cld_end);
- return 1;
- }
- return 0;
-
+ list_for_each_entry(scan, queue, cill_linkage) {
+ if (cl_lock_descr_cmp(&scan->cill_descr, need))
+ continue;
+ cl_lock_descr_merge(&scan->cill_descr, need);
+ CDEBUG(D_VFSTRACE, "lock: %d: [%lu, %lu]\n",
+ scan->cill_descr.cld_mode, scan->cill_descr.cld_start,
+ scan->cill_descr.cld_end);
+ return 1;
+ }
+ return 0;
}
static int cl_lockset_match(const struct cl_lockset *set,
@@ -399,11 +400,11 @@
struct cl_lock *lock = link->cill_lock;
list_del_init(&link->cill_linkage);
- if (lock != NULL) {
+ if (lock) {
cl_lock_release(env, lock, "io", io);
link->cill_lock = NULL;
}
- if (link->cill_fini != NULL)
+ if (link->cill_fini)
link->cill_fini(env, link);
}
@@ -419,7 +420,8 @@
list_for_each_entry_safe(link, temp, &set->cls_todo, cill_linkage) {
if (!cl_lockset_match(set, &link->cill_descr)) {
/* XXX some locking to guarantee that locks aren't
- * expanded in between. */
+ * expanded in between.
+ */
result = cl_lockset_lock_one(env, io, set, link);
if (result != 0)
break;
@@ -458,7 +460,7 @@
LINVRNT(cl_io_invariant(io));
cl_io_for_each(scan, io) {
- if (scan->cis_iop->op[io->ci_type].cio_lock == NULL)
+ if (!scan->cis_iop->op[io->ci_type].cio_lock)
continue;
result = scan->cis_iop->op[io->ci_type].cio_lock(env, scan);
if (result != 0)
@@ -503,7 +505,7 @@
cl_lock_link_fini(env, io, link);
}
cl_io_for_each_reverse(scan, io) {
- if (scan->cis_iop->op[io->ci_type].cio_unlock != NULL)
+ if (scan->cis_iop->op[io->ci_type].cio_unlock)
scan->cis_iop->op[io->ci_type].cio_unlock(env, scan);
}
io->ci_state = CIS_UNLOCKED;
@@ -529,7 +531,7 @@
result = 0;
cl_io_for_each(scan, io) {
- if (scan->cis_iop->op[io->ci_type].cio_iter_init == NULL)
+ if (!scan->cis_iop->op[io->ci_type].cio_iter_init)
continue;
result = scan->cis_iop->op[io->ci_type].cio_iter_init(env,
scan);
@@ -556,7 +558,7 @@
LINVRNT(cl_io_invariant(io));
cl_io_for_each_reverse(scan, io) {
- if (scan->cis_iop->op[io->ci_type].cio_iter_fini != NULL)
+ if (scan->cis_iop->op[io->ci_type].cio_iter_fini)
scan->cis_iop->op[io->ci_type].cio_iter_fini(env, scan);
}
io->ci_state = CIS_IT_ENDED;
@@ -581,7 +583,7 @@
/* layers have to be notified. */
cl_io_for_each_reverse(scan, io) {
- if (scan->cis_iop->op[io->ci_type].cio_advance != NULL)
+ if (scan->cis_iop->op[io->ci_type].cio_advance)
scan->cis_iop->op[io->ci_type].cio_advance(env, scan,
nob);
}
@@ -621,7 +623,7 @@
int result;
link = kzalloc(sizeof(*link), GFP_NOFS);
- if (link != NULL) {
+ if (link) {
link->cill_descr = *descr;
link->cill_fini = cl_free_io_lock_link;
result = cl_io_lock_add(env, io, link);
@@ -648,7 +650,7 @@
io->ci_state = CIS_IO_GOING;
cl_io_for_each(scan, io) {
- if (scan->cis_iop->op[io->ci_type].cio_start == NULL)
+ if (!scan->cis_iop->op[io->ci_type].cio_start)
continue;
result = scan->cis_iop->op[io->ci_type].cio_start(env, scan);
if (result != 0)
@@ -673,7 +675,7 @@
LINVRNT(cl_io_invariant(io));
cl_io_for_each_reverse(scan, io) {
- if (scan->cis_iop->op[io->ci_type].cio_end != NULL)
+ if (scan->cis_iop->op[io->ci_type].cio_end)
scan->cis_iop->op[io->ci_type].cio_end(env, scan);
/* TODO: error handling. */
}
@@ -687,7 +689,7 @@
const struct cl_page_slice *slice;
slice = cl_page_at(page, ios->cis_obj->co_lu.lo_dev->ld_type);
- LINVRNT(slice != NULL);
+ LINVRNT(slice);
return slice;
}
@@ -759,11 +761,11 @@
* "parallel io" (see CLO_REPEAT loops in cl_lock.c).
*/
cl_io_for_each(scan, io) {
- if (scan->cis_iop->cio_read_page != NULL) {
+ if (scan->cis_iop->cio_read_page) {
const struct cl_page_slice *slice;
slice = cl_io_slice_page(scan, page);
- LINVRNT(slice != NULL);
+ LINVRNT(slice);
result = scan->cis_iop->cio_read_page(env, scan, slice);
if (result != 0)
break;
@@ -798,7 +800,7 @@
LASSERT(cl_page_in_io(page, io));
cl_io_for_each_reverse(scan, io) {
- if (scan->cis_iop->cio_prepare_write != NULL) {
+ if (scan->cis_iop->cio_prepare_write) {
const struct cl_page_slice *slice;
slice = cl_io_slice_page(scan, page);
@@ -833,11 +835,11 @@
* state. Better (and more general) way of dealing with such situation
* is needed.
*/
- LASSERT(cl_page_is_owned(page, io) || page->cp_parent != NULL);
+ LASSERT(cl_page_is_owned(page, io) || page->cp_parent);
LASSERT(cl_page_in_io(page, io));
cl_io_for_each(scan, io) {
- if (scan->cis_iop->cio_commit_write != NULL) {
+ if (scan->cis_iop->cio_commit_write) {
const struct cl_page_slice *slice;
slice = cl_io_slice_page(scan, page);
@@ -872,7 +874,7 @@
LINVRNT(crt < ARRAY_SIZE(scan->cis_iop->req_op));
cl_io_for_each(scan, io) {
- if (scan->cis_iop->req_op[crt].cio_submit == NULL)
+ if (!scan->cis_iop->req_op[crt].cio_submit)
continue;
result = scan->cis_iop->req_op[crt].cio_submit(env, scan, crt,
queue);
@@ -900,7 +902,7 @@
int rc;
cl_page_list_for_each(pg, &queue->c2_qin) {
- LASSERT(pg->cp_sync_io == NULL);
+ LASSERT(!pg->cp_sync_io);
pg->cp_sync_io = anchor;
}
@@ -913,14 +915,14 @@
* clean pages), count them as completed to avoid infinite
* wait.
*/
- cl_page_list_for_each(pg, &queue->c2_qin) {
+ cl_page_list_for_each(pg, &queue->c2_qin) {
pg->cp_sync_io = NULL;
cl_sync_io_note(anchor, 1);
- }
+ }
- /* wait for the IO to be finished. */
- rc = cl_sync_io_wait(env, io, &queue->c2_qout,
- anchor, timeout);
+ /* wait for the IO to be finished. */
+ rc = cl_sync_io_wait(env, io, &queue->c2_qout,
+ anchor, timeout);
} else {
LASSERT(list_empty(&queue->c2_qout.pl_pages));
cl_page_list_for_each(pg, &queue->c2_qin)
@@ -1026,7 +1028,7 @@
{
struct list_head *linkage = &slice->cis_linkage;
- LASSERT((linkage->prev == NULL && linkage->next == NULL) ||
+ LASSERT((!linkage->prev && !linkage->next) ||
list_empty(linkage));
list_add_tail(linkage, &io->ci_layers);
@@ -1053,8 +1055,9 @@
void cl_page_list_add(struct cl_page_list *plist, struct cl_page *page)
{
/* it would be better to check that page is owned by "current" io, but
- * it is not passed here. */
- LASSERT(page->cp_owner != NULL);
+ * it is not passed here.
+ */
+ LASSERT(page->cp_owner);
LINVRNT(plist->pl_owner == current);
lockdep_off();
@@ -1263,7 +1266,7 @@
*/
struct cl_io *cl_io_top(struct cl_io *io)
{
- while (io->ci_parent != NULL)
+ while (io->ci_parent)
io = io->ci_parent;
return io;
}
@@ -1296,13 +1299,13 @@
LASSERT(list_empty(&req->crq_pages));
LASSERT(req->crq_nrpages == 0);
LINVRNT(list_empty(&req->crq_layers));
- LINVRNT(equi(req->crq_nrobjs > 0, req->crq_o != NULL));
+ LINVRNT(equi(req->crq_nrobjs > 0, req->crq_o));
- if (req->crq_o != NULL) {
+ if (req->crq_o) {
for (i = 0; i < req->crq_nrobjs; ++i) {
struct cl_object *obj = req->crq_o[i].ro_obj;
- if (obj != NULL) {
+ if (obj) {
lu_object_ref_del_at(&obj->co_lu,
&req->crq_o[i].ro_obj_ref,
"cl_req", req);
@@ -1326,7 +1329,7 @@
do {
list_for_each_entry(slice, &page->cp_layers, cpl_linkage) {
dev = lu2cl_dev(slice->cpl_obj->co_lu.lo_dev);
- if (dev->cd_ops->cdo_req_init != NULL) {
+ if (dev->cd_ops->cdo_req_init) {
result = dev->cd_ops->cdo_req_init(env,
dev, req);
if (result != 0)
@@ -1334,7 +1337,7 @@
}
}
page = page->cp_child;
- } while (page != NULL && result == 0);
+ } while (page && result == 0);
return result;
}
@@ -1353,7 +1356,7 @@
slice = list_entry(req->crq_layers.prev,
struct cl_req_slice, crs_linkage);
list_del_init(&slice->crs_linkage);
- if (slice->crs_ops->cro_completion != NULL)
+ if (slice->crs_ops->cro_completion)
slice->crs_ops->cro_completion(env, slice, rc);
}
cl_req_free(env, req);
@@ -1371,7 +1374,7 @@
LINVRNT(nr_objects > 0);
req = kzalloc(sizeof(*req), GFP_NOFS);
- if (req != NULL) {
+ if (req) {
int result;
req->crq_type = crt;
@@ -1380,7 +1383,7 @@
req->crq_o = kcalloc(nr_objects, sizeof(req->crq_o[0]),
GFP_NOFS);
- if (req->crq_o != NULL) {
+ if (req->crq_o) {
req->crq_nrobjs = nr_objects;
result = cl_req_init(env, req, page);
} else
@@ -1408,7 +1411,7 @@
page = cl_page_top(page);
LASSERT(list_empty(&page->cp_flight));
- LASSERT(page->cp_req == NULL);
+ LASSERT(!page->cp_req);
CL_PAGE_DEBUG(D_PAGE, env, page, "req %p, %d, %u\n",
req, req->crq_type, req->crq_nrpages);
@@ -1418,7 +1421,7 @@
page->cp_req = req;
obj = cl_object_top(page->cp_obj);
for (i = 0, rqo = req->crq_o; obj != rqo->ro_obj; ++i, ++rqo) {
- if (rqo->ro_obj == NULL) {
+ if (!rqo->ro_obj) {
rqo->ro_obj = obj;
cl_object_get(obj);
lu_object_ref_add_at(&obj->co_lu, &rqo->ro_obj_ref,
@@ -1463,11 +1466,11 @@
* of objects.
*/
for (i = 0; i < req->crq_nrobjs; ++i)
- LASSERT(req->crq_o[i].ro_obj != NULL);
+ LASSERT(req->crq_o[i].ro_obj);
result = 0;
list_for_each_entry(slice, &req->crq_layers, crs_linkage) {
- if (slice->crs_ops->cro_prep != NULL) {
+ if (slice->crs_ops->cro_prep) {
result = slice->crs_ops->cro_prep(env, slice);
if (result != 0)
break;
@@ -1501,9 +1504,8 @@
scan = cl_page_at(page,
slice->crs_dev->cd_lu_dev.ld_type);
- LASSERT(scan != NULL);
obj = scan->cpl_obj;
- if (slice->crs_ops->cro_attr_set != NULL)
+ if (slice->crs_ops->cro_attr_set)
slice->crs_ops->cro_attr_set(env, slice, obj,
attr + i, flags);
}
@@ -1511,9 +1513,6 @@
}
EXPORT_SYMBOL(cl_req_attr_set);
-/* XXX complete(), init_completion(), and wait_for_completion(), until they are
- * implemented in libcfs. */
-# include <linux/sched.h>
/**
* Initialize synchronous io wait anchor, for transfer of \a nrpages pages.
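
The bulk of the hunks above, and most of those that follow, apply the
checkpatch-preferred Boolean style for pointer tests: drop the explicit NULL
comparison and let the pointer convert to a truth value. A minimal
before/after sketch, with hypothetical 'ops' and 'obj' names:

        /* Before: explicit comparisons that checkpatch warns about. */
        if (ops->prepare != NULL && obj != NULL)
                ops->prepare(obj);
        if (ops->commit == NULL)
                return -ENOSYS;

        /* After: equivalent, relying on pointer-to-Boolean conversion. */
        if (ops->prepare && obj)
                ops->prepare(obj);
        if (!ops->commit)
                return -ENOSYS;

The generated code is identical; the change is purely stylistic, and it is
what nearly every one-line hunk in this series does.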
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_lock.c b/drivers/staging/lustre/lustre/obdclass/cl_lock.c
index 1836dc0..7b7f344 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_lock.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_lock.c
@@ -96,7 +96,7 @@
result = atomic_read(&lock->cll_ref) > 0 &&
cl_lock_invariant_trusted(env, lock);
- if (!result && env != NULL)
+ if (!result && env)
CL_LOCK_DEBUG(D_ERROR, env, lock, "invariant broken");
return result;
}
@@ -288,7 +288,7 @@
LINVRNT(cl_lock_invariant(env, lock));
obj = lock->cll_descr.cld_obj;
- LINVRNT(obj != NULL);
+ LINVRNT(obj);
CDEBUG(D_TRACE, "releasing reference: %d %p %lu\n",
atomic_read(&lock->cll_ref), lock, RETIP);
@@ -362,7 +362,7 @@
struct lu_object_header *head;
lock = kmem_cache_alloc(cl_lock_kmem, GFP_NOFS | __GFP_ZERO);
- if (lock != NULL) {
+ if (lock) {
atomic_set(&lock->cll_ref, 1);
lock->cll_descr = *descr;
lock->cll_state = CLS_NEW;
@@ -461,7 +461,7 @@
LINVRNT(cl_lock_invariant_trusted(env, lock));
list_for_each_entry(slice, &lock->cll_layers, cls_linkage) {
- if (slice->cls_ops->clo_fits_into != NULL &&
+ if (slice->cls_ops->clo_fits_into &&
!slice->cls_ops->clo_fits_into(env, slice, need, io))
return 0;
}
@@ -524,14 +524,14 @@
lock = cl_lock_lookup(env, obj, io, need);
spin_unlock(&head->coh_lock_guard);
- if (lock == NULL) {
+ if (!lock) {
lock = cl_lock_alloc(env, obj, io, need);
if (!IS_ERR(lock)) {
struct cl_lock *ghost;
spin_lock(&head->coh_lock_guard);
ghost = cl_lock_lookup(env, obj, io, need);
- if (ghost == NULL) {
+ if (!ghost) {
cl_lock_get_trust(lock);
list_add_tail(&lock->cll_linkage,
&head->coh_locks);
@@ -572,7 +572,7 @@
spin_lock(&head->coh_lock_guard);
lock = cl_lock_lookup(env, obj, io, need);
spin_unlock(&head->coh_lock_guard);
- if (lock == NULL)
+ if (!lock)
return NULL;
cl_lock_mutex_get(env, lock);
@@ -584,7 +584,7 @@
cl_lock_put(env, lock);
lock = NULL;
}
- } while (lock == NULL);
+ } while (!lock);
cl_lock_hold_add(env, lock, scope, source);
cl_lock_user_add(env, lock);
@@ -775,7 +775,7 @@
lock->cll_flags |= CLF_CANCELLED;
list_for_each_entry_reverse(slice, &lock->cll_layers,
cls_linkage) {
- if (slice->cls_ops->clo_cancel != NULL)
+ if (slice->cls_ops->clo_cancel)
slice->cls_ops->clo_cancel(env, slice);
}
}
@@ -812,7 +812,7 @@
*/
list_for_each_entry_reverse(slice, &lock->cll_layers,
cls_linkage) {
- if (slice->cls_ops->clo_delete != NULL)
+ if (slice->cls_ops->clo_delete)
slice->cls_ops->clo_delete(env, slice);
}
/*
@@ -935,7 +935,8 @@
if (result == 0) {
/* To avoid being interrupted by the 'non-fatal' signals
* (SIGCHLD, for instance), we'd block them temporarily.
- * LU-305 */
+ * LU-305
+ */
blocked = cfs_block_sigsinv(LUSTRE_FATAL_SIGS);
init_waitqueue_entry(&waiter, current);
@@ -946,7 +947,8 @@
LASSERT(cl_lock_nr_mutexed(env) == 0);
/* Returning ERESTARTSYS instead of EINTR so syscalls
- * can be restarted if signals are pending here */
+ * can be restarted if signals are pending here
+ */
result = -ERESTARTSYS;
if (likely(!OBD_FAIL_CHECK(OBD_FAIL_LOCK_STATE_WAIT_INTR))) {
schedule();
@@ -974,7 +976,7 @@
LINVRNT(cl_lock_invariant(env, lock));
list_for_each_entry(slice, &lock->cll_layers, cls_linkage)
- if (slice->cls_ops->clo_state != NULL)
+ if (slice->cls_ops->clo_state)
slice->cls_ops->clo_state(env, slice, state);
wake_up_all(&lock->cll_wq);
}
@@ -1039,7 +1041,7 @@
result = -ENOSYS;
list_for_each_entry_reverse(slice, &lock->cll_layers,
cls_linkage) {
- if (slice->cls_ops->clo_unuse != NULL) {
+ if (slice->cls_ops->clo_unuse) {
result = slice->cls_ops->clo_unuse(env, slice);
if (result != 0)
break;
@@ -1072,7 +1074,7 @@
result = -ENOSYS;
state = cl_lock_intransit(env, lock);
list_for_each_entry(slice, &lock->cll_layers, cls_linkage) {
- if (slice->cls_ops->clo_use != NULL) {
+ if (slice->cls_ops->clo_use) {
result = slice->cls_ops->clo_use(env, slice);
if (result != 0)
break;
@@ -1125,7 +1127,7 @@
result = -ENOSYS;
list_for_each_entry(slice, &lock->cll_layers, cls_linkage) {
- if (slice->cls_ops->clo_enqueue != NULL) {
+ if (slice->cls_ops->clo_enqueue) {
result = slice->cls_ops->clo_enqueue(env,
slice, io, flags);
if (result != 0)
@@ -1170,7 +1172,8 @@
/* kick layers. */
result = cl_enqueue_kick(env, lock, io, flags);
/* For AGL case, the cl_lock::cll_state may
- * become CLS_HELD already. */
+ * become CLS_HELD already.
+ */
if (result == 0 && lock->cll_state == CLS_QUEUING)
cl_lock_state_set(env, lock, CLS_ENQUEUED);
break;
@@ -1215,7 +1218,7 @@
LASSERT(cl_lock_is_mutexed(lock));
LASSERT(lock->cll_state == CLS_QUEUING);
- LASSERT(lock->cll_conflict != NULL);
+ LASSERT(lock->cll_conflict);
conflict = lock->cll_conflict;
lock->cll_conflict = NULL;
@@ -1258,7 +1261,7 @@
do {
result = cl_enqueue_try(env, lock, io, enqflags);
if (result == CLO_WAIT) {
- if (lock->cll_conflict != NULL)
+ if (lock->cll_conflict)
result = cl_lock_enqueue_wait(env, lock, 1);
else
result = cl_lock_state_wait(env, lock);
@@ -1300,7 +1303,8 @@
}
/* Only if the lock is in CLS_HELD or CLS_ENQUEUED state, it can hold
- * underlying resources. */
+ * underlying resources.
+ */
if (!(lock->cll_state == CLS_HELD || lock->cll_state == CLS_ENQUEUED)) {
cl_lock_user_del(env, lock);
return 0;
@@ -1416,7 +1420,7 @@
result = -ENOSYS;
list_for_each_entry(slice, &lock->cll_layers, cls_linkage) {
- if (slice->cls_ops->clo_wait != NULL) {
+ if (slice->cls_ops->clo_wait) {
result = slice->cls_ops->clo_wait(env, slice);
if (result != 0)
break;
@@ -1449,7 +1453,7 @@
LINVRNT(cl_lock_invariant(env, lock));
LASSERTF(lock->cll_state == CLS_ENQUEUED || lock->cll_state == CLS_HELD,
- "Wrong state %d \n", lock->cll_state);
+ "Wrong state %d\n", lock->cll_state);
LASSERT(lock->cll_holds > 0);
do {
@@ -1487,7 +1491,7 @@
pound = 0;
list_for_each_entry_reverse(slice, &lock->cll_layers, cls_linkage) {
- if (slice->cls_ops->clo_weigh != NULL) {
+ if (slice->cls_ops->clo_weigh) {
ounce = slice->cls_ops->clo_weigh(env, slice);
pound += ounce;
if (pound < ounce) /* over-weight^Wflow */
@@ -1523,7 +1527,7 @@
LINVRNT(cl_lock_invariant(env, lock));
list_for_each_entry_reverse(slice, &lock->cll_layers, cls_linkage) {
- if (slice->cls_ops->clo_modify != NULL) {
+ if (slice->cls_ops->clo_modify) {
result = slice->cls_ops->clo_modify(env, slice, desc);
if (result != 0)
return result;
@@ -1584,7 +1588,7 @@
result = cl_lock_enclosure(env, lock, closure);
if (result == 0) {
list_for_each_entry(slice, &lock->cll_layers, cls_linkage) {
- if (slice->cls_ops->clo_closure != NULL) {
+ if (slice->cls_ops->clo_closure) {
result = slice->cls_ops->clo_closure(env, slice,
closure);
if (result != 0)
@@ -1777,13 +1781,15 @@
lock = NULL;
need->cld_mode = CLM_READ; /* CLM_READ matches both READ & WRITE, but
- * not PHANTOM */
+ * not PHANTOM
+ */
need->cld_start = need->cld_end = index;
need->cld_enq_flags = 0;
spin_lock(&head->coh_lock_guard);
/* It is fine to match any group lock since there could be only one
- * with a uniq gid and it conflicts with all other lock modes too */
+ * with a unique gid and it conflicts with all other lock modes too
+ */
list_for_each_entry(scan, &head->coh_locks, cll_linkage) {
if (scan != except &&
(scan->cll_descr.cld_mode == CLM_GROUP ||
@@ -1798,7 +1804,8 @@
(canceld || !(scan->cll_flags & CLF_CANCELLED)) &&
(pending || !(scan->cll_flags & CLF_CANCELPEND))) {
/* Don't increase cs_hit here since this
- * is just a helper function. */
+ * is just a helper function.
+ */
cl_lock_get_trust(scan);
lock = scan;
break;
@@ -1820,7 +1827,6 @@
dtype = lock->cll_descr.cld_obj->co_lu.lo_dev->ld_type;
slice = cl_page_at(page, dtype);
- LASSERT(slice != NULL);
return slice->cpl_page->cp_index;
}
@@ -1840,11 +1846,12 @@
/* refresh non-overlapped index */
tmp = cl_lock_at_pgoff(env, lock->cll_descr.cld_obj, index,
lock, 1, 0);
- if (tmp != NULL) {
+ if (tmp) {
/* Cache the first-non-overlapped index so as to skip
* all pages within [index, clt_fn_index). This
* is safe because if tmp lock is canceled, it will
- * discard these pages. */
+ * discard these pages.
+ */
info->clt_fn_index = tmp->cll_descr.cld_end + 1;
if (tmp->cll_descr.cld_end == CL_PAGE_EOF)
info->clt_fn_index = CL_PAGE_EOF;
@@ -1950,7 +1957,7 @@
* already destroyed (as otherwise they will be left unprotected).
*/
LASSERT(ergo(!cancel,
- head->coh_tree.rnode == NULL && head->coh_pages == 0));
+ !head->coh_tree.rnode && head->coh_pages == 0));
spin_lock(&head->coh_lock_guard);
while (!list_empty(&head->coh_locks)) {
@@ -2194,7 +2201,7 @@
(*printer)(env, cookie, " %s@%p: ",
slice->cls_obj->co_lu.lo_dev->ld_type->ldt_name,
slice);
- if (slice->cls_ops->clo_print != NULL)
+ if (slice->cls_ops->clo_print)
slice->cls_ops->clo_print(env, cookie, printer, slice);
(*printer)(env, cookie, "\n");
}
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_object.c b/drivers/staging/lustre/lustre/obdclass/cl_object.c
index f118983..39b4fd0 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_object.c
@@ -152,7 +152,7 @@
struct cl_object_header *hdr = cl_object_header(o);
struct cl_object *top;
- while (hdr->coh_parent != NULL)
+ while (hdr->coh_parent)
hdr = hdr->coh_parent;
top = lu2cl(lu_object_top(&hdr->coh_lu));
@@ -217,7 +217,7 @@
top = obj->co_lu.lo_header;
result = 0;
list_for_each_entry(obj, &top->loh_layers, co_lu.lo_linkage) {
- if (obj->co_ops->coo_attr_get != NULL) {
+ if (obj->co_ops->coo_attr_get) {
result = obj->co_ops->coo_attr_get(env, obj, attr);
if (result != 0) {
if (result > 0)
@@ -249,7 +249,7 @@
result = 0;
list_for_each_entry_reverse(obj, &top->loh_layers,
co_lu.lo_linkage) {
- if (obj->co_ops->coo_attr_set != NULL) {
+ if (obj->co_ops->coo_attr_set) {
result = obj->co_ops->coo_attr_set(env, obj, attr, v);
if (result != 0) {
if (result > 0)
@@ -280,7 +280,7 @@
result = 0;
list_for_each_entry_reverse(obj, &top->loh_layers,
co_lu.lo_linkage) {
- if (obj->co_ops->coo_glimpse != NULL) {
+ if (obj->co_ops->coo_glimpse) {
result = obj->co_ops->coo_glimpse(env, obj, lvb);
if (result != 0)
break;
@@ -306,7 +306,7 @@
top = obj->co_lu.lo_header;
result = 0;
list_for_each_entry(obj, &top->loh_layers, co_lu.lo_linkage) {
- if (obj->co_ops->coo_conf_set != NULL) {
+ if (obj->co_ops->coo_conf_set) {
result = obj->co_ops->coo_conf_set(env, obj, conf);
if (result != 0)
break;
@@ -328,7 +328,7 @@
struct cl_object_header *hdr;
hdr = cl_object_header(obj);
- LASSERT(hdr->coh_tree.rnode == NULL);
+ LASSERT(!hdr->coh_tree.rnode);
LASSERT(hdr->coh_pages == 0);
set_bit(LU_OBJECT_HEARD_BANSHEE, &hdr->coh_lu.loh_flags);
@@ -541,7 +541,7 @@
{
LASSERT(cle->ce_ref == 0);
LASSERT(cle->ce_magic == &cl_env_init0);
- LASSERT(cle->ce_debug == NULL && cle->ce_owner == NULL);
+ LASSERT(!cle->ce_debug && !cle->ce_owner);
cle->ce_ref = 1;
cle->ce_debug = debug;
@@ -576,7 +576,7 @@
{
struct cl_env *cle = cl_env_hops_obj(hn);
- LASSERT(cle->ce_owner != NULL);
+ LASSERT(cle->ce_owner);
return (key == cle->ce_owner);
}
@@ -610,7 +610,7 @@
if (cle) {
int rc;
- LASSERT(cle->ce_owner == NULL);
+ LASSERT(!cle->ce_owner);
cle->ce_owner = (void *) (long) current->pid;
rc = cfs_hash_add_unique(cl_env_hash, cle->ce_owner,
&cle->ce_node);
@@ -638,7 +638,7 @@
CFS_HASH_MAX_THETA,
&cl_env_hops,
CFS_HASH_RW_BKTLOCK);
- return cl_env_hash != NULL ? 0 : -ENOMEM;
+ return cl_env_hash ? 0 : -ENOMEM;
}
static void cl_env_store_fini(void)
@@ -648,7 +648,7 @@
static inline struct cl_env *cl_env_detach(struct cl_env *cle)
{
- if (cle == NULL)
+ if (!cle)
cle = cl_env_fetch();
if (cle && cle->ce_owner)
@@ -663,7 +663,7 @@
struct cl_env *cle;
cle = kmem_cache_alloc(cl_env_kmem, GFP_NOFS | __GFP_ZERO);
- if (cle != NULL) {
+ if (cle) {
int rc;
INIT_LIST_HEAD(&cle->ce_linkage);
@@ -717,7 +717,7 @@
env = NULL;
cle = cl_env_fetch();
- if (cle != NULL) {
+ if (cle) {
CL_ENV_INC(hit);
env = &cle->ce_lu;
*refcheck = ++cle->ce_ref;
@@ -742,7 +742,7 @@
struct lu_env *env;
env = cl_env_peek(refcheck);
- if (env == NULL) {
+ if (!env) {
env = cl_env_new(lu_context_tags_default,
lu_session_tags_default,
__builtin_return_address(0));
@@ -769,7 +769,7 @@
{
struct lu_env *env;
- LASSERT(cl_env_peek(refcheck) == NULL);
+ LASSERT(!cl_env_peek(refcheck));
env = cl_env_new(tags, tags, __builtin_return_address(0));
if (!IS_ERR(env)) {
struct cl_env *cle;
@@ -784,7 +784,7 @@
static void cl_env_exit(struct cl_env *cle)
{
- LASSERT(cle->ce_owner == NULL);
+ LASSERT(!cle->ce_owner);
lu_context_exit(&cle->ce_lu.le_ctx);
lu_context_exit(&cle->ce_ses);
}
@@ -803,7 +803,7 @@
cle = cl_env_container(env);
LASSERT(cle->ce_ref > 0);
- LASSERT(ergo(refcheck != NULL, cle->ce_ref == *refcheck));
+ LASSERT(ergo(refcheck, cle->ce_ref == *refcheck));
CDEBUG(D_OTHER, "%d@%p\n", cle->ce_ref, cle);
if (--cle->ce_ref == 0) {
@@ -878,7 +878,7 @@
nest->cen_cookie = NULL;
env = cl_env_peek(&nest->cen_refcheck);
- if (env != NULL) {
+ if (env) {
if (!cl_io_is_going(env))
return env;
cl_env_put(env, &nest->cen_refcheck);
@@ -930,14 +930,12 @@
const char *typename;
struct lu_device *d;
- LASSERT(ldt != NULL);
-
typename = ldt->ldt_name;
d = ldt->ldt_ops->ldto_device_alloc(env, ldt, NULL);
if (!IS_ERR(d)) {
int rc;
- if (site != NULL)
+ if (site)
d->ld_site = site;
rc = ldt->ldt_ops->ldto_device_init(env, d, typename, next);
if (rc == 0) {
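
Several of the assertions touched above lean on the libcfs logic helpers
ergo() and equi(); reading them as "implies" and "if and only if" makes the
hunks easier to review. A paraphrase of the helpers (the real libcfs
definitions may differ in casts), followed by the assertion from the
cl_env_put() hunk:

        #define ergo(a, b) (!(a) || (b))        /* a implies b */
        #define equi(a, b) (!!(a) == !!(b))     /* a iff b */

        /* The hunk above then reads: "if a refcheck pointer was
         * passed in, the stored reference count must match it".
         */
        LASSERT(ergo(refcheck, cle->ce_ref == *refcheck));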
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_page.c b/drivers/staging/lustre/lustre/obdclass/cl_page.c
index 61f28eb..72f9924 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_page.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_page.c
@@ -69,7 +69,7 @@
*/
static struct cl_page *cl_page_top_trusted(struct cl_page *page)
{
- while (page->cp_parent != NULL)
+ while (page->cp_parent)
page = page->cp_parent;
return page;
}
@@ -110,7 +110,7 @@
return slice;
}
page = page->cp_child;
- } while (page != NULL);
+ } while (page);
return NULL;
}
@@ -127,7 +127,7 @@
assert_spin_locked(&hdr->coh_page_guard);
page = radix_tree_lookup(&hdr->coh_tree, index);
- if (page != NULL)
+ if (page)
cl_page_get_trust(page);
return page;
}
@@ -188,7 +188,7 @@
* Pages for an lsm-less file have no underneath sub-page
* for osc, in case of ...
*/
- PASSERT(env, page, slice != NULL);
+ PASSERT(env, page, slice);
page = slice->cpl_page;
/*
@@ -245,9 +245,9 @@
struct cl_object *obj = page->cp_obj;
PASSERT(env, page, list_empty(&page->cp_batch));
- PASSERT(env, page, page->cp_owner == NULL);
- PASSERT(env, page, page->cp_req == NULL);
- PASSERT(env, page, page->cp_parent == NULL);
+ PASSERT(env, page, !page->cp_owner);
+ PASSERT(env, page, !page->cp_req);
+ PASSERT(env, page, !page->cp_parent);
PASSERT(env, page, page->cp_state == CPS_FREEING);
might_sleep();
@@ -284,7 +284,7 @@
struct lu_object_header *head;
page = kzalloc(cl_object_header(o)->coh_page_bufsize, GFP_NOFS);
- if (page != NULL) {
+ if (page) {
int result = 0;
atomic_set(&page->cp_ref, 1);
@@ -305,7 +305,7 @@
head = o->co_lu.lo_header;
list_for_each_entry(o, &head->loh_layers,
co_lu.lo_linkage) {
- if (o->co_ops->coo_page_init != NULL) {
+ if (o->co_ops->coo_page_init) {
result = o->co_ops->coo_page_init(env, o,
page, vmpage);
if (result != 0) {
@@ -369,13 +369,13 @@
*/
page = cl_vmpage_page(vmpage, o);
PINVRNT(env, page,
- ergo(page != NULL,
+ ergo(page,
cl_page_vmpage(env, page) == vmpage &&
(void *)radix_tree_lookup(&hdr->coh_tree,
idx) == page));
}
- if (page != NULL)
+ if (page)
return page;
/* allocate and initialize cl_page */
@@ -385,7 +385,7 @@
if (type == CPT_TRANSIENT) {
if (parent) {
- LASSERT(page->cp_parent == NULL);
+ LASSERT(!page->cp_parent);
page->cp_parent = parent;
parent->cp_child = page;
}
@@ -418,7 +418,7 @@
"fail to insert into radix tree: %d\n", err);
} else {
if (parent) {
- LASSERT(page->cp_parent == NULL);
+ LASSERT(!page->cp_parent);
page->cp_parent = parent;
parent->cp_child = page;
}
@@ -426,7 +426,7 @@
}
spin_unlock(&hdr->coh_page_guard);
- if (unlikely(ghost != NULL)) {
+ if (unlikely(ghost)) {
cl_page_delete0(env, ghost, 0);
cl_page_free(env, ghost);
}
@@ -467,14 +467,13 @@
owner = pg->cp_owner;
return cl_page_in_use(pg) &&
- ergo(parent != NULL, parent->cp_child == pg) &&
- ergo(child != NULL, child->cp_parent == pg) &&
- ergo(child != NULL, pg->cp_obj != child->cp_obj) &&
- ergo(parent != NULL, pg->cp_obj != parent->cp_obj) &&
- ergo(owner != NULL && parent != NULL,
+ ergo(parent, parent->cp_child == pg) &&
+ ergo(child, child->cp_parent == pg) &&
+ ergo(child, pg->cp_obj != child->cp_obj) &&
+ ergo(parent, pg->cp_obj != parent->cp_obj) &&
+ ergo(owner && parent,
parent->cp_owner == pg->cp_owner->ci_parent) &&
- ergo(owner != NULL && child != NULL,
- child->cp_owner->ci_parent == owner) &&
+ ergo(owner && child, child->cp_owner->ci_parent == owner) &&
/*
* Either page is early in initialization (has neither child
* nor parent yet), or it is in the object radix tree.
@@ -482,7 +481,7 @@
ergo(pg->cp_state < CPS_FREEING && pg->cp_type == CPT_CACHEABLE,
(void *)radix_tree_lookup(&header->coh_tree,
pg->cp_index) == pg ||
- (child == NULL && parent == NULL));
+ (!child && !parent));
}
static void cl_page_state_set0(const struct lu_env *env,
@@ -535,10 +534,10 @@
old = page->cp_state;
PASSERT(env, page, allowed_transitions[old][state]);
CL_PAGE_HEADER(D_TRACE, env, page, "%d -> %d\n", old, state);
- for (; page != NULL; page = page->cp_child) {
+ for (; page; page = page->cp_child) {
PASSERT(env, page, page->cp_state == old);
PASSERT(env, page,
- equi(state == CPS_OWNED, page->cp_owner != NULL));
+ equi(state == CPS_OWNED, page->cp_owner));
cl_page_state_set_trust(page, state);
}
@@ -584,7 +583,7 @@
LASSERT(page->cp_state == CPS_FREEING);
LASSERT(atomic_read(&page->cp_ref) == 0);
- PASSERT(env, page, page->cp_owner == NULL);
+ PASSERT(env, page, !page->cp_owner);
PASSERT(env, page, list_empty(&page->cp_batch));
/*
* Page is no longer reachable by other threads. Tear
@@ -609,11 +608,11 @@
page = cl_page_top(page);
do {
list_for_each_entry(slice, &page->cp_layers, cpl_linkage) {
- if (slice->cpl_ops->cpo_vmpage != NULL)
+ if (slice->cpl_ops->cpo_vmpage)
return slice->cpl_ops->cpo_vmpage(env, slice);
}
page = page->cp_child;
- } while (page != NULL);
+ } while (page);
LBUG(); /* ->cpo_vmpage() has to be defined somewhere in the stack */
}
EXPORT_SYMBOL(cl_page_vmpage);
@@ -639,10 +638,10 @@
* can be rectified easily.
*/
top = (struct cl_page *)vmpage->private;
- if (top == NULL)
+ if (!top)
return NULL;
- for (page = top; page != NULL; page = page->cp_child) {
+ for (page = top; page; page = page->cp_child) {
if (cl_object_same(page->cp_obj, obj)) {
cl_page_get_trust(page);
break;
@@ -689,7 +688,7 @@
cpl_linkage) { \
__method = *(void **)((char *)__scan->cpl_ops + \
__op); \
- if (__method != NULL) { \
+ if (__method) { \
__result = (*__method)(__env, __scan, \
## __VA_ARGS__); \
if (__result != 0) \
@@ -697,7 +696,7 @@
} \
} \
__page = __page->cp_child; \
- } while (__page != NULL && __result == 0); \
+ } while (__page && __result == 0); \
if (__result > 0) \
__result = 0; \
__result; \
@@ -717,12 +716,12 @@
cpl_linkage) { \
__method = *(void **)((char *)__scan->cpl_ops + \
__op); \
- if (__method != NULL) \
+ if (__method) \
(*__method)(__env, __scan, \
## __VA_ARGS__); \
} \
__page = __page->cp_child; \
- } while (__page != NULL); \
+ } while (__page); \
} while (0)
#define CL_PAGE_INVOID_REVERSE(_env, _page, _op, _proto, ...) \
@@ -734,19 +733,19 @@
void (*__method)_proto; \
\
/* get to the bottom page. */ \
- while (__page->cp_child != NULL) \
+ while (__page->cp_child) \
__page = __page->cp_child; \
do { \
list_for_each_entry_reverse(__scan, &__page->cp_layers, \
cpl_linkage) { \
__method = *(void **)((char *)__scan->cpl_ops + \
__op); \
- if (__method != NULL) \
+ if (__method) \
(*__method)(__env, __scan, \
## __VA_ARGS__); \
} \
__page = __page->cp_parent; \
- } while (__page != NULL); \
+ } while (__page); \
} while (0)
static int cl_page_invoke(const struct lu_env *env,
@@ -772,8 +771,8 @@
static void cl_page_owner_clear(struct cl_page *page)
{
- for (page = cl_page_top(page); page != NULL; page = page->cp_child) {
- if (page->cp_owner != NULL) {
+ for (page = cl_page_top(page); page; page = page->cp_child) {
+ if (page->cp_owner) {
LASSERT(page->cp_owner->ci_owned_nr > 0);
page->cp_owner->ci_owned_nr--;
page->cp_owner = NULL;
@@ -784,10 +783,8 @@
static void cl_page_owner_set(struct cl_page *page)
{
- for (page = cl_page_top(page); page != NULL; page = page->cp_child) {
- LASSERT(page->cp_owner != NULL);
+ for (page = cl_page_top(page); page; page = page->cp_child)
page->cp_owner->ci_owned_nr++;
- }
}
void cl_page_disown0(const struct lu_env *env,
@@ -862,8 +859,8 @@
struct cl_io *, int),
io, nonblock);
if (result == 0) {
- PASSERT(env, pg, pg->cp_owner == NULL);
- PASSERT(env, pg, pg->cp_req == NULL);
+ PASSERT(env, pg, !pg->cp_owner);
+ PASSERT(env, pg, !pg->cp_req);
pg->cp_owner = io;
pg->cp_task = current;
cl_page_owner_set(pg);
@@ -921,7 +918,7 @@
io = cl_io_top(io);
cl_page_invoid(env, io, pg, CL_PAGE_OP(cpo_assume));
- PASSERT(env, pg, pg->cp_owner == NULL);
+ PASSERT(env, pg, !pg->cp_owner);
pg->cp_owner = io;
pg->cp_task = current;
cl_page_owner_set(pg);
@@ -1037,7 +1034,7 @@
* skip removing it.
*/
tmp = pg->cp_child;
- for (; tmp != NULL; tmp = tmp->cp_child) {
+ for (; tmp; tmp = tmp->cp_child) {
void *value;
struct cl_object_header *hdr;
@@ -1135,7 +1132,7 @@
pg = cl_page_top_trusted((struct cl_page *)pg);
slice = container_of(pg->cp_layers.next,
const struct cl_page_slice, cpl_linkage);
- PASSERT(env, pg, slice->cpl_ops->cpo_is_vmlocked != NULL);
+ PASSERT(env, pg, slice->cpl_ops->cpo_is_vmlocked);
/*
* Call ->cpo_is_vmlocked() directly instead of going through
* CL_PAGE_INVOKE(), because cl_page_is_vmlocked() is used by
@@ -1216,7 +1213,7 @@
PASSERT(env, pg, crt < CRT_NR);
/* cl_page::cp_req already cleared by the caller (osc_completion()) */
- PASSERT(env, pg, pg->cp_req == NULL);
+ PASSERT(env, pg, !pg->cp_req);
PASSERT(env, pg, pg->cp_state == cl_req_type_state(crt));
CL_PAGE_HEADER(D_TRACE, env, pg, "%d %d\n", crt, ioret);
@@ -1304,7 +1301,7 @@
return -EINVAL;
list_for_each_entry(scan, &pg->cp_layers, cpl_linkage) {
- if (scan->cpl_ops->io[crt].cpo_cache_add == NULL)
+ if (!scan->cpl_ops->io[crt].cpo_cache_add)
continue;
result = scan->cpl_ops->io[crt].cpo_cache_add(env, scan, io);
@@ -1450,8 +1447,8 @@
{
struct cl_page *scan;
- for (scan = cl_page_top((struct cl_page *)pg);
- scan != NULL; scan = scan->cp_child)
+ for (scan = cl_page_top((struct cl_page *)pg); scan;
+ scan = scan->cp_child)
cl_page_header_print(env, cookie, printer, scan);
CL_PAGE_INVOKE(env, (struct cl_page *)pg, CL_PAGE_OP(cpo_print),
(const struct lu_env *env,
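
The CL_PAGE_INVOKE()/CL_PAGE_INVOID() macros cleaned up above dispatch to a
per-layer method found by its byte offset in the ops table. A compact
userspace model of that lookup, with hypothetical types and names:

        #include <stddef.h>

        struct layer_ops {
                int (*own)(void *env, void *slice);
                int (*print)(void *env, void *slice);
        };

        #define LAYER_OP(op)    offsetof(struct layer_ops, op)

        /* Fetch the method stored op_off bytes into the ops table;
         * NULL means this layer does not implement it.
         */
        static int invoke(void *env, void *slice,
                          struct layer_ops *ops, size_t op_off)
        {
                int (*method)(void *, void *);

                method = *(int (**)(void *, void *))((char *)ops + op_off);
                return method ? method(env, slice) : 0;
        }

A call site would look like invoke(env, slice, ops, LAYER_OP(print)); the
macros above perform the same walk once per layer of the page.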
diff --git a/drivers/staging/lustre/lustre/obdclass/class_obd.c b/drivers/staging/lustre/lustre/obdclass/class_obd.c
index 65cf46c..c1310c2 100644
--- a/drivers/staging/lustre/lustre/obdclass/class_obd.c
+++ b/drivers/staging/lustre/lustre/obdclass/class_obd.c
@@ -42,7 +42,6 @@
#include "../../include/linux/lnet/lnetctl.h"
#include "../include/lustre_debug.h"
#include "../include/lprocfs_status.h"
-#include "../include/lustre/lustre_build_version.h"
#include <linux/list.h>
#include "../include/cl_object.h"
#include "llog_internal.h"
@@ -52,7 +51,7 @@
struct list_head obd_types;
DEFINE_RWLOCK(obd_dev_lock);
-/* The following are visible and mutable through /proc/sys/lustre/. */
+/* The following are visible and mutable through /sys/fs/lustre. */
unsigned int obd_debug_peer_on_timeout;
EXPORT_SYMBOL(obd_debug_peer_on_timeout);
unsigned int obd_dump_on_timeout;
@@ -67,7 +66,7 @@
EXPORT_SYMBOL(obd_timeout);
unsigned int obd_timeout_set;
EXPORT_SYMBOL(obd_timeout_set);
-/* Adaptive timeout defs here instead of ptlrpc module for /proc/sys/ access */
+/* Adaptive timeout defs here instead of ptlrpc module for /sys/fs/ access */
unsigned int at_min;
EXPORT_SYMBOL(at_min);
unsigned int at_max = 600;
@@ -218,14 +217,14 @@
goto out;
}
- if (strlen(BUILD_VERSION) + 1 > data->ioc_inllen1) {
+ if (strlen(LUSTRE_VERSION_STRING) + 1 > data->ioc_inllen1) {
CERROR("ioctl buffer too small to hold version\n");
err = -EINVAL;
goto out;
}
- memcpy(data->ioc_bulk, BUILD_VERSION,
- strlen(BUILD_VERSION) + 1);
+ memcpy(data->ioc_bulk, LUSTRE_VERSION_STRING,
+ strlen(LUSTRE_VERSION_STRING) + 1);
err = obd_ioctl_popdata((void __user *)arg, data, len);
if (err)
@@ -341,7 +340,7 @@
}
if (data->ioc_dev == OBD_DEV_BY_DEVNAME) {
- if (data->ioc_inllen4 <= 0 || data->ioc_inlbuf4 == NULL) {
+ if (data->ioc_inllen4 <= 0 || !data->ioc_inlbuf4) {
err = -EINVAL;
goto out;
}
@@ -358,7 +357,7 @@
goto out;
}
- if (obd == NULL) {
+ if (!obd) {
CERROR("OBD ioctl : No Device %d\n", data->ioc_dev);
err = -EINVAL;
goto out;
@@ -481,7 +480,7 @@
int lustre_register_fs(void);
- LCONSOLE_INFO("Lustre: Build Version: "BUILD_VERSION"\n");
+ LCONSOLE_INFO("Lustre: Build Version: " LUSTRE_VERSION_STRING "\n");
spin_lock_init(&obd_types_lock);
obd_zombie_impexp_init();
@@ -509,7 +508,8 @@
/* Default the dirty page cache cap to 1/2 of system memory.
* For clients with less memory, a larger fraction is needed
- * for other purposes (mostly for BGL). */
+ * for other purposes (mostly for BGL).
+ */
if (totalram_pages <= 512 << (20 - PAGE_CACHE_SHIFT))
obd_max_dirty_pages = totalram_pages / 4;
else
@@ -544,8 +544,6 @@
return err;
}
-/* liblustre doesn't call cleanup_obdclass, apparently. we carry on in this
- * ifdef to the end of the file to cover module and versioning goo.*/
static void cleanup_obdclass(void)
{
int i;
@@ -579,7 +577,7 @@
}
MODULE_AUTHOR("OpenSFS, Inc. <http://www.lustre.org/>");
-MODULE_DESCRIPTION("Lustre Class Driver Build Version: " BUILD_VERSION);
+MODULE_DESCRIPTION("Lustre Class Driver Build Version: " LUSTRE_VERSION_STRING);
MODULE_LICENSE("GPL");
MODULE_VERSION(LUSTRE_VERSION_STRING);
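
Aside from swapping BUILD_VERSION for LUSTRE_VERSION_STRING, the ioctl path
above keeps its guard that the reply buffer can hold the version string and
its terminating NUL. The shape of that guard, with hypothetical buf/buflen
standing in for ioc_bulk/ioc_inllen1:

        #include <string.h>

        static int copy_version(char *buf, size_t buflen, const char *ver)
        {
                /* Refuse rather than truncate; the caller turns this
                 * into -EINVAL, as the hunk above does.
                 */
                if (strlen(ver) + 1 > buflen)
                        return -1;
                memcpy(buf, ver, strlen(ver) + 1);
                return 0;
        }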
diff --git a/drivers/staging/lustre/lustre/obdclass/genops.c b/drivers/staging/lustre/lustre/obdclass/genops.c
index 5665655..9042632 100644
--- a/drivers/staging/lustre/lustre/obdclass/genops.c
+++ b/drivers/staging/lustre/lustre/obdclass/genops.c
@@ -70,17 +70,16 @@
struct obd_device *obd;
obd = kmem_cache_alloc(obd_device_cachep, GFP_NOFS | __GFP_ZERO);
- if (obd != NULL)
+ if (obd)
obd->obd_magic = OBD_DEVICE_MAGIC;
return obd;
}
static void obd_device_free(struct obd_device *obd)
{
- LASSERT(obd != NULL);
LASSERTF(obd->obd_magic == OBD_DEVICE_MAGIC, "obd %p obd_magic %08x != %08x\n",
obd, obd->obd_magic, OBD_DEVICE_MAGIC);
- if (obd->obd_namespace != NULL) {
+ if (obd->obd_namespace) {
CERROR("obd %p: namespace %p was not properly cleaned up (obd_force=%d)!\n",
obd, obd->obd_namespace, obd->obd_force);
LBUG();
@@ -113,15 +112,6 @@
if (!type) {
const char *modname = name;
- if (strcmp(modname, "obdfilter") == 0)
- modname = "ofd";
-
- if (strcmp(modname, LUSTRE_LWP_NAME) == 0)
- modname = LUSTRE_OSP_NAME;
-
- if (!strncmp(modname, LUSTRE_MDS_NAME, strlen(LUSTRE_MDS_NAME)))
- modname = LUSTRE_MDT_NAME;
-
if (!request_module("%s", modname)) {
CDEBUG(D_INFO, "Loaded module '%s'\n", modname);
type = class_search_type(name);
@@ -203,7 +193,7 @@
goto failed;
}
- if (ldt != NULL) {
+ if (ldt) {
type->typ_lu = ldt;
rc = lu_device_type_init(ldt);
if (rc != 0)
@@ -365,7 +355,7 @@
obd, obd->obd_magic, OBD_DEVICE_MAGIC);
LASSERTF(obd == obd_devs[obd->obd_minor], "obd %p != obd_devs[%d] %p\n",
obd, obd->obd_minor, obd_devs[obd->obd_minor]);
- LASSERT(obd_type != NULL);
+ LASSERT(obd_type);
CDEBUG(D_INFO, "Release obd device %s at %d obd_type name =%s\n",
obd->obd_name, obd->obd_minor, obd->obd_type->typ_name);
@@ -391,7 +381,8 @@
if (obd && strcmp(name, obd->obd_name) == 0) {
/* Make sure we finished attaching before we give
- out any references */
+ * out any references
+ */
LASSERT(obd->obd_magic == OBD_DEVICE_MAGIC);
if (obd->obd_attached) {
read_unlock(&obd_dev_lock);
@@ -466,8 +457,9 @@
EXPORT_SYMBOL(class_num2obd);
/* Search for a client OBD connected to tgt_uuid. If grp_uuid is
- specified, then only the client with that uuid is returned,
- otherwise any client connected to the tgt is returned. */
+ * specified, then only the client with that uuid is returned,
+ * otherwise any client connected to the tgt is returned.
+ */
struct obd_device *class_find_client_obd(struct obd_uuid *tgt_uuid,
const char *typ_name,
struct obd_uuid *grp_uuid)
@@ -498,9 +490,10 @@
EXPORT_SYMBOL(class_find_client_obd);
/* Iterate the obd_device list looking for devices that have grp_uuid. Start
- searching at *next, and if a device is found, the next index to look
- at is saved in *next. If next is NULL, then the first matching device
- will always be returned. */
+ * searching at *next, and if a device is found, the next index to look
+ * at is saved in *next. If next is NULL, then the first matching device
+ * will always be returned.
+ */
struct obd_device *class_devices_in_group(struct obd_uuid *grp_uuid, int *next)
{
int i;
@@ -659,7 +652,7 @@
struct obd_device *obd = exp->exp_obd;
LASSERT_ATOMIC_ZERO(&exp->exp_refcount);
- LASSERT(obd != NULL);
+ LASSERT(obd);
CDEBUG(D_IOCTL, "destroying export %p/%s for %s\n", exp,
exp->exp_client_uuid.uuid, obd->obd_name);
@@ -699,7 +692,6 @@
void class_export_put(struct obd_export *exp)
{
- LASSERT(exp != NULL);
LASSERT_ATOMIC_GT_LT(&exp->exp_refcount, 0, LI_POISON);
CDEBUG(D_INFO, "PUTting export %p : new refcount %d\n", exp,
atomic_read(&exp->exp_refcount) - 1);
@@ -719,7 +711,8 @@
/* Creates a new export, adds it to the hash table, and returns a
* pointer to it. The refcount is 2: one for the hash reference, and
- * one for the pointer returned by this function. */
+ * one for the pointer returned by this function.
+ */
struct obd_export *class_new_export(struct obd_device *obd,
struct obd_uuid *cluuid)
{
@@ -902,8 +895,9 @@
at_init(&at->iat_net_latency, 0, 0);
for (i = 0; i < IMP_AT_MAX_PORTALS; i++) {
/* max service estimates are tracked on the server side, so
- don't use the AT history here, just use the last reported
- val. (But keep hist for proc histogram, worst_ever) */
+ * don't use the AT history here, just use the last reported
+ * val. (But keep hist for proc histogram, worst_ever)
+ */
at_init(&at->iat_service_estimate[i], INITIAL_CONNECT_TIMEOUT,
AT_FLG_NOHIST);
}
@@ -942,7 +936,8 @@
init_imp_at(&imp->imp_at);
/* the default magic is V2, will be used in connect RPC, and
- * then adjusted according to the flags in request/reply. */
+ * then adjusted according to the flags in request/reply.
+ */
imp->imp_msg_magic = LUSTRE_MSG_MAGIC_V2;
return imp;
@@ -951,7 +946,7 @@
void class_destroy_import(struct obd_import *import)
{
- LASSERT(import != NULL);
+ LASSERT(import);
LASSERT(import != LP_POISON);
class_handle_unhash(&import->imp_handle);
@@ -971,8 +966,7 @@
LASSERT(lock->l_exp_refs_nr >= 0);
- if (lock->l_exp_refs_target != NULL &&
- lock->l_exp_refs_target != exp) {
+ if (lock->l_exp_refs_target && lock->l_exp_refs_target != exp) {
LCONSOLE_WARN("setting export %p for lock %p which already has export %p\n",
exp, lock, lock->l_exp_refs_target);
}
@@ -1006,17 +1000,18 @@
#endif
/* A connection defines an export context in which preallocation can
- be managed. This releases the export pointer reference, and returns
- the export handle, so the export refcount is 1 when this function
- returns. */
+ * be managed. This releases the export pointer reference, and returns
+ * the export handle, so the export refcount is 1 when this function
+ * returns.
+ */
int class_connect(struct lustre_handle *conn, struct obd_device *obd,
struct obd_uuid *cluuid)
{
struct obd_export *export;
- LASSERT(conn != NULL);
- LASSERT(obd != NULL);
- LASSERT(cluuid != NULL);
+ LASSERT(conn);
+ LASSERT(obd);
+ LASSERT(cluuid);
export = class_new_export(obd, cluuid);
if (IS_ERR(export))
@@ -1036,7 +1031,8 @@
* and if disconnect really need
* 2 - removing from hash
* 3 - in client_unlink_export
- * The export pointer passed to this function can destroyed */
+ * The export pointer passed to this function can be destroyed
+ */
int class_disconnect(struct obd_export *export)
{
int already_disconnected;
@@ -1053,7 +1049,8 @@
/* class_cleanup(), abort_recovery(), and class_fail_export()
* all end up in here, and if any of them race we shouldn't
- * call extra class_export_puts(). */
+ * call extra class_export_puts().
+ */
if (already_disconnected)
goto no_disconn;
@@ -1093,7 +1090,8 @@
/* Most callers into obd_disconnect are removing their own reference
* (request, for example) in addition to the one from the hash table.
- * We don't have such a reference here, so make one. */
+ * We don't have such a reference here, so make one.
+ */
class_export_get(exp);
rc = obd_disconnect(exp);
if (rc)
@@ -1142,14 +1140,14 @@
spin_unlock(&obd_zombie_impexp_lock);
- if (import != NULL) {
+ if (import) {
class_import_destroy(import);
spin_lock(&obd_zombie_impexp_lock);
zombies_count--;
spin_unlock(&obd_zombie_impexp_lock);
}
- if (export != NULL) {
+ if (export) {
class_export_destroy(export);
spin_lock(&obd_zombie_impexp_lock);
zombies_count--;
@@ -1157,7 +1155,7 @@
}
cond_resched();
- } while (import != NULL || export != NULL);
+ } while (import || export);
}
static struct completion obd_zombie_start;
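
genops.c also drops several LASSERT(ptr != NULL) checks (obd_device_free(),
class_export_put()) where the very next statement dereferences the pointer;
the dereference faults on NULL anyway, so the assertion adds nothing. A
sketch of the pattern, with a hypothetical structure:

        struct obj { int refcount; };

        static void obj_put(struct obj *o)
        {
                /* Before the cleanup:
                 *      LASSERT(o != NULL);
                 * The line below already oopses on a NULL pointer,
                 * so the assertion was redundant and is removed.
                 */
                o->refcount--;
        }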
diff --git a/drivers/staging/lustre/lustre/obdclass/linux/linux-module.c b/drivers/staging/lustre/lustre/obdclass/linux/linux-module.c
index 1913f3e..8eddf20 100644
--- a/drivers/staging/lustre/lustre/obdclass/linux/linux-module.c
+++ b/drivers/staging/lustre/lustre/obdclass/linux/linux-module.c
@@ -59,7 +59,6 @@
#include <linux/highmem.h>
#include <linux/io.h>
#include <asm/ioctls.h>
-#include <linux/poll.h>
#include <linux/uaccess.h>
#include <linux/miscdevice.h>
#include <linux/seq_file.h>
@@ -71,7 +70,6 @@
#include "../../include/obd_class.h"
#include "../../include/lprocfs_status.h"
#include "../../include/lustre_ver.h"
-#include "../../include/lustre/lustre_build_version.h"
/* buffer MUST be at least the size of obd_ioctl_hdr */
int obd_ioctl_getdata(char **buf, int *len, void __user *arg)
@@ -104,9 +102,10 @@
/* When there are lots of processes calling vmalloc on multi-core
* systems, the high lock contention hurts performance badly;
* obdfilter-survey, which relies on ioctl, is an example. So we'd
- * better avoid vmalloc on ioctl path. LU-66 */
+ * better avoid vmalloc on ioctl path. LU-66
+ */
*buf = libcfs_kvzalloc(hdr.ioc_len, GFP_NOFS);
- if (*buf == NULL) {
+ if (!*buf) {
CERROR("Cannot allocate control buffer of len %d\n",
hdr.ioc_len);
return -EINVAL;
@@ -454,8 +453,7 @@
int class_procfs_clean(void)
{
- if (debugfs_lustre_root != NULL)
- debugfs_remove_recursive(debugfs_lustre_root);
+ debugfs_remove_recursive(debugfs_lustre_root);
debugfs_lustre_root = NULL;
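
The linux-module.c hunk relies on debugfs_remove_recursive() being safe to
call unconditionally: it returns early for a NULL or error dentry, so the
surrounding if was dead weight. Cleanup paths can therefore always take the
unconditional form (my_debugfs_root is a hypothetical stand-in):

        static struct dentry *my_debugfs_root;

        static void my_cleanup(void)
        {
                /* No NULL check needed; debugfs_remove_recursive()
                 * is a no-op for NULL/IS_ERR dentries.
                 */
                debugfs_remove_recursive(my_debugfs_root);
                my_debugfs_root = NULL;
        }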
diff --git a/drivers/staging/lustre/lustre/obdclass/llog.c b/drivers/staging/lustre/lustre/obdclass/llog.c
index f956d7e..f7ee605 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog.c
@@ -76,8 +76,6 @@
*/
static void llog_free_handle(struct llog_handle *loghandle)
{
- LASSERT(loghandle != NULL);
-
/* failed llog_init_handle */
if (!loghandle->lgh_hdr)
goto out;
@@ -115,7 +113,7 @@
if (rc)
return rc;
- if (lop->lop_read_header == NULL)
+ if (!lop->lop_read_header)
return -EOPNOTSUPP;
rc = lop->lop_read_header(env, handle);
@@ -144,7 +142,7 @@
struct llog_log_hdr *llh;
int rc;
- LASSERT(handle->lgh_hdr == NULL);
+ LASSERT(!handle->lgh_hdr);
llh = kzalloc(sizeof(*llh), GFP_NOFS);
if (!llh)
@@ -228,11 +226,11 @@
return 0;
}
- if (cd != NULL) {
+ if (cd) {
last_called_index = cd->lpcd_first_idx;
index = cd->lpcd_first_idx + 1;
}
- if (cd != NULL && cd->lpcd_last_idx)
+ if (cd && cd->lpcd_last_idx)
last_index = cd->lpcd_last_idx;
else
last_index = LLOG_BITMAP_BYTES * 8 - 1;
@@ -262,7 +260,8 @@
/* NB: when rec->lrh_len is accessed it is already swabbed
* since it is used at the "end" of the loop and the rec
- * swabbing is done at the beginning of the loop. */
+ * swabbing is done at the beginning of the loop.
+ */
for (rec = (struct llog_rec_hdr *)buf;
(char *)rec < buf + LLOG_CHUNK_SIZE;
rec = (struct llog_rec_hdr *)((char *)rec + rec->lrh_len)) {
@@ -328,7 +327,7 @@
}
out:
- if (cd != NULL)
+ if (cd)
cd->lpcd_last_idx = last_called_index;
kfree(buf);
@@ -376,17 +375,20 @@
lpi->lpi_catdata = catdata;
if (fork) {
+ struct task_struct *task;
+
/* The new thread can't use parent env,
- * init the new one in llog_process_thread_daemonize. */
+ * init the new one in llog_process_thread_daemonize.
+ */
lpi->lpi_env = NULL;
init_completion(&lpi->lpi_completion);
- rc = PTR_ERR(kthread_run(llog_process_thread_daemonize, lpi,
- "llog_process_thread"));
- if (IS_ERR_VALUE(rc)) {
+ task = kthread_run(llog_process_thread_daemonize, lpi,
+ "llog_process_thread");
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
CERROR("%s: cannot start thread: rc = %d\n",
loghandle->lgh_ctxt->loc_obd->obd_name, rc);
- kfree(lpi);
- return rc;
+ goto out_lpi;
}
wait_for_completion(&lpi->lpi_completion);
} else {
@@ -394,6 +396,7 @@
llog_process_thread(lpi);
}
rc = lpi->lpi_rc;
+out_lpi:
kfree(lpi);
return rc;
}
@@ -416,13 +419,13 @@
LASSERT(ctxt);
LASSERT(ctxt->loc_logops);
- if (ctxt->loc_logops->lop_open == NULL) {
+ if (!ctxt->loc_logops->lop_open) {
*lgh = NULL;
return -EOPNOTSUPP;
}
*lgh = llog_alloc_handle();
- if (*lgh == NULL)
+ if (!*lgh)
return -ENOMEM;
(*lgh)->lgh_ctxt = ctxt;
(*lgh)->lgh_logops = ctxt->loc_logops;
@@ -449,7 +452,7 @@
rc = llog_handle2ops(loghandle, &lop);
if (rc)
goto out;
- if (lop->lop_close == NULL) {
+ if (!lop->lop_close) {
rc = -EOPNOTSUPP;
goto out;
}
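
The llog_process_or_fork() hunk above is a genuine bug fix rather than
style: the old code ran PTR_ERR() on kthread_run()'s result unconditionally,
stuffing a valid task pointer into an int and then testing it with
IS_ERR_VALUE(), which both loses the pointer and misclassifies it. The
correct idiom keeps the pointer and tests it first (worker_fn, data and the
thread name here are hypothetical):

        struct task_struct *task;

        task = kthread_run(worker_fn, data, "my_worker");
        if (IS_ERR(task))
                return PTR_ERR(task);   /* only now convert to errno */

The added out_lpi label additionally routes the error path through the same
kfree(lpi) as the success path.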
diff --git a/drivers/staging/lustre/lustre/obdclass/llog_cat.c b/drivers/staging/lustre/lustre/obdclass/llog_cat.c
index 0f05e9c..b88ccba 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog_cat.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog_cat.c
@@ -69,7 +69,7 @@
struct llog_handle *loghandle;
int rc = 0;
- if (cathandle == NULL)
+ if (!cathandle)
return -EBADF;
down_write(&cathandle->lgh_lock);
diff --git a/drivers/staging/lustre/lustre/obdclass/llog_obd.c b/drivers/staging/lustre/lustre/obdclass/llog_obd.c
index 9bc5199..826623f 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog_obd.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog_obd.c
@@ -88,7 +88,8 @@
spin_unlock(&obd->obd_dev_lock);
/* obd->obd_starting is needed for the case of cleanup
- * in error case while obd is starting up. */
+ * in error case while obd is starting up.
+ */
LASSERTF(obd->obd_starting == 1 ||
obd->obd_stopping == 1 || obd->obd_set_up == 0,
"wrong obd state: %d/%d/%d\n", !!obd->obd_starting,
@@ -110,11 +111,8 @@
struct obd_llog_group *olg;
int rc, idx;
- LASSERT(ctxt != NULL);
- LASSERT(ctxt != LP_POISON);
-
olg = ctxt->loc_olg;
- LASSERT(olg != NULL);
+ LASSERT(olg);
LASSERT(olg != LP_POISON);
idx = ctxt->loc_idx;
@@ -151,7 +149,7 @@
if (index < 0 || index >= LLOG_MAX_CTXTS)
return -EINVAL;
- LASSERT(olg != NULL);
+ LASSERT(olg);
ctxt = llog_new_ctxt(obd);
if (!ctxt)
diff --git a/drivers/staging/lustre/lustre/obdclass/llog_swab.c b/drivers/staging/lustre/lustre/obdclass/llog_swab.c
index 7b8379a..967ba2e 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog_swab.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog_swab.c
@@ -386,7 +386,8 @@
*
* Overwrite fields from the end first, so they are not
* clobbered, and use memmove() instead of memcpy() because
- * the source and target buffers overlap. bug 16771 */
+ * the source and target buffers overlap. bug 16771
+ */
createtime = cm32->cm_createtime;
canceltime = cm32->cm_canceltime;
memmove(marker->cm_comment, cm32->cm_comment, MTI_NAMELEN32);
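
The retained comment above is worth a standalone illustration: when a record
is converted in place inside one buffer, the old and new field layouts
overlap, so fields must be moved last-field-first and with memmove() rather
than memcpy(). A minimal model with hypothetical layouts:

        #include <string.h>

        struct rec_v1 { int a; char name[8]; };          /* old layout */
        struct rec_v2 { int a; int pad; char name[8]; }; /* new layout */

        /* buf holds a rec_v1 but is sized for a rec_v2. */
        static void upgrade_in_place(void *buf)
        {
                struct rec_v1 *src = buf;
                struct rec_v2 *dst = buf;

                /* Last field first: dst->name overlaps src->name. */
                memmove(dst->name, src->name, sizeof(src->name));
                dst->pad = 0;   /* dst->a aliases src->a; nothing to do */
        }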
diff --git a/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c b/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
index 6acc4a1..13aca5b 100644
--- a/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
+++ b/drivers/staging/lustre/lustre/obdclass/lprocfs_counters.c
@@ -48,14 +48,15 @@
int smp_id;
unsigned long flags = 0;
- if (stats == NULL)
+ if (!stats)
return;
LASSERTF(0 <= idx && idx < stats->ls_num,
"idx %d, ls_num %hu\n", idx, stats->ls_num);
/* With per-client stats, statistics are allocated only for
- * single CPU area, so the smp_id should be 0 always. */
+ * single CPU area, so the smp_id should be 0 always.
+ */
smp_id = lprocfs_stats_lock(stats, LPROCFS_GET_SMP_ID, &flags);
if (smp_id < 0)
return;
@@ -96,14 +97,15 @@
int smp_id;
unsigned long flags = 0;
- if (stats == NULL)
+ if (!stats)
return;
LASSERTF(0 <= idx && idx < stats->ls_num,
"idx %d, ls_num %hu\n", idx, stats->ls_num);
/* With per-client stats, statistics are allocated only for
- * single CPU area, so the smp_id should be 0 always. */
+ * single CPU area, so the smp_id should be 0 always.
+ */
smp_id = lprocfs_stats_lock(stats, LPROCFS_GET_SMP_ID, &flags);
if (smp_id < 0)
return;
diff --git a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
index eda44d8..28bb4e5 100644
--- a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
+++ b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
@@ -109,7 +109,7 @@
__u64 mask = 1;
int i, ret = 0;
- for (i = 0; obd_connect_names[i] != NULL; i++, mask <<= 1) {
+ for (i = 0; obd_connect_names[i]; i++, mask <<= 1) {
if (flags & mask)
ret += snprintf(page + ret, count - ret, "%s%s",
ret ? sep : "", obd_connect_names[i]);
@@ -149,10 +149,10 @@
}
/*
* Need to think these cases :
- * 1. #echo x.00 > /proc/xxx output result : x
- * 2. #echo x.0x > /proc/xxx output result : x.0x
- * 3. #echo x.x0 > /proc/xxx output result : x.x
- * 4. #echo x.xx > /proc/xxx output result : x.xx
+ * 1. #echo x.00 > /sys/xxx output result : x
+ * 2. #echo x.0x > /sys/xxx output result : x.0x
+ * 3. #echo x.x0 > /sys/xxx output result : x.x
+ * 4. #echo x.xx > /sys/xxx output result : x.xx
* Only reserved 2 bits fraction.
*/
for (i = 0; i < (5 - prtn); i++)
@@ -199,7 +199,7 @@
if (pbuf == end)
return -EINVAL;
- if (end != NULL && *end == '.') {
+ if (end && *end == '.') {
int temp_val, pow = 1;
int i;
@@ -247,7 +247,7 @@
struct dentry *entry;
umode_t mode = 0;
- if (root == NULL || name == NULL || fops == NULL)
+ if (!root || !name || !fops)
return ERR_PTR(-EINVAL);
if (fops->read)
@@ -272,7 +272,7 @@
if (IS_ERR_OR_NULL(parent) || IS_ERR_OR_NULL(list))
return -EINVAL;
- while (list->name != NULL) {
+ while (list->name) {
struct dentry *entry;
umode_t mode = 0;
@@ -491,7 +491,7 @@
char *imp_state_name = NULL;
int rc;
- LASSERT(obd != NULL);
+ LASSERT(obd);
rc = lprocfs_climp_check(obd);
if (rc)
return rc;
@@ -514,7 +514,7 @@
struct ptlrpc_connection *conn;
int rc;
- LASSERT(obd != NULL);
+ LASSERT(obd);
rc = lprocfs_climp_check(obd);
if (rc)
@@ -543,7 +543,7 @@
memset(cnt, 0, sizeof(*cnt));
- if (stats == NULL) {
+ if (!stats) {
/* set count to 1 to avoid divide-by-zero errs in callers */
cnt->lc_count = 1;
return;
@@ -554,7 +554,7 @@
num_entry = lprocfs_stats_lock(stats, LPROCFS_GET_NUM_CPU, &flags);
for (i = 0; i < num_entry; i++) {
- if (stats->ls_percpu[i] == NULL)
+ if (!stats->ls_percpu[i])
continue;
percpu_cntr = lprocfs_stats_counter_get(stats, i, idx);
@@ -577,7 +577,7 @@
#define flag2str(flag, first) \
do { \
if (imp->imp_##flag) \
- seq_printf(m, "%s" #flag, first ? "" : ", "); \
+ seq_printf(m, "%s" #flag, first ? "" : ", "); \
} while (0)
static int obd_import_flags2str(struct obd_import *imp, struct seq_file *m)
{
@@ -604,7 +604,7 @@
int i;
bool first = true;
- for (i = 0; obd_connect_names[i] != NULL; i++, mask <<= 1) {
+ for (i = 0; obd_connect_names[i]; i++, mask <<= 1) {
if (flags & mask) {
seq_printf(m, "%s%s",
first ? sep : "", obd_connect_names[i]);
@@ -629,7 +629,7 @@
int rw = 0;
int rc;
- LASSERT(obd != NULL);
+ LASSERT(obd);
rc = lprocfs_climp_check(obd);
if (rc)
return rc;
@@ -665,7 +665,7 @@
seq_printf(m, "%s%s", j ? ", " : "", nidstr);
j++;
}
- if (imp->imp_connection != NULL)
+ if (imp->imp_connection)
libcfs_nid2str_r(imp->imp_connection->c_peer.nid,
nidstr, sizeof(nidstr));
else
@@ -682,7 +682,7 @@
atomic_read(&imp->imp_inval_count));
spin_unlock(&imp->imp_lock);
- if (obd->obd_svc_stats == NULL)
+ if (!obd->obd_svc_stats)
goto out_climp;
header = &obd->obd_svc_stats->ls_cnt_header[PTLRPC_REQWAIT_CNTR];
@@ -779,7 +779,7 @@
struct obd_import *imp;
int j, k, rc;
- LASSERT(obd != NULL);
+ LASSERT(obd);
rc = lprocfs_climp_check(obd);
if (rc)
return rc;
@@ -825,7 +825,7 @@
struct dhms ts;
int i, rc;
- LASSERT(obd != NULL);
+ LASSERT(obd);
rc = lprocfs_climp_check(obd);
if (rc)
return rc;
@@ -967,12 +967,12 @@
unsigned long flags = 0;
int i;
- LASSERT(stats->ls_percpu[cpuid] == NULL);
+ LASSERT(!stats->ls_percpu[cpuid]);
LASSERT((stats->ls_flags & LPROCFS_STATS_FLAG_NOPERCPU) == 0);
percpusize = lprocfs_stats_counter_size(stats);
LIBCFS_ALLOC_ATOMIC(stats->ls_percpu[cpuid], percpusize);
- if (stats->ls_percpu[cpuid] != NULL) {
+ if (stats->ls_percpu[cpuid]) {
rc = 0;
if (unlikely(stats->ls_biggest_alloc_num <= cpuid)) {
if (stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE)
@@ -1017,7 +1017,7 @@
/* alloc percpu pointers for all possible cpu slots */
LIBCFS_ALLOC(stats, offsetof(typeof(*stats), ls_percpu[num_entry]));
- if (stats == NULL)
+ if (!stats)
return NULL;
stats->ls_num = num;
@@ -1027,14 +1027,14 @@
/* alloc num of counter headers */
LIBCFS_ALLOC(stats->ls_cnt_header,
stats->ls_num * sizeof(struct lprocfs_counter_header));
- if (stats->ls_cnt_header == NULL)
+ if (!stats->ls_cnt_header)
goto fail;
if ((flags & LPROCFS_STATS_FLAG_NOPERCPU) != 0) {
/* contains only one set counters */
percpusize = lprocfs_stats_counter_size(stats);
LIBCFS_ALLOC_ATOMIC(stats->ls_percpu[0], percpusize);
- if (stats->ls_percpu[0] == NULL)
+ if (!stats->ls_percpu[0])
goto fail;
stats->ls_biggest_alloc_num = 1;
} else if ((flags & LPROCFS_STATS_FLAG_IRQ_SAFE) != 0) {
@@ -1059,7 +1059,7 @@
unsigned int percpusize;
unsigned int i;
- if (stats == NULL || stats->ls_num == 0)
+ if (!stats || stats->ls_num == 0)
return;
*statsh = NULL;
@@ -1070,9 +1070,9 @@
percpusize = lprocfs_stats_counter_size(stats);
for (i = 0; i < num_entry; i++)
- if (stats->ls_percpu[i] != NULL)
+ if (stats->ls_percpu[i])
LIBCFS_FREE(stats->ls_percpu[i], percpusize);
- if (stats->ls_cnt_header != NULL)
+ if (stats->ls_cnt_header)
LIBCFS_FREE(stats->ls_cnt_header, stats->ls_num *
sizeof(struct lprocfs_counter_header));
LIBCFS_FREE(stats, offsetof(typeof(*stats), ls_percpu[num_entry]));
@@ -1090,7 +1090,7 @@
num_entry = lprocfs_stats_lock(stats, LPROCFS_GET_NUM_CPU, &flags);
for (i = 0; i < num_entry; i++) {
- if (stats->ls_percpu[i] == NULL)
+ if (!stats->ls_percpu[i])
continue;
for (j = 0; j < stats->ls_num; j++) {
percpu_cntr = lprocfs_stats_counter_get(stats, i, j);
@@ -1230,10 +1230,8 @@
unsigned int i;
unsigned int num_cpu;
- LASSERT(stats != NULL);
-
header = &stats->ls_cnt_header[index];
- LASSERTF(header != NULL, "Failed to allocate stats header:[%d]%s/%s\n",
+ LASSERTF(header, "Failed to allocate stats header:[%d]%s/%s\n",
index, name, units);
header->lc_config = conf;
@@ -1242,7 +1240,7 @@
num_cpu = lprocfs_stats_lock(stats, LPROCFS_GET_NUM_CPU, &flags);
for (i = 0; i < num_cpu; ++i) {
- if (stats->ls_percpu[i] == NULL)
+ if (!stats->ls_percpu[i])
continue;
percpu_cntr = lprocfs_stats_counter_get(stats, i, index);
percpu_cntr->lc_count = 0;
@@ -1270,7 +1268,7 @@
{
__s64 ret = 0;
- if (lc == NULL || header == NULL)
+ if (!lc || !header)
return 0;
switch (field) {
@@ -1412,7 +1410,7 @@
/* there is no strnstr() in rhel5 and ubuntu kernels */
val = lprocfs_strnstr(buffer, name, buflen);
- if (val == NULL)
+ if (!val)
return (char *)buffer;
val += strlen(name); /* skip prefix */
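
obd_connect_flags2str() and its seq_file twin touched above walk a
NULL-terminated name table in lockstep with a shifting bit mask, one name
per flag bit. A self-contained model of the loop the hunks tighten:

        #include <stdio.h>

        static const char *flag_names[] = {
                "read", "write", "exec", NULL   /* NULL-terminated */
        };

        static void flags2str(unsigned long flags, const char *sep)
        {
                unsigned long mask = 1;
                int i, n = 0;

                /* Table order defines bit order; the NULL entry ends it. */
                for (i = 0; flag_names[i]; i++, mask <<= 1)
                        if (flags & mask)
                                printf("%s%s", n++ ? sep : "",
                                       flag_names[i]);
        }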
diff --git a/drivers/staging/lustre/lustre/obdclass/lu_object.c b/drivers/staging/lustre/lustre/obdclass/lu_object.c
index ce248f4..0fa4bac 100644
--- a/drivers/staging/lustre/lustre/obdclass/lu_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/lu_object.c
@@ -86,13 +86,12 @@
*/
fid = lu_object_fid(o);
if (fid_is_zero(fid)) {
- LASSERT(top->loh_hash.next == NULL
- && top->loh_hash.pprev == NULL);
+ LASSERT(!top->loh_hash.next && !top->loh_hash.pprev);
LASSERT(list_empty(&top->loh_lru));
if (!atomic_dec_and_test(&top->loh_ref))
return;
list_for_each_entry_reverse(o, &top->loh_layers, lo_linkage) {
- if (o->lo_ops->loo_object_release != NULL)
+ if (o->lo_ops->loo_object_release)
o->lo_ops->loo_object_release(env, o);
}
lu_object_free(env, orig);
@@ -119,7 +118,7 @@
* layers, and notify them that object is no longer busy.
*/
list_for_each_entry_reverse(o, &top->loh_layers, lo_linkage) {
- if (o->lo_ops->loo_object_release != NULL)
+ if (o->lo_ops->loo_object_release)
o->lo_ops->loo_object_release(env, o);
}
@@ -210,7 +209,7 @@
* lu_object_header.
*/
top = dev->ld_ops->ldo_object_alloc(env, NULL, dev);
- if (top == NULL)
+ if (!top)
return ERR_PTR(-ENOMEM);
if (IS_ERR(top))
return top;
@@ -245,7 +244,7 @@
} while (!clean);
list_for_each_entry_reverse(scan, layers, lo_linkage) {
- if (scan->lo_ops->loo_object_start != NULL) {
+ if (scan->lo_ops->loo_object_start) {
result = scan->lo_ops->loo_object_start(env, scan);
if (result != 0) {
lu_object_free(env, top);
@@ -276,7 +275,7 @@
* First call ->loo_object_delete() method to release all resources.
*/
list_for_each_entry_reverse(scan, layers, lo_linkage) {
- if (scan->lo_ops->loo_object_delete != NULL)
+ if (scan->lo_ops->loo_object_delete)
scan->lo_ops->loo_object_delete(env, scan);
}
@@ -296,7 +295,6 @@
*/
o = container_of0(splice.prev, struct lu_object, lo_linkage);
list_del_init(&o->lo_linkage);
- LASSERT(o->lo_ops->loo_object_free != NULL);
o->lo_ops->loo_object_free(env, o);
}
@@ -451,7 +449,6 @@
va_start(args, format);
key = lu_context_key_get(&env->le_ctx, &lu_global_key);
- LASSERT(key != NULL);
used = strlen(key->lck_area);
complete = format[strlen(format) - 1] == '\n';
@@ -508,7 +505,7 @@
(*printer)(env, cookie, "%*.*s%s@%p", depth, depth, ruler,
o->lo_dev->ld_type->ldt_name, o);
- if (o->lo_ops->loo_object_print != NULL)
+ if (o->lo_ops->loo_object_print)
(*o->lo_ops->loo_object_print)(env, cookie, printer, o);
(*printer)(env, cookie, "\n");
@@ -535,9 +532,10 @@
*version = ver;
bkt = cfs_hash_bd_extra_get(s->ls_obj_hash, bd);
/* cfs_hash_bd_peek_locked is a somehow "internal" function
- * of cfs_hash, it doesn't add refcount on object. */
+ * of cfs_hash, it doesn't add refcount on object.
+ */
hnode = cfs_hash_bd_peek_locked(s->ls_obj_hash, bd, (void *)f);
- if (hnode == NULL) {
+ if (!hnode) {
lprocfs_counter_incr(s->ls_stats, LU_SS_CACHE_MISS);
return ERR_PTR(-ENOENT);
}
@@ -636,7 +634,7 @@
* If dying object is found during index search, add @waiter to the
* site wait-queue and return ERR_PTR(-EAGAIN).
*/
- if (conf != NULL && conf->loc_flags & LOC_F_NEW)
+ if (conf && conf->loc_flags & LOC_F_NEW)
return lu_object_new(env, dev, f, conf);
s = dev->ld_site;
@@ -715,7 +713,7 @@
top = lu_object_find(env, dev, f, conf);
if (!IS_ERR(top)) {
obj = lu_object_locate(top->lo_header, dev->ld_type);
- if (obj == NULL)
+ if (!obj)
lu_object_put(env, top);
} else
obj = top;
@@ -966,11 +964,11 @@
CFS_HASH_NO_ITEMREF |
CFS_HASH_DEPTH |
CFS_HASH_ASSERT_EMPTY);
- if (s->ls_obj_hash != NULL)
+ if (s->ls_obj_hash)
break;
}
- if (s->ls_obj_hash == NULL) {
+ if (!s->ls_obj_hash) {
CERROR("failed to create lu_site hash with bits: %d\n", bits);
return -ENOMEM;
}
@@ -982,7 +980,7 @@
}
s->ls_stats = lprocfs_alloc_stats(LU_SS_LAST_STAT, 0);
- if (s->ls_stats == NULL) {
+ if (!s->ls_stats) {
cfs_hash_putref(s->ls_obj_hash);
s->ls_obj_hash = NULL;
return -ENOMEM;
@@ -1031,19 +1029,19 @@
list_del_init(&s->ls_linkage);
mutex_unlock(&lu_sites_guard);
- if (s->ls_obj_hash != NULL) {
+ if (s->ls_obj_hash) {
cfs_hash_putref(s->ls_obj_hash);
s->ls_obj_hash = NULL;
}
- if (s->ls_top_dev != NULL) {
+ if (s->ls_top_dev) {
s->ls_top_dev->ld_site = NULL;
lu_ref_del(&s->ls_top_dev->ld_reference, "site-top", s);
lu_device_put(s->ls_top_dev);
s->ls_top_dev = NULL;
}
- if (s->ls_stats != NULL)
+ if (s->ls_stats)
lprocfs_free_stats(&s->ls_stats);
}
EXPORT_SYMBOL(lu_site_fini);
@@ -1088,7 +1086,7 @@
*/
int lu_device_init(struct lu_device *d, struct lu_device_type *t)
{
- if (t->ldt_device_nr++ == 0 && t->ldt_ops->ldto_start != NULL)
+ if (t->ldt_device_nr++ == 0 && t->ldt_ops->ldto_start)
t->ldt_ops->ldto_start(t);
memset(d, 0, sizeof(*d));
atomic_set(&d->ld_ref, 0);
@@ -1107,7 +1105,7 @@
struct lu_device_type *t;
t = d->ld_type;
- if (d->ld_obd != NULL) {
+ if (d->ld_obd) {
d->ld_obd->obd_lu_dev = NULL;
d->ld_obd = NULL;
}
@@ -1116,7 +1114,7 @@
LASSERTF(atomic_read(&d->ld_ref) == 0,
"Refcount is %u\n", atomic_read(&d->ld_ref));
LASSERT(t->ldt_device_nr > 0);
- if (--t->ldt_device_nr == 0 && t->ldt_ops->ldto_stop != NULL)
+ if (--t->ldt_device_nr == 0 && t->ldt_ops->ldto_stop)
t->ldt_ops->ldto_stop(t);
}
EXPORT_SYMBOL(lu_device_fini);
@@ -1148,7 +1146,7 @@
LASSERT(list_empty(&o->lo_linkage));
- if (dev != NULL) {
+ if (dev) {
lu_ref_del_at(&dev->ld_reference, &o->lo_dev_ref,
"lu_object", o);
lu_device_put(dev);
@@ -1239,7 +1237,7 @@
struct lu_device *next;
lu_site_purge(env, site, ~0);
- for (scan = top; scan != NULL; scan = next) {
+ for (scan = top; scan; scan = next) {
next = scan->ld_type->ldt_ops->ldto_device_fini(env, scan);
lu_ref_del(&scan->ld_reference, "lu-stack", &lu_site_init);
lu_device_put(scan);
@@ -1248,13 +1246,13 @@
/* purge again. */
lu_site_purge(env, site, ~0);
- for (scan = top; scan != NULL; scan = next) {
+ for (scan = top; scan; scan = next) {
const struct lu_device_type *ldt = scan->ld_type;
struct obd_type *type;
next = ldt->ldt_ops->ldto_device_free(env, scan);
type = ldt->ldt_obd_type;
- if (type != NULL) {
+ if (type) {
type->typ_refcnt--;
class_put_type(type);
}
@@ -1289,14 +1287,14 @@
int result;
int i;
- LASSERT(key->lct_init != NULL);
- LASSERT(key->lct_fini != NULL);
+ LASSERT(key->lct_init);
+ LASSERT(key->lct_fini);
LASSERT(key->lct_tags != 0);
result = -ENFILE;
spin_lock(&lu_keys_guard);
for (i = 0; i < ARRAY_SIZE(lu_keys); ++i) {
- if (lu_keys[i] == NULL) {
+ if (!lu_keys[i]) {
key->lct_index = i;
atomic_set(&key->lct_used, 1);
lu_keys[i] = key;
@@ -1313,12 +1311,10 @@
static void key_fini(struct lu_context *ctx, int index)
{
- if (ctx->lc_value != NULL && ctx->lc_value[index] != NULL) {
+ if (ctx->lc_value && ctx->lc_value[index]) {
struct lu_context_key *key;
key = lu_keys[index];
- LASSERT(key != NULL);
- LASSERT(key->lct_fini != NULL);
LASSERT(atomic_read(&key->lct_used) > 1);
key->lct_fini(ctx, key, ctx->lc_value[index]);
@@ -1376,7 +1372,7 @@
if (result)
break;
key = va_arg(args, struct lu_context_key *);
- } while (key != NULL);
+ } while (key);
va_end(args);
if (result != 0) {
@@ -1404,7 +1400,7 @@
do {
lu_context_key_degister(k);
k = va_arg(args, struct lu_context_key*);
- } while (k != NULL);
+ } while (k);
va_end(args);
}
EXPORT_SYMBOL(lu_context_key_degister_many);
@@ -1420,7 +1416,7 @@
do {
lu_context_key_revive(k);
k = va_arg(args, struct lu_context_key*);
- } while (k != NULL);
+ } while (k);
va_end(args);
}
EXPORT_SYMBOL(lu_context_key_revive_many);
@@ -1436,7 +1432,7 @@
do {
lu_context_key_quiesce(k);
k = va_arg(args, struct lu_context_key*);
- } while (k != NULL);
+ } while (k);
va_end(args);
}
EXPORT_SYMBOL(lu_context_key_quiesce_many);
@@ -1497,7 +1493,7 @@
{
int i;
- if (ctx->lc_value == NULL)
+ if (!ctx->lc_value)
return;
for (i = 0; i < ARRAY_SIZE(lu_keys); ++i)
@@ -1511,12 +1507,12 @@
{
int i;
- LINVRNT(ctx->lc_value != NULL);
+ LINVRNT(ctx->lc_value);
for (i = 0; i < ARRAY_SIZE(lu_keys); ++i) {
struct lu_context_key *key;
key = lu_keys[i];
- if (ctx->lc_value[i] == NULL && key != NULL &&
+ if (!ctx->lc_value[i] && key &&
(key->lct_tags & ctx->lc_tags) &&
/*
* Don't create values for a LCT_QUIESCENT key, as this
@@ -1525,7 +1521,7 @@
!(key->lct_tags & LCT_QUIESCENT)) {
void *value;
- LINVRNT(key->lct_init != NULL);
+ LINVRNT(key->lct_init);
LINVRNT(key->lct_index == i);
value = key->lct_init(ctx, key);
@@ -1542,7 +1538,7 @@
* value.
*/
ctx->lc_value[i] = value;
- if (key->lct_exit != NULL)
+ if (key->lct_exit)
ctx->lc_tags |= LCT_HAS_EXIT;
}
ctx->lc_version = key_set_version;
@@ -1554,7 +1550,7 @@
{
ctx->lc_value = kcalloc(ARRAY_SIZE(lu_keys), sizeof(ctx->lc_value[0]),
GFP_NOFS);
- if (likely(ctx->lc_value != NULL))
+ if (likely(ctx->lc_value))
return keys_fill(ctx);
return -ENOMEM;
@@ -1626,14 +1622,13 @@
LINVRNT(ctx->lc_state == LCS_ENTERED);
ctx->lc_state = LCS_LEFT;
- if (ctx->lc_tags & LCT_HAS_EXIT && ctx->lc_value != NULL) {
+ if (ctx->lc_tags & LCT_HAS_EXIT && ctx->lc_value) {
for (i = 0; i < ARRAY_SIZE(lu_keys); ++i) {
- if (ctx->lc_value[i] != NULL) {
+ if (ctx->lc_value[i]) {
struct lu_context_key *key;
key = lu_keys[i];
- LASSERT(key != NULL);
- if (key->lct_exit != NULL)
+ if (key->lct_exit)
key->lct_exit(ctx,
key, ctx->lc_value[i]);
}
@@ -1688,7 +1683,7 @@
int result;
result = lu_context_refill(&env->le_ctx);
- if (result == 0 && env->le_ses != NULL)
+ if (result == 0 && env->le_ses)
result = lu_context_refill(env->le_ses);
return result;
}
@@ -1922,11 +1917,11 @@
int result;
struct lu_kmem_descr *iter = caches;
- for (result = 0; iter->ckd_cache != NULL; ++iter) {
+ for (result = 0; iter->ckd_cache; ++iter) {
*iter->ckd_cache = kmem_cache_create(iter->ckd_name,
iter->ckd_size,
0, 0, NULL);
- if (*iter->ckd_cache == NULL) {
+ if (!*iter->ckd_cache) {
result = -ENOMEM;
/* free all previously allocated caches */
lu_kmem_fini(caches);
@@ -1943,7 +1938,7 @@
*/
void lu_kmem_fini(struct lu_kmem_descr *caches)
{
- for (; caches->ckd_cache != NULL; ++caches) {
+ for (; caches->ckd_cache; ++caches) {
kmem_cache_destroy(*caches->ckd_cache);
*caches->ckd_cache = NULL;
}
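The lu_object.c hunks above are near-pure checkpatch conversions: pointer tests written as booleans instead of explicit NULL comparisons, and multi-line comments terminated with */ on its own line. For anyone applying the same cleanup elsewhere, a minimal userspace sketch of both idioms (names are made up, not from the patch):

#include <stdio.h>
#include <stdlib.h>

/* Staging style: a multi-line comment ends with the terminator
 * on its own line rather than trailing the last sentence.
 */
static int demo_lookup(const char *key)
{
	char *val = getenv(key);	/* may be NULL */

	if (!val)		/* preferred over: if (val == NULL) */
		return -1;

	printf("%s=%s\n", key, val);
	return 0;
}

int main(void)
{
	return demo_lookup("HOME");
}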
diff --git a/drivers/staging/lustre/lustre/obdclass/lustre_handles.c b/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
index fb9147c..403ceea 100644
--- a/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
+++ b/drivers/staging/lustre/lustre/obdclass/lustre_handles.c
@@ -65,7 +65,7 @@
{
struct handle_bucket *bucket;
- LASSERT(h != NULL);
+ LASSERT(h);
LASSERT(list_empty(&h->h_link));
/*
@@ -140,10 +140,11 @@
struct portals_handle *h;
void *retval = NULL;
- LASSERT(handle_hash != NULL);
+ LASSERT(handle_hash);
/* Be careful when you want to change this code. See the
- * rcu_read_lock() definition on top this file. - jxiong */
+ * rcu_read_lock() definition on top of this file. - jxiong
+ */
bucket = handle_hash + (cookie & HANDLE_HASH_MASK);
rcu_read_lock();
@@ -170,7 +171,7 @@
struct portals_handle *h = RCU2HANDLE(rcu);
void *ptr = (void *)(unsigned long)h->h_cookie;
- if (h->h_ops->hop_free != NULL)
+ if (h->h_ops->hop_free)
h->h_ops->hop_free(ptr, h->h_size);
else
kfree(ptr);
@@ -183,11 +184,11 @@
struct timespec64 ts;
int seed[2];
- LASSERT(handle_hash == NULL);
+ LASSERT(!handle_hash);
handle_hash = libcfs_kvzalloc(sizeof(*bucket) * HANDLE_HASH_SIZE,
GFP_NOFS);
- if (handle_hash == NULL)
+ if (!handle_hash)
return -ENOMEM;
spin_lock_init(&handle_base_lock);
@@ -234,7 +235,7 @@
{
int count;
- LASSERT(handle_hash != NULL);
+ LASSERT(handle_hash);
count = cleanup_all_handles();
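The lustre_handles.c assertions get the same treatment: LASSERT(p != NULL) becomes LASSERT(p), and LASSERT(p == NULL) becomes LASSERT(!p). The equivalent shape with the standard assert() macro, as a compile-checked sketch rather than the real LASSERT machinery:

#include <assert.h>
#include <stdlib.h>

static void *handle_hash;	/* stand-in for the module-wide table */

static int handle_init(void)
{
	assert(!handle_hash);	/* was: assert(handle_hash == NULL) */

	handle_hash = calloc(64, sizeof(void *));
	if (!handle_hash)
		return -1;	/* -ENOMEM in the kernel code */
	return 0;
}

static void handle_fini(void)
{
	assert(handle_hash);	/* was: assert(handle_hash != NULL) */
	free(handle_hash);
	handle_hash = NULL;
}

int main(void)
{
	if (handle_init() == 0)
		handle_fini();
	return 0;
}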
diff --git a/drivers/staging/lustre/lustre/obdclass/lustre_peer.c b/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
index d6184f8..b10ea31 100644
--- a/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
+++ b/drivers/staging/lustre/lustre/obdclass/lustre_peer.c
@@ -93,7 +93,8 @@
EXPORT_SYMBOL(lustre_uuid_to_peer);
/* Add a nid to a niduuid. Multiple nids can be added to a single uuid;
- LNET will choose the best one. */
+ * LNET will choose the best one.
+ */
int class_add_uuid(const char *uuid, __u64 nid)
{
struct uuid_nid_data *data, *entry;
@@ -151,7 +152,7 @@
struct uuid_nid_data *data;
spin_lock(&g_uuid_lock);
- if (uuid != NULL) {
+ if (uuid) {
struct obd_uuid tmp;
obd_str2uuid(&tmp, uuid);
@@ -165,7 +166,7 @@
list_splice_init(&g_uuid_list, &deathrow);
spin_unlock(&g_uuid_lock);
- if (uuid != NULL && list_empty(&deathrow)) {
+ if (uuid && list_empty(&deathrow)) {
CDEBUG(D_INFO, "Try to delete a non-existent uuid %s\n", uuid);
return -EINVAL;
}
diff --git a/drivers/staging/lustre/lustre/obdclass/obd_config.c b/drivers/staging/lustre/lustre/obdclass/obd_config.c
index 49cdc64..2134b60 100644
--- a/drivers/staging/lustre/lustre/obdclass/obd_config.c
+++ b/drivers/staging/lustre/lustre/obdclass/obd_config.c
@@ -71,8 +71,9 @@
EXPORT_SYMBOL(class_find_param);
/* returns 0 if this is the first key in the buffer, else 1.
- valp points to first char after key. */
-static int class_match_param(char *buf, char *key, char **valp)
+ * valp points to first char after key.
+ */
+static int class_match_param(char *buf, const char *key, char **valp)
{
if (!buf)
return 1;
@@ -114,9 +115,10 @@
};
/* 0 is good nid,
- 1 not found
- < 0 error
- endh is set to next separator */
+ * 1 not found
+ * < 0 error
+ * endh is set to next separator
+ */
static int class_parse_value(char *buf, int opc, void *value, char **endh,
int quiet)
{
@@ -210,7 +212,7 @@
name, typename, rc);
goto out;
}
- LASSERTF(obd != NULL, "Cannot get obd device %s of type %s\n",
+ LASSERTF(obd, "Cannot get obd device %s of type %s\n",
name, typename);
LASSERTF(obd->obd_magic == OBD_DEVICE_MAGIC,
"obd %p obd_magic %08X != %08X\n",
@@ -230,7 +232,8 @@
mutex_init(&obd->obd_dev_mutex);
spin_lock_init(&obd->obd_osfs_lock);
/* obd->obd_osfs_age must be set to a value in the distant
- * past to guarantee a fresh statfs is fetched on mount. */
+ * past to guarantee a fresh statfs is fetched on mount.
+ */
obd->obd_osfs_age = cfs_time_shift_64(-1000);
/* XXX belongs in setup not attach */
@@ -272,9 +275,9 @@
obd->obd_minor, typename, atomic_read(&obd->obd_refcount));
return 0;
out:
- if (obd != NULL) {
+ if (obd)
class_release_dev(obd);
- }
+
return rc;
}
@@ -286,7 +289,7 @@
int err = 0;
struct obd_export *exp;
- LASSERT(obd != NULL);
+ LASSERT(obd);
LASSERTF(obd == class_num2obd(obd->obd_minor),
"obd %p != obd_devs[%d] %p\n",
obd, obd->obd_minor, class_num2obd(obd->obd_minor));
@@ -315,7 +318,8 @@
return -EEXIST;
}
/* just leave this on forever. I can't use obd_set_up here because
- other fns check that status, and we're not actually set up yet. */
+ * other fns check that status, and we're not actually set up yet.
+ */
obd->obd_starting = 1;
obd->obd_uuid_hash = NULL;
spin_unlock(&obd->obd_dev_lock);
@@ -503,7 +507,8 @@
if ((refs == 1) && obd->obd_stopping) {
/* All exports have been destroyed; there should
- be no more in-progress ops by this point.*/
+ * be no more in-progress ops by this point.
+ */
spin_lock(&obd->obd_self_export->exp_lock);
obd->obd_self_export->exp_flags |= exp_flags_from_obd(obd);
@@ -723,7 +728,8 @@
}
/* We can't call ll_process_config or lquota_process_config directly because
- * it lives in a module that must be loaded after this one. */
+ * they live in modules that must be loaded after this one.
+ */
static int (*client_process_config)(struct lustre_cfg *lcfg);
static int (*quota_process_config)(struct lustre_cfg *lcfg);
@@ -812,7 +818,8 @@
lustre_cfg_string(lcfg, 2),
lustre_cfg_string(lcfg, 3));
/* set these mount options somewhere, so ll_fill_super
- * can find them. */
+ * can find them.
+ */
err = class_add_profile(LUSTRE_CFG_BUFLEN(lcfg, 1),
lustre_cfg_string(lcfg, 1),
LUSTRE_CFG_BUFLEN(lcfg, 2),
@@ -988,8 +995,9 @@
fakefile.private_data = &fake_seqfile;
fake_seqfile.private = data;
/* e.g. tunefs.lustre --param mdt.group_upcall=foo /r/tmp/lustre-mdt
- or lctl conf_param lustre-MDT0000.mdt.group_upcall=bar
- or lctl conf_param lustre-OST0000.osc.max_dirty_mb=36 */
+ * or lctl conf_param lustre-MDT0000.mdt.group_upcall=bar
+ * or lctl conf_param lustre-OST0000.osc.max_dirty_mb=36
+ */
for (i = 1; i < lcfg->lcfg_bufcount; i++) {
key = lustre_cfg_buf(lcfg, i);
/* Strip off prefix */
@@ -1008,7 +1016,7 @@
/* Search proc entries */
while (lvars[j].name) {
var = &lvars[j];
- if (class_match_param(key, (char *)var->name, NULL) == 0
+ if (!class_match_param(key, var->name, NULL)
&& keylen == strlen(var->name)) {
matched++;
rc = -EROFS;
@@ -1027,9 +1035,10 @@
}
if (!matched) {
/* If the prefix doesn't match, return error so we
- can pass it down the stack */
+ * can pass it down the stack
+ */
if (strnchr(key, keylen, '.'))
- return -ENOSYS;
+ return -ENOSYS;
CERROR("%s: unknown param %s\n",
(char *)lustre_cfg_string(lcfg, 0), key);
/* rc = -EINVAL; continue parsing other params */
@@ -1116,7 +1125,8 @@
}
}
/* A config command without a start marker before it is
- illegal (post 146) */
+ * illegal (post 146)
+ */
if (!(clli->cfg_flags & CFG_F_COMPAT146) &&
!(clli->cfg_flags & CFG_F_MARKER) &&
(lcfg->lcfg_command != LCFG_MARKER)) {
@@ -1182,8 +1192,9 @@
}
/* we override the llog's uuid for clients, to insure they
- are unique */
- if (clli && clli->cfg_instance != NULL &&
+ * are unique
+ */
+ if (clli && clli->cfg_instance &&
lcfg->lcfg_command == LCFG_ATTACH) {
lustre_cfg_bufs_set_string(&bufs, 2,
clli->cfg_uuid.uuid);
@@ -1211,7 +1222,8 @@
lcfg_new->lcfg_flags = lcfg->lcfg_flags;
/* XXX Hack to try to remain binary compatible with
- * pre-newconfig logs */
+ * pre-newconfig logs
+ */
if (lcfg->lcfg_nal != 0 && /* pre-newconfig log? */
(lcfg->lcfg_nid >> 32) == 0) {
__u32 addr = (__u32)(lcfg->lcfg_nid & 0xffffffff);
@@ -1270,7 +1282,7 @@
if (cfg) {
cd.lpcd_first_idx = cfg->cfg_last_idx;
callback = cfg->cfg_callback;
- LASSERT(callback != NULL);
+ LASSERT(callback);
} else {
callback = class_config_llog_handler;
}
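Besides the comment reflow, obd_config.c carries one small type fix: class_match_param() now takes its key as const char *, so the caller can pass var->name without casting away const. A simplified, compilable sketch of the same signature change (logic reduced to a prefix match):

#include <stdio.h>
#include <string.h>

/* Returns 0 if buf starts with key; *valp then points past the key. */
static int match_param(char *buf, const char *key, char **valp)
{
	if (!buf || strncmp(buf, key, strlen(key)) != 0)
		return 1;

	if (valp)
		*valp = buf + strlen(key);
	return 0;
}

int main(void)
{
	const char *name = "max_dirty_mb=";	/* read-only table entry */
	char buf[] = "max_dirty_mb=36";
	char *val;

	/* No (char *) cast needed once the parameter is const. */
	if (!match_param(buf, name, &val))
		printf("value: %s\n", val);
	return 0;
}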
diff --git a/drivers/staging/lustre/lustre/obdclass/obd_mount.c b/drivers/staging/lustre/lustre/obdclass/obd_mount.c
index b5aa816..d138e05 100644
--- a/drivers/staging/lustre/lustre/obdclass/obd_mount.c
+++ b/drivers/staging/lustre/lustre/obdclass/obd_mount.c
@@ -283,9 +283,10 @@
recov_bk = 0;
/* Try all connections, but only once (again).
- We don't want to block another target from starting
- (using its local copy of the log), but we do want to connect
- if at all possible. */
+ * We don't want to block another target from starting
+ * (using its local copy of the log), but we do want to connect
+ * if at all possible.
+ */
recov_bk++;
CDEBUG(D_MOUNT, "%s: Set MGC reconnect %d\n", mgcname,
recov_bk);
@@ -375,7 +376,8 @@
goto out_free;
/* Keep a refcount of servers/clients who started with "mount",
- so we know when we can get rid of the mgc. */
+ * so we know when we can get rid of the mgc.
+ */
atomic_set(&obd->u.cli.cl_mgc_refcount, 1);
/* We connect to the MGS at setup, and don't disconnect until cleanup */
@@ -403,7 +405,8 @@
out:
/* Keep the mgc info in the sb. Note that many lsi's can point
- to the same mgc.*/
+ * to the same mgc.
+ */
lsi->lsi_mgc = obd;
out_free:
mutex_unlock(&mgc_start_lock);
@@ -432,7 +435,8 @@
LASSERT(atomic_read(&obd->u.cli.cl_mgc_refcount) > 0);
if (!atomic_dec_and_test(&obd->u.cli.cl_mgc_refcount)) {
/* This is not fatal, every client that stops
- will call in here. */
+ * will call in here.
+ */
CDEBUG(D_MOUNT, "mgc still has %d references.\n",
atomic_read(&obd->u.cli.cl_mgc_refcount));
rc = -EBUSY;
@@ -440,19 +444,20 @@
}
/* The MGC has no recoverable data in any case.
- * force shutdown set in umount_begin */
+ * force shutdown set in umount_begin
+ */
obd->obd_no_recov = 1;
if (obd->u.cli.cl_mgc_mgsexp) {
/* An error is not fatal, if we are unable to send the
- disconnect mgs ping evictor cleans up the export */
+ * disconnect, the mgs ping evictor cleans up the export
+ */
rc = obd_disconnect(obd->u.cli.cl_mgc_mgsexp);
if (rc)
CDEBUG(D_MOUNT, "disconnect failed %d\n", rc);
}
- /* Save the obdname for cleaning the nid uuids, which are
- obdname_XX */
+ /* Save the obdname for cleaning the nid uuids, which are obdname_XX */
len = strlen(obd->obd_name) + 6;
niduuid = kzalloc(len, GFP_NOFS);
if (niduuid) {
@@ -518,13 +523,12 @@
{
struct lustre_sb_info *lsi = s2lsi(sb);
- LASSERT(lsi != NULL);
CDEBUG(D_MOUNT, "Freeing lsi %p\n", lsi);
/* someone didn't call server_put_mount. */
LASSERT(atomic_read(&lsi->lsi_mounts) == 0);
- if (lsi->lsi_lmd != NULL) {
+ if (lsi->lsi_lmd) {
kfree(lsi->lsi_lmd->lmd_dev);
kfree(lsi->lsi_lmd->lmd_profile);
kfree(lsi->lsi_lmd->lmd_mgssec);
@@ -538,7 +542,7 @@
kfree(lsi->lsi_lmd);
}
- LASSERT(lsi->lsi_llsbi == NULL);
+ LASSERT(!lsi->lsi_llsbi);
kfree(lsi);
s2lsi_nocast(sb) = NULL;
@@ -546,13 +550,12 @@
}
/* The lsi has one reference for every server that is using the disk -
- e.g. MDT, MGS, and potentially MGC */
+ * e.g. MDT, MGS, and potentially MGC
+ */
static int lustre_put_lsi(struct super_block *sb)
{
struct lustre_sb_info *lsi = s2lsi(sb);
- LASSERT(lsi != NULL);
-
CDEBUG(D_MOUNT, "put %p %d\n", sb, atomic_read(&lsi->lsi_mounts));
if (atomic_dec_and_test(&lsi->lsi_mounts)) {
lustre_free_lsi(sb);
@@ -588,21 +591,22 @@
if (dash == svname)
return -EINVAL;
- if (fsname != NULL) {
+ if (fsname) {
strncpy(fsname, svname, dash - svname);
fsname[dash - svname] = '\0';
}
- if (endptr != NULL)
+ if (endptr)
*endptr = dash;
return 0;
}
/* Get the index from the obd name.
- rc = server type, or
- rc < 0 on error
- if endptr isn't NULL it is set to end of name */
+ * rc = server type, or
+ * rc < 0 on error
+ * if endptr isn't NULL it is set to end of name
+ */
static int server_name2index(const char *svname, __u32 *idx,
const char **endptr)
{
@@ -627,18 +631,18 @@
dash += 3;
if (strncmp(dash, "all", 3) == 0) {
- if (endptr != NULL)
+ if (endptr)
*endptr = dash + 3;
return rc | LDD_F_SV_ALL;
}
index = simple_strtoul(dash, (char **)endptr, 16);
- if (idx != NULL)
+ if (idx)
*idx = index;
/* Account for -mdc after index that is possible when specifying mdt */
- if (endptr != NULL && strncmp(LUSTRE_MDC_NAME, *endptr + 1,
- sizeof(LUSTRE_MDC_NAME)-1) == 0)
+ if (endptr && strncmp(LUSTRE_MDC_NAME, *endptr + 1,
+ sizeof(LUSTRE_MDC_NAME) - 1) == 0)
*endptr += sizeof(LUSTRE_MDC_NAME);
return rc;
@@ -661,7 +665,8 @@
return rc;
}
/* BUSY just means that there's some other obd that
- needs the mgc. Let him clean it up. */
+ * needs the mgc. Let him clean it up.
+ */
CDEBUG(D_MOUNT, "MGC still in use\n");
}
/* Drop a ref to the mounted disk */
@@ -731,8 +736,9 @@
int rc = 0, devmax;
/* The shortest an ost name can be is 8 chars: -OST0000.
- We don't actually know the fsname at this time, so in fact
- a user could specify any fsname. */
+ * We don't actually know the fsname at this time, so in fact
+ * a user could specify any fsname.
+ */
devmax = strlen(ptr) / 8 + 1;
/* temp storage until we figure out how many we have */
@@ -756,7 +762,8 @@
(uint)(s2-s1), s1, rc);
s1 = s2;
/* now we are pointing at ':' (next exclude)
- or ',' (end of excludes) */
+ * or ',' (end of excludes)
+ */
if (lmd->lmd_exclude_count >= devmax)
break;
}
@@ -788,7 +795,7 @@
lmd->lmd_mgssec = NULL;
tail = strchr(ptr, ',');
- if (tail == NULL)
+ if (!tail)
length = strlen(ptr);
else
length = tail - ptr;
@@ -807,14 +814,14 @@
char *tail;
int length;
- if ((handle == NULL) || (ptr == NULL))
+ if (!handle || !ptr)
return -EINVAL;
kfree(*handle);
*handle = NULL;
tail = strchr(ptr, ',');
- if (tail == NULL)
+ if (!tail)
length = strlen(ptr);
else
length = tail - ptr;
@@ -847,14 +854,14 @@
return -EINVAL;
}
- if (lmd->lmd_mgs != NULL)
+ if (lmd->lmd_mgs)
oldlen = strlen(lmd->lmd_mgs) + 1;
mgsnid = kzalloc(oldlen + length + 1, GFP_NOFS);
if (!mgsnid)
return -ENOMEM;
- if (lmd->lmd_mgs != NULL) {
+ if (lmd->lmd_mgs) {
/* Multiple mgsnid= are taken to mean failover locations */
memcpy(mgsnid, lmd->lmd_mgs, oldlen);
mgsnid[oldlen - 1] = ':';
@@ -909,10 +916,12 @@
s1++;
/* Client options are parsed in ll_options: eg. flock,
- user_xattr, acl */
+ * user_xattr, acl
+ */
/* Parse non-ldiskfs options here. Rather than modifying
- ldiskfs, we just zero these out here */
+ * ldiskfs, we just zero these out here
+ */
if (strncmp(s1, "abort_recov", 11) == 0) {
lmd->lmd_flags |= LMD_FLG_ABORT_RECOV;
clear++;
@@ -940,7 +949,8 @@
sizeof(PARAM_MGSNODE) - 1) == 0) {
s2 = s1 + sizeof(PARAM_MGSNODE) - 1;
/* Assume the next mount opt is the first
- invalid nid we get to. */
+ * invalid nid we get to.
+ */
rc = lmd_parse_mgs(lmd, &s2);
if (rc)
goto invalid;
@@ -981,7 +991,7 @@
size_t length, params_length;
char *tail = strchr(s1 + 6, ',');
- if (tail == NULL)
+ if (!tail)
length = strlen(s1);
else
length = tail - s1;
@@ -1000,18 +1010,20 @@
clear++;
}
/* Linux 2.4 doesn't pass the device, so we stuck it at the
- end of the options. */
+ * end of the options.
+ */
else if (strncmp(s1, "device=", 7) == 0) {
devname = s1 + 7;
/* terminate options right before device. device
- must be the last one. */
+ * must be the last one.
+ */
*s1 = '\0';
break;
}
/* Find next opt */
s2 = strchr(s1, ',');
- if (s2 == NULL) {
+ if (!s2) {
if (clear)
*s1 = '\0';
break;
@@ -1113,9 +1125,9 @@
if (lmd_is_client(lmd)) {
CDEBUG(D_MOUNT, "Mounting client %s\n", lmd->lmd_profile);
- if (client_fill_super == NULL)
+ if (!client_fill_super)
request_module("lustre");
- if (client_fill_super == NULL) {
+ if (!client_fill_super) {
LCONSOLE_ERROR_MSG(0x165, "Nothing registered for client mount! Is the 'lustre' module loaded?\n");
lustre_put_lsi(sb);
rc = -ENODEV;
@@ -1136,7 +1148,8 @@
}
/* If error happens in fill_super() call, @lsi will be killed there.
- * This is why we do not put it here. */
+ * This is why we do not put it here.
+ */
goto out;
out:
if (rc) {
@@ -1151,7 +1164,8 @@
}
/* We can't call ll_fill_super by name because it lives in a module that
- must be loaded after this one. */
+ * must be loaded after this one.
+ */
void lustre_register_client_fill_super(int (*cfs)(struct super_block *sb,
struct vfsmount *mnt))
{
@@ -1166,7 +1180,7 @@
EXPORT_SYMBOL(lustre_register_kill_super_cb);
/***************** FS registration ******************/
-struct dentry *lustre_mount(struct file_system_type *fs_type, int flags,
+static struct dentry *lustre_mount(struct file_system_type *fs_type, int flags,
const char *devname, void *data)
{
struct lustre_mount_data2 lmd2 = {
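The final obd_mount.c hunk marks lustre_mount() static: it is only ever invoked through the file_system_type mount callback, so it needs no external linkage, and sparse warns about non-static functions lacking a prototype. The pattern in miniature, using an invented ops table:

#include <stdio.h>

struct fs_ops {
	const char *name;
	int (*mount)(const char *dev);
};

/* static: reachable only through the ops table below, never by
 * name from another translation unit.
 */
static int demo_mount(const char *dev)
{
	printf("mounting %s\n", dev);
	return 0;
}

static const struct fs_ops demo_fs = {
	.name	= "demo",
	.mount	= demo_mount,
};

int main(void)
{
	return demo_fs.mount("/dev/null");
}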
diff --git a/drivers/staging/lustre/lustre/obdclass/obdo.c b/drivers/staging/lustre/lustre/obdclass/obdo.c
index 75e1dea..e6436cb 100644
--- a/drivers/staging/lustre/lustre/obdclass/obdo.c
+++ b/drivers/staging/lustre/lustre/obdclass/obdo.c
@@ -55,7 +55,8 @@
EXPORT_SYMBOL(obdo_set_parent_fid);
/* WARNING: the file systems must take care not to tinker with
- attributes they don't manage (such as blocks). */
+ * attributes they don't manage (such as blocks).
+ */
void obdo_from_inode(struct obdo *dst, struct inode *src, u32 valid)
{
u32 newvalid = 0;
@@ -122,7 +123,8 @@
ostid_set_seq_mdt0(&ioobj->ioo_oid);
/* Since 2.4 this does not contain o_mode in the low 16 bits.
- * Instead, it holds (bd_md_max_brw - 1) for multi-bulk BRW RPCs */
+ * Instead, it holds (bd_md_max_brw - 1) for multi-bulk BRW RPCs
+ */
ioobj->ioo_max_brw = 0;
}
EXPORT_SYMBOL(obdo_to_ioobj);
diff --git a/drivers/staging/lustre/lustre/obdecho/echo_client.c b/drivers/staging/lustre/lustre/obdecho/echo_client.c
index 4ca82af..3be28c5 100644
--- a/drivers/staging/lustre/lustre/obdecho/echo_client.c
+++ b/drivers/staging/lustre/lustre/obdecho/echo_client.c
@@ -146,7 +146,7 @@
struct echo_thread_info *info;
info = lu_context_key_get(&env->le_ctx, &echo_thread_key);
- LASSERT(info != NULL);
+ LASSERT(info);
return info;
}
@@ -267,7 +267,7 @@
const struct cl_page_slice *slice,
int ioret)
{
- LASSERT(slice->cpl_page->cp_sync_io != NULL);
+ LASSERT(slice->cpl_page->cp_sync_io);
}
static void echo_page_fini(const struct lu_env *env,
@@ -393,13 +393,13 @@
struct echo_lock *el;
el = kmem_cache_alloc(echo_lock_kmem, GFP_NOFS | __GFP_ZERO);
- if (el != NULL) {
+ if (el) {
cl_lock_slice_add(lock, &el->el_cl, obj, &echo_lock_ops);
el->el_object = cl2echo_obj(obj);
INIT_LIST_HEAD(&el->el_chain);
atomic_set(&el->el_refcount, 0);
}
- return el == NULL ? -ENOMEM : 0;
+ return !el ? -ENOMEM : 0;
}
static int echo_conf_set(const struct lu_env *env, struct cl_object *obj,
@@ -439,7 +439,7 @@
under = ed->ed_next;
below = under->ld_ops->ldo_object_alloc(env, obj->lo_header,
under);
- if (below == NULL)
+ if (!below)
return -ENOMEM;
lu_object_add(obj, below);
}
@@ -470,12 +470,12 @@
int lsm_size;
/* If export is lov/osc then use their obd method */
- if (ed->ed_next != NULL)
+ if (ed->ed_next)
return obd_alloc_memmd(ed->ed_ec->ec_exp, lsmp);
/* OFD has no unpackmd method, do everything here */
lsm_size = lov_stripe_md_size(1);
- LASSERT(*lsmp == NULL);
+ LASSERT(!*lsmp);
*lsmp = kzalloc(lsm_size, GFP_NOFS);
if (!*lsmp)
return -ENOMEM;
@@ -498,12 +498,11 @@
int lsm_size;
/* If export is lov/osc then use their obd method */
- if (ed->ed_next != NULL)
+ if (ed->ed_next)
return obd_free_memmd(ed->ed_ec->ec_exp, lsmp);
/* OFD has no unpackmd method, do everything here */
lsm_size = lov_stripe_md_size(1);
- LASSERT(*lsmp != NULL);
kfree((*lsmp)->lsm_oinfo[0]);
kfree(*lsmp);
*lsmp = NULL;
@@ -562,9 +561,9 @@
struct lu_object *obj = NULL;
/* we're the top dev. */
- LASSERT(hdr == NULL);
+ LASSERT(!hdr);
eco = kmem_cache_alloc(echo_object_kmem, GFP_NOFS | __GFP_ZERO);
- if (eco != NULL) {
+ if (eco) {
struct cl_object_header *hdr = &eco->eo_hdr;
obj = &echo_obj2cl(eco)->co_lu;
@@ -627,7 +626,7 @@
struct echo_thread_info *info;
info = kmem_cache_alloc(echo_thread_kmem, GFP_NOFS | __GFP_ZERO);
- if (info == NULL)
+ if (!info)
info = ERR_PTR(-ENOMEM);
return info;
}
@@ -658,7 +657,7 @@
struct echo_session_info *session;
session = kmem_cache_alloc(echo_session_kmem, GFP_NOFS | __GFP_ZERO);
- if (session == NULL)
+ if (!session)
session = ERR_PTR(-ENOMEM);
return session;
}
@@ -715,11 +714,11 @@
cleanup = 2;
obd = class_name2obd(lustre_cfg_string(cfg, 0));
- LASSERT(obd != NULL);
- LASSERT(env != NULL);
+ LASSERT(obd);
+ LASSERT(env);
tgt = class_name2obd(lustre_cfg_string(cfg, 1));
- if (tgt == NULL) {
+ if (!tgt) {
CERROR("Can not find tgt device %s\n",
lustre_cfg_string(cfg, 1));
rc = -ENODEV;
@@ -747,14 +746,14 @@
cleanup = 4;
/* if echo client is to be stacked upon ost device, the next is
- * NULL since ost is not a clio device so far */
- if (next != NULL && !lu_device_is_cl(next))
+ * NULL since ost is not a clio device so far
+ */
+ if (next && !lu_device_is_cl(next))
next = NULL;
tgt_type_name = tgt->obd_type->typ_name;
- if (next != NULL) {
- LASSERT(next != NULL);
- if (next->ld_site != NULL) {
+ if (next) {
+ if (next->ld_site) {
rc = -EBUSY;
goto out;
}
@@ -967,7 +966,8 @@
}
/* In the function below, .hs_keycmp resolves to
- * lu_obj_hop_keycmp() */
+ * lu_obj_hop_keycmp()
+ */
/* coverity[overrun-buffer-val] */
obj = cl_object_find(env, echo_dev2cl(d), fid, &conf->eoc_cl);
if (IS_ERR(obj)) {
@@ -1063,7 +1063,6 @@
struct list_head *el;
int found = 0, still_used = 0;
- LASSERT(ec != NULL);
spin_lock(&ec->ec_lock);
list_for_each(el, &ec->ec_locks) {
ecl = list_entry(el, struct echo_lock, el_chain);
@@ -1121,7 +1120,7 @@
int i;
LASSERT((offset & ~CFS_PAGE_MASK) == 0);
- LASSERT(ed->ed_next != NULL);
+ LASSERT(ed->ed_next);
env = cl_env_get(&refcheck);
if (IS_ERR(env))
return PTR_ERR(env);
@@ -1167,7 +1166,8 @@
cl_page_list_add(&queue->c2_qin, clp);
/* drop the reference count for cl_page_find, so that the page
- * will be freed in cl_2queue_fini. */
+ * will be freed in cl_2queue_fini.
+ */
cl_page_put(env, clp);
cl_page_clip(env, clp, 0, page_size);
@@ -1393,11 +1393,11 @@
brw_flags = OBD_BRW_ASYNC;
pga = kcalloc(npages, sizeof(*pga), GFP_NOFS);
- if (pga == NULL)
+ if (!pga)
return -ENOMEM;
pages = kcalloc(npages, sizeof(*pages), GFP_NOFS);
- if (pages == NULL) {
+ if (!pages) {
kfree(pga);
return -ENOMEM;
}
@@ -1406,11 +1406,11 @@
i < npages;
i++, pgp++, off += PAGE_CACHE_SIZE) {
- LASSERT(pgp->pg == NULL); /* for cleanup */
+ LASSERT(!pgp->pg); /* for cleanup */
rc = -ENOMEM;
pgp->pg = alloc_page(gfp_mask);
- if (pgp->pg == NULL)
+ if (!pgp->pg)
goto out;
pages[i] = pgp->pg;
@@ -1425,7 +1425,7 @@
}
/* brw mode can only be used at client */
- LASSERT(ed->ed_next != NULL);
+ LASSERT(ed->ed_next);
rc = cl_echo_object_brw(eco, rw, offset, pages, npages, async);
out:
@@ -1433,7 +1433,7 @@
verify = 0;
for (i = 0, pgp = pga; i < npages; i++, pgp++) {
- if (pgp->pg == NULL)
+ if (!pgp->pg)
continue;
if (verify) {
@@ -1475,7 +1475,7 @@
lnb = kcalloc(npages, sizeof(struct niobuf_local), GFP_NOFS);
rnb = kcalloc(npages, sizeof(struct niobuf_remote), GFP_NOFS);
- if (lnb == NULL || rnb == NULL) {
+ if (!lnb || !rnb) {
ret = -ENOMEM;
goto out;
}
@@ -1513,7 +1513,7 @@
struct page *page = lnb[i].page;
/* read past eof? */
- if (page == NULL && lnb[i].rc == 0)
+ if (!page && lnb[i].rc == 0)
continue;
if (async)
@@ -1582,7 +1582,7 @@
if (test_mode == 1)
async = 0;
- if (ed->ed_next == NULL && test_mode != 3) {
+ if (!ed->ed_next && test_mode != 3) {
test_mode = 3;
data->ioc_plen1 = data->ioc_count;
}
@@ -1834,7 +1834,7 @@
{
int rc;
- if (exp == NULL) {
+ if (!exp) {
rc = -EINVAL;
goto out;
}
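echo_client.c also deletes a few assertions outright, e.g. LASSERT(ec != NULL) immediately before spin_lock(&ec->ec_lock): a NULL pointer would crash on the very next line anyway, so the assertion adds noise without adding protection. Roughly:

#include <stdio.h>

struct echo_client {
	int ec_lock;	/* stand-in for the real spinlock */
};

static void lock_client(struct echo_client *ec)
{
	/* An assert(ec != NULL) here would be redundant; the
	 * dereference below already faults loudly on NULL.
	 */
	ec->ec_lock = 1;
}

int main(void)
{
	struct echo_client ec = { 0 };

	lock_client(&ec);
	printf("locked: %d\n", ec.ec_lock);
	return 0;
}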
diff --git a/drivers/staging/lustre/lustre/obdecho/echo_internal.h b/drivers/staging/lustre/lustre/obdecho/echo_internal.h
index 69063fa..f5034a2 100644
--- a/drivers/staging/lustre/lustre/obdecho/echo_internal.h
+++ b/drivers/staging/lustre/lustre/obdecho/echo_internal.h
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/*
diff --git a/drivers/staging/lustre/lustre/osc/lproc_osc.c b/drivers/staging/lustre/lustre/osc/lproc_osc.c
index b69ec0f..b5df5e6 100644
--- a/drivers/staging/lustre/lustre/osc/lproc_osc.c
+++ b/drivers/staging/lustre/lustre/osc/lproc_osc.c
@@ -381,7 +381,7 @@
DECLARE_CKSUM_NAME;
- if (obd == NULL)
+ if (!obd)
return 0;
for (i = 0; i < ARRAY_SIZE(cksum_name); i++) {
@@ -406,7 +406,7 @@
DECLARE_CKSUM_NAME;
char kernbuf[10];
- if (obd == NULL)
+ if (!obd)
return 0;
if (count > sizeof(kernbuf) - 1)
@@ -422,8 +422,8 @@
if (((1 << i) & obd->u.cli.cl_supp_cksum_types) == 0)
continue;
if (!strcmp(kernbuf, cksum_name[i])) {
- obd->u.cli.cl_cksum_type = 1 << i;
- return count;
+ obd->u.cli.cl_cksum_type = 1 << i;
+ return count;
}
}
return -EINVAL;
diff --git a/drivers/staging/lustre/lustre/osc/osc_cache.c b/drivers/staging/lustre/lustre/osc/osc_cache.c
index 6b5f8d0..c6623c1 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cache.c
+++ b/drivers/staging/lustre/lustre/osc/osc_cache.c
@@ -140,7 +140,7 @@
static inline struct osc_extent *rb_extent(struct rb_node *n)
{
- if (n == NULL)
+ if (!n)
return NULL;
return container_of(n, struct osc_extent, oe_node);
@@ -148,7 +148,7 @@
static inline struct osc_extent *next_extent(struct osc_extent *ext)
{
- if (ext == NULL)
+ if (!ext)
return NULL;
LASSERT(ext->oe_intree);
@@ -157,7 +157,7 @@
static inline struct osc_extent *prev_extent(struct osc_extent *ext)
{
- if (ext == NULL)
+ if (!ext)
return NULL;
LASSERT(ext->oe_intree);
@@ -240,7 +240,7 @@
goto out;
}
- if (ext->oe_osclock == NULL && ext->oe_grants > 0) {
+ if (!ext->oe_osclock && ext->oe_grants > 0) {
rc = 90;
goto out;
}
@@ -262,7 +262,8 @@
}
/* Do not verify page list if extent is in RPC. This is because an
- * in-RPC extent is supposed to be exclusively accessible w/o lock. */
+ * in-RPC extent is supposed to be exclusively accessible w/o lock.
+ */
if (ext->oe_state > OES_CACHE) {
rc = 0;
goto out;
@@ -319,7 +320,7 @@
if (!extent_debug)
return 0;
- for (tmp = first_extent(obj); tmp != NULL; tmp = next_extent(tmp)) {
+ for (tmp = first_extent(obj); tmp; tmp = next_extent(tmp)) {
if (tmp == ext)
continue;
if (tmp->oe_end >= ext->oe_start &&
@@ -347,7 +348,7 @@
struct osc_extent *ext;
ext = kmem_cache_alloc(osc_extent_kmem, GFP_NOFS | __GFP_ZERO);
- if (ext == NULL)
+ if (!ext)
return NULL;
RB_CLEAR_NODE(&ext->oe_node);
@@ -415,7 +416,7 @@
struct osc_extent *tmp, *p = NULL;
LASSERT(osc_object_is_locked(obj));
- while (n != NULL) {
+ while (n) {
tmp = rb_extent(n);
if (index < tmp->oe_start) {
n = n->rb_left;
@@ -439,7 +440,7 @@
struct osc_extent *ext;
ext = osc_extent_search(obj, index);
- if (ext != NULL && ext->oe_start <= index && index <= ext->oe_end)
+ if (ext && ext->oe_start <= index && index <= ext->oe_end)
return osc_extent_get(ext);
return NULL;
}
@@ -454,7 +455,7 @@
LASSERT(ext->oe_intree == 0);
LASSERT(ext->oe_obj == obj);
LASSERT(osc_object_is_locked(obj));
- while (*n != NULL) {
+ while (*n) {
tmp = rb_extent(*n);
parent = *n;
@@ -533,7 +534,7 @@
LASSERT(cur->oe_state == OES_CACHE);
LASSERT(osc_object_is_locked(obj));
- if (victim == NULL)
+ if (!victim)
return -EINVAL;
if (victim->oe_state != OES_CACHE || victim->oe_fsync_wait)
@@ -587,7 +588,8 @@
if (ext->oe_trunc_pending) {
/* a truncate process is waiting for this extent.
* This may happen due to a race, check
- * osc_cache_truncate_start(). */
+ * osc_cache_truncate_start().
+ */
osc_extent_state_set(ext, OES_TRUNC);
ext->oe_trunc_pending = 0;
} else {
@@ -639,11 +641,10 @@
int rc;
cur = osc_extent_alloc(obj);
- if (cur == NULL)
+ if (!cur)
return ERR_PTR(-ENOMEM);
lock = cl_lock_at_pgoff(env, osc2cl(obj), index, NULL, 1, 0);
- LASSERT(lock != NULL);
LASSERT(lock->cll_descr.cld_mode >= CLM_WRITE);
LASSERT(cli->cl_chunkbits >= PAGE_CACHE_SHIFT);
@@ -678,9 +679,9 @@
restart:
osc_object_lock(obj);
ext = osc_extent_search(obj, cur->oe_start);
- if (ext == NULL)
+ if (!ext)
ext = first_extent(obj);
- while (ext != NULL) {
+ while (ext) {
loff_t ext_chk_start = ext->oe_start >> ppc_bits;
loff_t ext_chk_end = ext->oe_end >> ppc_bits;
@@ -705,18 +706,21 @@
/* ok, from now on, ext and cur have these attrs:
* 1. covered by the same lock
- * 2. contiguous at chunk level or overlapping. */
+ * 2. contiguous at chunk level or overlapping.
+ */
if (overlapped(ext, cur)) {
/* cur is the minimum unit, so overlapping means
- * full contain. */
+ * full containment.
+ */
EASSERTF((ext->oe_start <= cur->oe_start &&
ext->oe_end >= cur->oe_end),
ext, EXTSTR, EXTPARA(cur));
if (ext->oe_state > OES_CACHE || ext->oe_fsync_wait) {
/* for simplicity, we wait for this extent to
- * finish before going forward. */
+ * finish before going forward.
+ */
conflict = osc_extent_get(ext);
break;
}
@@ -729,17 +733,20 @@
if (ext->oe_state != OES_CACHE || ext->oe_fsync_wait) {
/* we can't do anything for a non OES_CACHE extent, or
* if there is someone waiting for this extent to be
- * flushed, try next one. */
+ * flushed, try next one.
+ */
ext = next_extent(ext);
continue;
}
/* check if they belong to the same rpc slot before trying to
* merge. the extents are not overlapped and contiguous at
- * chunk level to get here. */
+ * chunk level to get here.
+ */
if (ext->oe_max_end != max_end) {
/* if they don't belong to the same RPC slot or
- * max_pages_per_rpc has ever changed, do not merge. */
+ * max_pages_per_rpc has ever changed, do not merge.
+ */
ext = next_extent(ext);
continue;
}
@@ -748,7 +755,8 @@
* level so that we know the whole extent is covered by grant
* (the pages in the extent are NOT required to be contiguous).
* Otherwise, it will be too much difficult to know which
- * chunks have grants allocated. */
+ * chunks have grants allocated.
+ */
/* try to do front merge - extend ext's start */
if (chunk + 1 == ext_chk_start) {
@@ -768,28 +776,29 @@
*grants -= chunksize;
/* try to merge with the next one because we just fill
- * in a gap */
+ * in a gap
+ */
if (osc_extent_merge(env, ext, next_extent(ext)) == 0)
/* we can save extent tax from next extent */
*grants += cli->cl_extent_tax;
found = osc_extent_hold(ext);
}
- if (found != NULL)
+ if (found)
break;
ext = next_extent(ext);
}
osc_extent_tree_dump(D_CACHE, obj);
- if (found != NULL) {
- LASSERT(conflict == NULL);
+ if (found) {
+ LASSERT(!conflict);
if (!IS_ERR(found)) {
LASSERT(found->oe_osclock == cur->oe_osclock);
OSC_EXTENT_DUMP(D_CACHE, found,
"found caching ext for %lu.\n", index);
}
- } else if (conflict == NULL) {
+ } else if (!conflict) {
/* create a new extent */
EASSERT(osc_extent_is_overlapped(obj, cur) == 0, cur);
cur->oe_grants = chunksize + cli->cl_extent_tax;
@@ -804,11 +813,12 @@
}
osc_object_unlock(obj);
- if (conflict != NULL) {
- LASSERT(found == NULL);
+ if (conflict) {
+ LASSERT(!found);
/* waiting for IO to finish. Please notice that it's impossible
- * to be an OES_TRUNC extent. */
+ * to be an OES_TRUNC extent.
+ */
rc = osc_extent_wait(env, conflict, OES_INV);
osc_extent_put(env, conflict);
conflict = NULL;
@@ -865,7 +875,8 @@
last_count != PAGE_CACHE_SIZE) {
/* For short writes we shouldn't count parts of pages that
* span a whole chunk on the OST side, or our accounting goes
- * wrong. Should match the code in filter_grant_check. */
+ * wrong. Should match the code in filter_grant_check.
+ */
int offset = oap->oap_page_off & ~CFS_PAGE_MASK;
int count = oap->oap_count + (offset & (blocksize - 1));
int end = (offset + oap->oap_count) & (blocksize - 1);
@@ -909,7 +920,8 @@
osc_object_lock(obj);
LASSERT(sanity_check_nolock(ext) == 0);
/* `Kick' this extent only if the caller is waiting for it to be
- * written out. */
+ * written out.
+ */
if (state == OES_INV && !ext->oe_urgent && !ext->oe_hp &&
!ext->oe_trunc_pending) {
if (ext->oe_state == OES_ACTIVE) {
@@ -967,7 +979,8 @@
/* Request new lu_env.
* We can't use that env from osc_cache_truncate_start() because
- * it's from lov_io_sub and not fully initialized. */
+ * it's from lov_io_sub and not fully initialized.
+ */
env = cl_env_nested_get(&nest);
io = &osc_env_info(env)->oti_io;
io->ci_obj = cl_object_top(osc2cl(obj));
@@ -984,7 +997,8 @@
LASSERT(list_empty(&oap->oap_rpc_item));
/* only discard the pages with their index greater than
- * trunc_index, and ... */
+ * trunc_index, and ...
+ */
if (sub->cp_index < trunc_index ||
(sub->cp_index == trunc_index && partial)) {
/* accounting how many pages remaining in the chunk
@@ -1028,11 +1042,13 @@
pgoff_t last_index;
/* if there is no pages in this chunk, we can also free grants
- * for the last chunk */
+ * for the last chunk
+ */
if (pages_in_chunk == 0) {
/* if this is the 1st chunk and no pages in this chunk,
* ext->oe_nr_pages must be zero, so we should be in
- * the other if-clause. */
+ * the other if-clause.
+ */
LASSERT(trunc_chunk > 0);
--trunc_chunk;
++chunks;
@@ -1074,13 +1090,13 @@
LASSERT(sanity_check(ext) == 0);
/* in locking state, any process should not touch this extent. */
EASSERT(ext->oe_state == OES_LOCKING, ext);
- EASSERT(ext->oe_owner != NULL, ext);
+ EASSERT(ext->oe_owner, ext);
OSC_EXTENT_DUMP(D_CACHE, ext, "make ready\n");
list_for_each_entry(oap, &ext->oe_pages, oap_pending_item) {
++page_count;
- if (last == NULL || last->oap_obj_off < oap->oap_obj_off)
+ if (!last || last->oap_obj_off < oap->oap_obj_off)
last = oap;
/* checking ASYNC_READY is race safe */
@@ -1103,9 +1119,10 @@
}
LASSERT(page_count == ext->oe_nr_pages);
- LASSERT(last != NULL);
+ LASSERT(last);
/* the last page is the only one we need to refresh its count by
- * the size of file. */
+ * the size of file.
+ */
if (!(last->oap_async_flags & ASYNC_COUNT_STABLE)) {
last->oap_count = osc_refresh_count(env, last, OBD_BRW_WRITE);
LASSERT(last->oap_count > 0);
@@ -1114,7 +1131,8 @@
}
/* for the rest of pages, we don't need to call osf_refresh_count()
- * because it's known they are not the last page */
+ * because it's known they are not the last page
+ */
list_for_each_entry(oap, &ext->oe_pages, oap_pending_item) {
if (!(oap->oap_async_flags & ASYNC_COUNT_STABLE)) {
oap->oap_count = PAGE_CACHE_SIZE - oap->oap_page_off;
@@ -1167,9 +1185,10 @@
end_index = min(ext->oe_max_end, ((chunk + 1) << ppc_bits) - 1);
next = next_extent(ext);
- if (next != NULL && next->oe_start <= end_index) {
+ if (next && next->oe_start <= end_index) {
/* complex mode - overlapped with the next extent,
- * this case will be handled by osc_extent_find() */
+ * this case will be handled by osc_extent_find()
+ */
rc = -EAGAIN;
goto out;
}
@@ -1197,7 +1216,7 @@
/* osc_object_lock(obj); */
cnt = 1;
- for (ext = first_extent(obj); ext != NULL; ext = next_extent(ext))
+ for (ext = first_extent(obj); ext; ext = next_extent(ext))
OSC_EXTENT_DUMP(level, ext, "in tree %d.\n", cnt++);
cnt = 1;
@@ -1262,7 +1281,6 @@
/* readpage queues with _COUNT_STABLE, shouldn't get here. */
LASSERT(!(cmd & OBD_BRW_READ));
- LASSERT(opg != NULL);
obj = opg->ops_cl.cpl_obj;
cl_object_attr_lock(obj);
@@ -1299,16 +1317,16 @@
* page->cp_req can be NULL if io submission failed before
* cl_req was allocated.
*/
- if (page->cp_req != NULL)
+ if (page->cp_req)
cl_req_page_done(env, page);
- LASSERT(page->cp_req == NULL);
+ LASSERT(!page->cp_req);
crt = cmd == OBD_BRW_READ ? CRT_READ : CRT_WRITE;
/* Clear opg->ops_transfer_pinned before VM lock is released. */
opg->ops_transfer_pinned = 0;
spin_lock(&obj->oo_seatbelt);
- LASSERT(opg->ops_submitter != NULL);
+ LASSERT(opg->ops_submitter);
LASSERT(!list_empty(&opg->ops_inflight));
list_del_init(&opg->ops_inflight);
opg->ops_submitter = NULL;
@@ -1367,7 +1385,8 @@
}
/* the companion to osc_consume_write_grant, called when a brw has completed.
- * must be called with the loi lock held. */
+ * must be called with the loi lock held.
+ */
static void osc_release_write_grant(struct client_obd *cli,
struct brw_page *pga)
{
@@ -1410,7 +1429,8 @@
/* it's quite normal for us to get more grant than reserved.
* Thinking about a case that two extents merged by adding a new
* chunk, we can save one extent tax. If extent tax is greater than
- * one chunk, we can save more grant by adding a new chunk */
+ * one chunk, we can save more grant by adding a new chunk
+ */
cli->cl_reserved_grant -= reserved;
if (unused > reserved) {
cli->cl_avail_grant += reserved;
@@ -1454,7 +1474,8 @@
cli->cl_lost_grant += lost_grant;
if (cli->cl_avail_grant < grant && cli->cl_lost_grant >= grant) {
/* borrow some grant from truncate to avoid the case that
- * truncate uses up all avail grant */
+ * truncate uses up all avail grant
+ */
cli->cl_lost_grant -= grant;
cli->cl_avail_grant += grant;
}
@@ -1539,7 +1560,8 @@
client_obd_list_lock(&cli->cl_loi_list_lock);
/* force the caller to try sync io. this can jump the list
- * of queued writes and create a discontiguous rpc stream */
+ * of queued writes and create a discontiguous rpc stream
+ */
if (OBD_FAIL_CHECK(OBD_FAIL_OSC_NO_GRANT) ||
cli->cl_dirty_max < PAGE_CACHE_SIZE ||
cli->cl_ar.ar_force_sync || loi->loi_ar.ar_force_sync) {
@@ -1558,7 +1580,8 @@
* Adding a cache waiter will trigger urgent write-out no matter what
* RPC size will be.
* The exiting condition is no avail grants and no dirty pages caching,
- * that really means there is no space on the OST. */
+ * that really means there is no space on the OST.
+ */
init_waitqueue_head(&ocw.ocw_waitq);
ocw.ocw_oap = oap;
ocw.ocw_grant = bytes;
@@ -1640,7 +1663,8 @@
/* This maintains the lists of pending pages to read/write for a given object
* (lop). This is used by osc_check_rpcs->osc_next_obj() and osc_list_maint()
- * to quickly find objects that are ready to send an RPC. */
+ * to quickly find objects that are ready to send an RPC.
+ */
static int osc_makes_rpc(struct client_obd *cli, struct osc_object *osc,
int cmd)
{
@@ -1649,8 +1673,9 @@
/* if we have an invalid import we want to drain the queued pages
* by forcing them through rpcs that immediately fail and complete
* the pages. recovery relies on this to empty the queued pages
- * before canceling the locks and evicting down the llite pages */
- if ((cli->cl_import == NULL || cli->cl_import->imp_invalid))
+ * before canceling the locks and evicting down the llite pages
+ */
+ if (!cli->cl_import || cli->cl_import->imp_invalid)
invalid_import = 1;
if (cmd & OBD_BRW_WRITE) {
@@ -1670,7 +1695,8 @@
}
/* trigger a write rpc stream as long as there are dirtiers
* waiting for space. as they're waiting, they're not going to
- * create more pages to coalesce with what's waiting.. */
+ * create more pages to coalesce with what's waiting.
+ */
if (!list_empty(&cli->cl_cache_waiters)) {
CDEBUG(D_CACHE, "cache waiters forcing RPC\n");
return 1;
@@ -1723,7 +1749,8 @@
}
/* maintain the osc's cli list membership invariants so that osc_send_oap_rpc
- * can find pages to build into rpcs quickly */
+ * can find pages to build into rpcs quickly
+ */
static int __osc_list_maint(struct client_obd *cli, struct osc_object *osc)
{
if (osc_makes_hprpc(osc)) {
@@ -1761,7 +1788,8 @@
* application. As an async write fails we record the error code for later if
* the app does an fsync. As long as errors persist we force future rpcs to be
* sync so that the app can get a sync error and break the cycle of queueing
- * pages for which writeback will fail. */
+ * pages for which writeback will fail.
+ */
static void osc_process_ar(struct osc_async_rc *ar, __u64 xid,
int rc)
{
@@ -1780,7 +1808,8 @@
}
/* this must be called holding the loi list lock to give coverage to exit_cache,
- * async_flag maintenance, and oap_request */
+ * async_flag maintenance, and oap_request
+ */
static void osc_ap_completion(const struct lu_env *env, struct client_obd *cli,
struct osc_async_page *oap, int sent, int rc)
{
@@ -1788,7 +1817,7 @@
struct lov_oinfo *loi = osc->oo_oinfo;
__u64 xid = 0;
- if (oap->oap_request != NULL) {
+ if (oap->oap_request) {
xid = ptlrpc_req_xid(oap->oap_request);
ptlrpc_req_finished(oap->oap_request);
oap->oap_request = NULL;
@@ -1906,7 +1935,7 @@
while ((ext = next_extent(ext)) != NULL) {
if ((ext->oe_state != OES_CACHE) ||
(!list_empty(&ext->oe_link) &&
- ext->oe_owner != NULL))
+ ext->oe_owner))
continue;
if (!try_to_add_extent_for_io(cli, ext, rpclist,
@@ -1918,10 +1947,10 @@
return page_count;
ext = first_extent(obj);
- while (ext != NULL) {
+ while (ext) {
if ((ext->oe_state != OES_CACHE) ||
/* this extent may be already in current rpclist */
- (!list_empty(&ext->oe_link) && ext->oe_owner != NULL)) {
+ (!list_empty(&ext->oe_link) && ext->oe_owner)) {
ext = next_extent(ext);
continue;
}
@@ -1968,7 +1997,8 @@
}
/* we're going to grab page lock, so release object lock because
- * lock order is page lock -> object lock. */
+ * lock order is page lock -> object lock.
+ */
osc_object_unlock(osc);
list_for_each_entry_safe(ext, tmp, &rpclist, oe_link) {
@@ -1980,7 +2010,7 @@
continue;
}
}
- if (first == NULL) {
+ if (!first) {
first = ext;
srvlock = ext->oe_srvlock;
} else {
@@ -2053,12 +2083,14 @@
})
/* This is called by osc_check_rpcs() to find which objects have pages that
- * we could be sending. These lists are maintained by osc_makes_rpc(). */
+ * we could be sending. These lists are maintained by osc_makes_rpc().
+ */
static struct osc_object *osc_next_obj(struct client_obd *cli)
{
/* First return objects that have blocked locks so that they
* will be flushed quickly and other clients can get the lock,
- * then objects which have pages ready to be stuffed into RPCs */
+ * then objects which have pages ready to be stuffed into RPCs
+ */
if (!list_empty(&cli->cl_loi_hp_ready_list))
return list_to_obj(&cli->cl_loi_hp_ready_list, hp_ready_item);
if (!list_empty(&cli->cl_loi_ready_list))
@@ -2067,14 +2099,16 @@
/* then if we have cache waiters, return all objects with queued
* writes. This is especially important when many small files
* have filled up the cache and not been fired into rpcs because
- * they don't pass the nr_pending/object threshold */
+ * they don't pass the nr_pending/object threshold
+ */
if (!list_empty(&cli->cl_cache_waiters) &&
!list_empty(&cli->cl_loi_write_list))
return list_to_obj(&cli->cl_loi_write_list, write_item);
/* then return all queued objects when we have an invalid import
- * so that they get flushed */
- if (cli->cl_import == NULL || cli->cl_import->imp_invalid) {
+ * so that they get flushed
+ */
+ if (!cli->cl_import || cli->cl_import->imp_invalid) {
if (!list_empty(&cli->cl_loi_write_list))
return list_to_obj(&cli->cl_loi_write_list, write_item);
if (!list_empty(&cli->cl_loi_read_list))
@@ -2111,7 +2145,8 @@
* would be redundant if we were getting read/write work items
* instead of objects. we don't want send_oap_rpc to drain a
* partial read pending queue when we're given this object to
- * do io on writes while there are cache waiters */
+ * do io on writes while there are cache waiters
+ */
osc_object_lock(osc);
if (osc_makes_rpc(cli, osc, OBD_BRW_WRITE)) {
rc = osc_send_write_rpc(env, cli, osc);
@@ -2133,7 +2168,8 @@
* because it might be blocked at grabbing
* the page lock as we mentioned.
*
- * Anyway, continue to drain pages. */
+ * Anyway, continue to drain pages.
+ */
/* break; */
}
}
@@ -2158,12 +2194,13 @@
{
int rc = 0;
- if (osc != NULL && osc_list_maint(cli, osc) == 0)
+ if (osc && osc_list_maint(cli, osc) == 0)
return 0;
if (!async) {
/* disable osc_lru_shrink() temporarily to avoid
- * potential stack overrun problem. LU-2859 */
+ * potential stack overrun problem. LU-2859
+ */
atomic_inc(&cli->cl_lru_shrinkers);
client_obd_list_lock(&cli->cl_loi_list_lock);
osc_check_rpcs(env, cli);
@@ -2171,7 +2208,7 @@
atomic_dec(&cli->cl_lru_shrinkers);
} else {
CDEBUG(D_CACHE, "Queue writeback work for client %p.\n", cli);
- LASSERT(cli->cl_writeback_work != NULL);
+ LASSERT(cli->cl_writeback_work);
rc = ptlrpcd_queue_work(cli->cl_writeback_work);
}
return rc;
@@ -2236,7 +2273,7 @@
if (oap->oap_magic != OAP_MAGIC)
return -EINVAL;
- if (cli->cl_import == NULL || cli->cl_import->imp_invalid)
+ if (!cli->cl_import || cli->cl_import->imp_invalid)
return -EIO;
if (!list_empty(&oap->oap_pending_item) ||
@@ -2287,12 +2324,14 @@
* 1. if there exists an active extent for this IO, mostly this page
* can be added to the active extent and sometimes we need to
* expand extent to accommodate this page;
- * 2. otherwise, a new extent will be allocated. */
+ * 2. otherwise, a new extent will be allocated.
+ */
ext = oio->oi_active;
- if (ext != NULL && ext->oe_start <= index && ext->oe_max_end >= index) {
+ if (ext && ext->oe_start <= index && ext->oe_max_end >= index) {
/* one chunk plus extent overhead must be enough to write this
- * page */
+ * page
+ */
grants = (1 << cli->cl_chunkbits) + cli->cl_extent_tax;
if (ext->oe_end >= index)
grants = 0;
@@ -2319,7 +2358,7 @@
}
}
rc = 0;
- } else if (ext != NULL) {
+ } else if (ext) {
/* index is located outside of active extent */
need_release = 1;
}
@@ -2329,13 +2368,14 @@
ext = NULL;
}
- if (ext == NULL) {
+ if (!ext) {
int tmp = (1 << cli->cl_chunkbits) + cli->cl_extent_tax;
/* try to find new extent to cover this page */
- LASSERT(oio->oi_active == NULL);
+ LASSERT(!oio->oi_active);
/* we may have allocated grant for this page if we failed
- * to expand the previous active extent. */
+ * to expand the previous active extent.
+ */
LASSERT(ergo(grants > 0, grants >= tmp));
rc = 0;
@@ -2362,8 +2402,8 @@
osc_unreserve_grant(cli, grants, tmp);
}
- LASSERT(ergo(rc == 0, ext != NULL));
- if (ext != NULL) {
+ LASSERT(ergo(rc == 0, ext));
+ if (ext) {
EASSERTF(ext->oe_end >= index && ext->oe_start <= index,
ext, "index = %lu.\n", index);
LASSERT((oap->oap_brw_flags & OBD_BRW_FROM_GRANT) != 0);
@@ -2400,15 +2440,16 @@
ext = osc_extent_lookup(obj, oap2cl_page(oap)->cp_index);
/* only truncated pages are allowed to be taken out.
* See osc_extent_truncate() and osc_cache_truncate_start()
- * for details. */
- if (ext != NULL && ext->oe_state != OES_TRUNC) {
+ * for details.
+ */
+ if (ext && ext->oe_state != OES_TRUNC) {
OSC_EXTENT_DUMP(D_ERROR, ext, "trunc at %lu.\n",
oap2cl_page(oap)->cp_index);
rc = -EBUSY;
}
}
osc_object_unlock(obj);
- if (ext != NULL)
+ if (ext)
osc_extent_put(env, ext);
return rc;
}
@@ -2433,7 +2474,7 @@
osc_object_lock(obj);
ext = osc_extent_lookup(obj, index);
- if (ext == NULL) {
+ if (!ext) {
osc_extent_tree_dump(D_ERROR, obj);
LASSERTF(0, "page index %lu is NOT covered.\n", index);
}
@@ -2451,7 +2492,8 @@
* exists a deadlock problem because other process can wait for
* page writeback bit holding page lock; and meanwhile in
* vvp_page_make_ready(), we need to grab page lock before
- * really sending the RPC. */
+ * really sending the RPC.
+ */
case OES_TRUNC:
/* race with truncate, page will be redirtied */
case OES_ACTIVE:
@@ -2459,7 +2501,8 @@
* re-dirty the page. If we continued on here, and we were the
* one making the extent active, we could deadlock waiting for
* the page writeback to clear but it won't because the extent
- * is active and won't be written out. */
+ * is active and won't be written out.
+ */
rc = -EAGAIN;
goto out;
default:
@@ -2530,12 +2573,13 @@
if (ext->oe_start <= index && ext->oe_end >= index) {
LASSERT(ext->oe_state == OES_LOCK_DONE);
/* For OES_LOCK_DONE state extent, it has already held
- * a refcount for RPC. */
+ * a refcount for RPC.
+ */
found = osc_extent_get(ext);
break;
}
}
- if (found != NULL) {
+ if (found) {
list_del_init(&found->oe_link);
osc_update_pending(obj, cmd, -found->oe_nr_pages);
osc_object_unlock(obj);
@@ -2546,8 +2590,9 @@
} else {
osc_object_unlock(obj);
/* ok, it's been put in an rpc. only one oap gets a request
- * reference */
- if (oap->oap_request != NULL) {
+ * reference
+ */
+ if (oap->oap_request) {
ptlrpc_mark_interrupted(oap->oap_request);
ptlrpcd_wake(oap->oap_request);
ptlrpc_req_finished(oap->oap_request);
@@ -2582,7 +2627,7 @@
}
ext = osc_extent_alloc(obj);
- if (ext == NULL) {
+ if (!ext) {
list_for_each_entry_safe(oap, tmp, list, oap_pending_item) {
list_del_init(&oap->oap_pending_item);
osc_ap_completion(env, cli, oap, 0, -ENOMEM);
@@ -2637,18 +2682,19 @@
again:
osc_object_lock(obj);
ext = osc_extent_search(obj, index);
- if (ext == NULL)
+ if (!ext)
ext = first_extent(obj);
else if (ext->oe_end < index)
ext = next_extent(ext);
- while (ext != NULL) {
+ while (ext) {
EASSERT(ext->oe_state != OES_TRUNC, ext);
if (ext->oe_state > OES_CACHE || ext->oe_urgent) {
/* if ext is in urgent state, it means there must exist
* a page already having been flushed by write_page().
* We have to wait for this extent because we can't
- * truncate that page. */
+ * truncate that page.
+ */
LASSERT(!ext->oe_hp);
OSC_EXTENT_DUMP(D_CACHE, ext,
"waiting for busy extent\n");
@@ -2663,7 +2709,8 @@
/* though we grab inode mutex for write path, but we
* release it before releasing extent(in osc_io_end()),
* so there is a race window that an extent is still
- * in OES_ACTIVE when truncate starts. */
+ * in OES_ACTIVE when truncate starts.
+ */
LASSERT(!ext->oe_trunc_pending);
ext->oe_trunc_pending = 1;
} else {
@@ -2688,7 +2735,8 @@
list_del_init(&ext->oe_link);
/* extent may be in OES_ACTIVE state because inode mutex
- * is released before osc_io_end() in file write case */
+ * is released before osc_io_end() in file write case
+ */
if (ext->oe_state != OES_TRUNC)
osc_extent_wait(env, ext, OES_TRUNC);
@@ -2713,19 +2761,21 @@
/* we need to hold this extent in OES_TRUNC state so
* that no writeback will happen. This is to avoid
- * BUG 17397. */
- LASSERT(oio->oi_trunc == NULL);
+ * BUG 17397.
+ */
+ LASSERT(!oio->oi_trunc);
oio->oi_trunc = osc_extent_get(ext);
OSC_EXTENT_DUMP(D_CACHE, ext,
"trunc at %llu\n", size);
}
osc_extent_put(env, ext);
}
- if (waiting != NULL) {
+ if (waiting) {
int rc;
/* ignore the result of osc_extent_wait the write initiator
- * should take care of it. */
+ * should take care of it.
+ */
rc = osc_extent_wait(env, waiting, OES_INV);
if (rc < 0)
OSC_EXTENT_DUMP(D_CACHE, waiting, "error: %d.\n", rc);
@@ -2746,7 +2796,7 @@
struct osc_extent *ext = oio->oi_trunc;
oio->oi_trunc = NULL;
- if (ext != NULL) {
+ if (ext) {
bool unplug = false;
EASSERT(ext->oe_nr_pages > 0, ext);
@@ -2789,11 +2839,11 @@
again:
osc_object_lock(obj);
ext = osc_extent_search(obj, index);
- if (ext == NULL)
+ if (!ext)
ext = first_extent(obj);
else if (ext->oe_end < index)
ext = next_extent(ext);
- while (ext != NULL) {
+ while (ext) {
int rc;
if (ext->oe_start > end)
@@ -2844,11 +2894,11 @@
osc_object_lock(obj);
ext = osc_extent_search(obj, start);
- if (ext == NULL)
+ if (!ext)
ext = first_extent(obj);
else if (ext->oe_end < start)
ext = next_extent(ext);
- while (ext != NULL) {
+ while (ext) {
if (ext->oe_start > end)
break;
@@ -2867,12 +2917,13 @@
ext->oe_urgent = 1;
list = &obj->oo_urgent_exts;
}
- if (list != NULL)
+ if (list)
list_move_tail(&ext->oe_link, list);
unplug = true;
} else {
/* the only discarder is lock cancelling, so
- * [start, end] must contain this extent */
+ * [start, end] must contain this extent
+ */
EASSERT(ext->oe_start >= start &&
ext->oe_max_end <= end, ext);
osc_extent_state_set(ext, OES_LOCKING);
@@ -2887,14 +2938,16 @@
/* It's pretty bad to wait for ACTIVE extents, because
* we don't know how long we will wait for it to be
* flushed since it may be blocked at awaiting more
- * grants. We do this for the correctness of fsync. */
+ * grants. We do this for the correctness of fsync.
+ */
LASSERT(hp == 0 && discard == 0);
ext->oe_urgent = 1;
break;
case OES_TRUNC:
/* this extent is being truncated, can't do anything
* for it now. it will be set to urgent after truncate
- * is finished in osc_cache_truncate_end(). */
+ * is finished in osc_cache_truncate_end().
+ */
default:
break;
}
@@ -2913,7 +2966,8 @@
EASSERT(ext->oe_state == OES_LOCKING, ext);
/* Discard caching pages. We don't actually write this
- * extent out but we complete it as if we did. */
+ * extent out but we complete it as if we did.
+ */
rc = osc_extent_make_ready(env, ext);
if (unlikely(rc < 0)) {
OSC_EXTENT_DUMP(D_ERROR, ext,
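One osc_cache.c shape worth calling out, now in boolean form, is the lower-bound walk over the extent tree: look up the extent covering an index, fall back to the first extent (or step to the next one), then iterate while the pointer is non-NULL. The loop structure, sketched over a sorted array instead of the real rbtree:

#include <stdio.h>

struct extent {
	unsigned long start, end;
};

static struct extent exts[] = {
	{ 0, 3 }, { 8, 15 }, { 32, 63 },
};
#define NR_EXTS (sizeof(exts) / sizeof(exts[0]))

/* First extent whose range ends at or after index, or NULL. */
static struct extent *lower_bound(unsigned long index)
{
	for (unsigned int i = 0; i < NR_EXTS; i++)
		if (exts[i].end >= index)
			return &exts[i];
	return NULL;
}

static struct extent *next_extent(struct extent *ext)
{
	return (ext && ext + 1 < exts + NR_EXTS) ? ext + 1 : NULL;
}

int main(void)
{
	struct extent *ext = lower_bound(10);

	while (ext) {	/* not: while (ext != NULL) */
		printf("[%lu, %lu]\n", ext->start, ext->end);
		ext = next_extent(ext);
	}
	return 0;
}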
diff --git a/drivers/staging/lustre/lustre/osc/osc_cl_internal.h b/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
index 415c27e..d55d04d 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
+++ b/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
@@ -69,10 +69,12 @@
/** true if this io is lockless. */
int oi_lockless;
/** active extents, we know how many bytes is going to be written,
- * so having an active extent will prevent it from being fragmented */
+ * so having an active extent will prevent it from being fragmented
+ */
struct osc_extent *oi_active;
/** partially truncated extent, we need to hold this extent to prevent
- * page writeback from happening. */
+ * page writeback from happening.
+ */
struct osc_extent *oi_trunc;
struct obd_info oi_info;
@@ -154,7 +156,8 @@
atomic_t oo_nr_writes;
/** Protect extent tree. Will be used to protect
- * oo_{read|write}_pages soon. */
+ * oo_{read|write}_pages soon.
+ */
spinlock_t oo_lock;
};
@@ -472,7 +475,7 @@
struct osc_thread_info *info;
info = lu_context_key_get(&env->le_ctx, &osc_key);
- LASSERT(info != NULL);
+ LASSERT(info);
return info;
}
@@ -481,7 +484,7 @@
struct osc_session *ses;
ses = lu_context_key_get(env->le_ses, &osc_session_key);
- LASSERT(ses != NULL);
+ LASSERT(ses);
return ses;
}
@@ -522,7 +525,7 @@
return (struct cl_object *)&obj->oo_cl;
}
-static inline ldlm_mode_t osc_cl_lock2ldlm(enum cl_lock_mode mode)
+static inline enum ldlm_mode osc_cl_lock2ldlm(enum cl_lock_mode mode)
{
LASSERT(mode == CLM_READ || mode == CLM_WRITE || mode == CLM_GROUP);
if (mode == CLM_READ)
@@ -533,7 +536,7 @@
return LCK_GROUP;
}
-static inline enum cl_lock_mode osc_ldlm2cl_lock(ldlm_mode_t mode)
+static inline enum cl_lock_mode osc_ldlm2cl_lock(enum ldlm_mode mode)
{
LASSERT(mode == LCK_PR || mode == LCK_PW || mode == LCK_GROUP);
if (mode == LCK_PR)
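The ldlm_mode_t typedef is replaced by the underlying enum ldlm_mode throughout this series; the two inline helpers above translate between client-layer and DLM lock modes. A self-contained model of the round trip (the enum values here are local stand-ins, not the driver's actual flag values):

#include <assert.h>

enum cl_lock_mode { CLM_READ, CLM_WRITE, CLM_GROUP };
enum ldlm_mode    { LCK_PR, LCK_PW, LCK_GROUP };

static enum ldlm_mode cl2ldlm(enum cl_lock_mode mode)
{
        if (mode == CLM_READ)
                return LCK_PR;
        if (mode == CLM_WRITE)
                return LCK_PW;
        return LCK_GROUP;
}

static enum cl_lock_mode ldlm2cl(enum ldlm_mode mode)
{
        if (mode == LCK_PR)
                return CLM_READ;
        if (mode == LCK_PW)
                return CLM_WRITE;
        return CLM_GROUP;
}

int main(void)
{
        /* the two mappings are inverses of each other */
        assert(ldlm2cl(cl2ldlm(CLM_READ)) == CLM_READ);
        assert(ldlm2cl(cl2ldlm(CLM_WRITE)) == CLM_WRITE);
        assert(ldlm2cl(cl2ldlm(CLM_GROUP)) == CLM_GROUP);
        return 0;
}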
@@ -627,22 +630,26 @@
oe_srvlock:1,
oe_memalloc:1,
/** an ACTIVE extent is going to be truncated, so when this extent
- * is released, it will turn into TRUNC state instead of CACHE. */
+ * is released, it will turn into TRUNC state instead of CACHE.
+ */
oe_trunc_pending:1,
/** this extent should be written asap and someone may wait for the
* write to finish. This bit is usually set along with urgent if
* the extent was CACHE state.
* fsync_wait extent can't be merged because new extent region may
- * exceed fsync range. */
+ * exceed fsync range.
+ */
oe_fsync_wait:1,
/** covering lock is being canceled */
oe_hp:1,
/** this extent should be written back asap. set if one of the pages is
- * called by page WB daemon, or sync write or reading requests. */
+ * called by page WB daemon, or sync write or reading requests.
+ */
oe_urgent:1;
/** how many grants allocated for this extent.
* Grant allocated for this extent. There is no grant allocated
- * for reading extents and sync write extents. */
+ * for reading extents and sync write extents.
+ */
unsigned int oe_grants;
/** # of dirty pages in this extent */
unsigned int oe_nr_pages;
@@ -655,21 +662,25 @@
struct osc_page *oe_next_page;
/** start and end index of this extent, include start and end
* themselves. Page offset here is the page index of osc_pages.
- * oe_start is used as keyword for red-black tree. */
+ * oe_start is used as the key for the red-black tree.
+ */
pgoff_t oe_start;
pgoff_t oe_end;
/** maximum ending index of this extent, this is limited by
- * max_pages_per_rpc, lock extent and chunk size. */
+ * max_pages_per_rpc, lock extent and chunk size.
+ */
pgoff_t oe_max_end;
/** waitqueue - for those who want to be notified if this extent's
- * state has changed. */
+ * state has changed.
+ */
wait_queue_head_t oe_waitq;
/** lock covering this extent */
struct cl_lock *oe_osclock;
/** terminator of this extent. Must be true if this extent is in IO. */
struct task_struct *oe_owner;
/** return value of writeback. If somebody is waiting for this extent,
- * this value can be known by outside world. */
+ * this value can be known by outside world.
+ */
int oe_rc;
/** max pages per rpc when this extent was created */
unsigned int oe_mppr;
diff --git a/drivers/staging/lustre/lustre/osc/osc_dev.c b/drivers/staging/lustre/lustre/osc/osc_dev.c
index 7078cc5..67cb6e4 100644
--- a/drivers/staging/lustre/lustre/osc/osc_dev.c
+++ b/drivers/staging/lustre/lustre/osc/osc_dev.c
@@ -123,7 +123,7 @@
struct osc_thread_info *info;
info = kmem_cache_alloc(osc_thread_kmem, GFP_NOFS | __GFP_ZERO);
- if (info == NULL)
+ if (!info)
info = ERR_PTR(-ENOMEM);
return info;
}
@@ -148,7 +148,7 @@
struct osc_session *info;
info = kmem_cache_alloc(osc_session_kmem, GFP_NOFS | __GFP_ZERO);
- if (info == NULL)
+ if (!info)
info = ERR_PTR(-ENOMEM);
return info;
}
@@ -228,7 +228,7 @@
/* Setup OSC OBD */
obd = class_name2obd(lustre_cfg_string(cfg, 0));
- LASSERT(obd != NULL);
+ LASSERT(obd);
rc = osc_setup(obd, cfg);
if (rc) {
osc_device_free(env, d);
diff --git a/drivers/staging/lustre/lustre/osc/osc_internal.h b/drivers/staging/lustre/lustre/osc/osc_internal.h
index a4c6146..ea695c2 100644
--- a/drivers/staging/lustre/lustre/osc/osc_internal.h
+++ b/drivers/staging/lustre/lustre/osc/osc_internal.h
@@ -47,11 +47,13 @@
enum async_flags {
ASYNC_READY = 0x1, /* ap_make_ready will not be called before this
- page is added to an rpc */
+ * page is added to an rpc
+ */
ASYNC_URGENT = 0x2, /* page must be put into an RPC before return */
ASYNC_COUNT_STABLE = 0x4, /* ap_refresh_count will not be called
- to give the caller a chance to update
- or cancel the size of the io */
+ * to give the caller a chance to update
+ * or cancel the size of the io
+ */
ASYNC_HP = 0x10,
};
diff --git a/drivers/staging/lustre/lustre/osc/osc_io.c b/drivers/staging/lustre/lustre/osc/osc_io.c
index abd0beb..2d8d93c 100644
--- a/drivers/staging/lustre/lustre/osc/osc_io.c
+++ b/drivers/staging/lustre/lustre/osc/osc_io.c
@@ -73,7 +73,7 @@
const struct cl_page_slice *slice;
slice = cl_page_at(page, &osc_device_type);
- LASSERT(slice != NULL);
+ LASSERT(slice);
return cl2osc_page(slice);
}
@@ -135,7 +135,7 @@
/* Top level IO. */
io = page->cp_owner;
- LASSERT(io != NULL);
+ LASSERT(io);
opg = osc_cl_page_osc(page);
oap = &opg->ops_oap;
@@ -266,13 +266,14 @@
* This implements OBD_BRW_CHECK logic from old client.
*/
- if (imp == NULL || imp->imp_invalid)
+ if (!imp || imp->imp_invalid)
result = -EIO;
if (result == 0 && oio->oi_lockless)
/* this page contains `invalid' data, but who cares?
* nobody can access the invalid data.
* in osc_io_commit_write(), we're going to write exact
- * [from, to) bytes of this page to OST. -jay */
+ * [from, to) bytes of this page to OST. -jay
+ */
cl_page_export(env, slice->cpl_page, 1);
return result;
@@ -349,7 +350,7 @@
__u64 start = *(__u64 *)cbdata;
slice = cl_page_at(page, &osc_device_type);
- LASSERT(slice != NULL);
+ LASSERT(slice);
ops = cl2osc_page(slice);
oap = &ops->ops_oap;
@@ -500,7 +501,7 @@
__u64 size = io->u.ci_setattr.sa_attr.lvb_size;
osc_trunc_check(env, io, oio, size);
- if (oio->oi_trunc != NULL) {
+ if (oio->oi_trunc) {
osc_cache_truncate_end(env, oio, cl2osc(obj));
oio->oi_trunc = NULL;
}
@@ -596,7 +597,8 @@
* send OST_SYNC RPC. This is bad because it causes extents
* to be written osc by osc. However, we usually start
* writeback before CL_FSYNC_ALL so this won't have any real
- * problem. */
+ * problem.
+ */
rc = osc_cache_wait_range(env, osc, start, end);
if (result == 0)
result = rc;
@@ -754,7 +756,7 @@
opg = osc_cl_page_osc(apage);
apage = opg->ops_cl.cpl_page; /* now apage is a sub-page */
lock = cl_lock_at_page(env, apage->cp_obj, apage, NULL, 1, 1);
- if (lock == NULL) {
+ if (!lock) {
struct cl_object_header *head;
struct cl_lock *scan;
@@ -770,10 +772,9 @@
}
olck = osc_lock_at(lock);
- LASSERT(olck != NULL);
- LASSERT(ergo(opg->ops_srvlock, olck->ols_lock == NULL));
+ LASSERT(ergo(opg->ops_srvlock, !olck->ols_lock));
/* check for lockless io. */
- if (olck->ols_lock != NULL) {
+ if (olck->ols_lock) {
oa->o_handle = olck->ols_lock->l_remote_handle;
oa->o_valid |= OBD_MD_FLHANDLE;
}
@@ -804,7 +805,7 @@
int result;
or = kmem_cache_alloc(osc_req_kmem, GFP_NOFS | __GFP_ZERO);
- if (or != NULL) {
+ if (or) {
cl_req_slice_add(req, &or->or_cl, dev, &osc_req_ops);
result = 0;
} else
diff --git a/drivers/staging/lustre/lustre/osc/osc_lock.c b/drivers/staging/lustre/lustre/osc/osc_lock.c
index 71f2810..87f3522 100644
--- a/drivers/staging/lustre/lustre/osc/osc_lock.c
+++ b/drivers/staging/lustre/lustre/osc/osc_lock.c
@@ -79,7 +79,7 @@
struct ldlm_lock *lock;
lock = ldlm_handle2lock(handle);
- if (lock != NULL)
+ if (lock)
LDLM_LOCK_PUT(lock);
return lock;
}
@@ -94,42 +94,40 @@
int handle_used = lustre_handle_is_used(&ols->ols_handle);
if (ergo(osc_lock_is_lockless(ols),
- ols->ols_locklessable && ols->ols_lock == NULL))
+ ols->ols_locklessable && !ols->ols_lock))
return 1;
/*
* If all the following "ergo"s are true, return 1, otherwise 0
*/
- if (!ergo(olock != NULL, handle_used))
+ if (!ergo(olock, handle_used))
return 0;
- if (!ergo(olock != NULL,
- olock->l_handle.h_cookie == ols->ols_handle.cookie))
+ if (!ergo(olock, olock->l_handle.h_cookie == ols->ols_handle.cookie))
return 0;
if (!ergo(handle_used,
- ergo(lock != NULL && olock != NULL, lock == olock) &&
- ergo(lock == NULL, olock == NULL)))
+ ergo(lock && olock, lock == olock) &&
+ ergo(!lock, !olock)))
return 0;
/*
* Check that ->ols_handle and ->ols_lock are consistent, but
* take into account that they are set at the different time.
*/
if (!ergo(ols->ols_state == OLS_CANCELLED,
- olock == NULL && !handle_used))
+ !olock && !handle_used))
return 0;
/*
* DLM lock is destroyed only after we have seen cancellation
* ast.
*/
- if (!ergo(olock != NULL && ols->ols_state < OLS_CANCELLED,
- ((olock->l_flags & LDLM_FL_DESTROYED) == 0)))
+ if (!ergo(olock && ols->ols_state < OLS_CANCELLED,
+ ((olock->l_flags & LDLM_FL_DESTROYED) == 0)))
return 0;
if (!ergo(ols->ols_state == OLS_GRANTED,
- olock != NULL &&
- olock->l_req_mode == olock->l_granted_mode &&
- ols->ols_hold))
+ olock && olock->l_req_mode == olock->l_granted_mode &&
+ ols->ols_hold))
return 0;
return 1;
}
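osc_lock_invariant() is built from ergo() (logical implication) and equi() (logical equivalence), so with the != NULL noise removed, ergo(olock, handle_used) reads as "if a dlm lock is attached, its handle is in use". A userspace model of the two predicates, assuming the usual libcfs-style definitions:

#include <assert.h>
#include <stddef.h>

/* assumed libcfs-style helpers:
 * ergo(a, b): a implies b, i.e. !a || b
 * equi(a, b): a and b are both true or both false
 */
#define ergo(a, b) (!(a) || (b))
#define equi(a, b) (!!(a) == !!(b))

int main(void)
{
        int handle_used = 0;
        void *lock = NULL;

        /* no lock attached: every per-lock property holds vacuously */
        assert(ergo(lock, handle_used));
        assert(equi(lock, handle_used));

        lock = &handle_used;
        handle_used = 1;

        /* lock attached: its handle must be in use */
        assert(ergo(lock, handle_used));
        assert(equi(lock, handle_used));
        return 0;
}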
@@ -149,14 +147,15 @@
spin_lock(&osc_ast_guard);
dlmlock = olck->ols_lock;
- if (dlmlock == NULL) {
+ if (!dlmlock) {
spin_unlock(&osc_ast_guard);
return;
}
olck->ols_lock = NULL;
/* wb(); --- for all who checks (ols->ols_lock != NULL) before
- * call to osc_lock_detach() */
+ * call to osc_lock_detach()
+ */
dlmlock->l_ast_data = NULL;
olck->ols_handle.cookie = 0ULL;
spin_unlock(&osc_ast_guard);
@@ -171,7 +170,8 @@
/* Must get the value under the lock to avoid possible races. */
old_kms = cl2osc(obj)->oo_oinfo->loi_kms;
/* Update the kms. Need to loop all granted locks.
- * Not a problem for the client */
+ * Not a problem for the client
+ */
attr->cat_kms = ldlm_extent_shift_kms(dlmlock, old_kms);
cl_object_attr_set(env, obj, attr, CAT_KMS);
@@ -247,7 +247,7 @@
* lock is destroyed immediately after upcall.
*/
osc_lock_unhold(ols);
- LASSERT(ols->ols_lock == NULL);
+ LASSERT(!ols->ols_lock);
LASSERT(atomic_read(&ols->ols_pageref) == 0 ||
atomic_read(&ols->ols_pageref) == _PAGEREF_MAGIC);
@@ -292,7 +292,7 @@
lock_res_and_lock(dlm_lock);
spin_lock(&osc_ast_guard);
olck = dlm_lock->l_ast_data;
- if (olck != NULL) {
+ if (olck) {
struct cl_lock *lock = olck->ols_cl.cls_lock;
/*
* If osc_lock holds a reference on ldlm lock, return it even
@@ -359,13 +359,13 @@
__u64 size;
dlmlock = olck->ols_lock;
- LASSERT(dlmlock != NULL);
/* re-grab LVB from a dlm lock under DLM spin-locks. */
*lvb = *(struct ost_lvb *)dlmlock->l_lvb_data;
size = lvb->lvb_size;
/* Extend KMS up to the end of this lock and no further
- * A lock on [x,y] means a KMS of up to y + 1 bytes! */
+ * A lock on [x,y] means a KMS of up to y + 1 bytes!
+ */
if (size > dlmlock->l_policy_data.l_extent.end)
size = dlmlock->l_policy_data.l_extent.end + 1;
if (size >= oinfo->loi_kms) {
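The off-by-one the comment warns about: a lock on bytes [x, y] justifies a known minimum size (kms) of at most y + 1. A toy version of the capping arithmetic above:

#include <stdint.h>
#include <stdio.h>

/* extend kms up to the end of a lock extent, capped so a lock on
 * [x, y] never yields a kms beyond y + 1 bytes
 */
static uint64_t kms_after_lock(uint64_t lvb_size, uint64_t extent_end)
{
        uint64_t size = lvb_size;

        if (size > extent_end)
                size = extent_end + 1;
        return size;
}

int main(void)
{
        /* file is 4096 bytes, lock covers bytes [0, 1023] -> kms 1024 */
        printf("%llu\n", (unsigned long long)kms_after_lock(4096, 1023));
        /* file is 512 bytes, same lock -> kms stays 512 */
        printf("%llu\n", (unsigned long long)kms_after_lock(512, 1023));
        return 0;
}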
@@ -429,7 +429,8 @@
* to take a semaphore on a parent lock. This is safe, because
* spin-locks are needed to protect consistency of
* dlmlock->l_*_mode and LVB, and we have finished processing
- * them. */
+ * them.
+ */
unlock_res_and_lock(dlmlock);
cl_lock_modify(env, lock, descr);
cl_lock_signal(env, lock);
@@ -444,12 +445,12 @@
struct ldlm_lock *dlmlock;
dlmlock = ldlm_handle2lock_long(&olck->ols_handle, 0);
- LASSERT(dlmlock != NULL);
+ LASSERT(dlmlock);
lock_res_and_lock(dlmlock);
spin_lock(&osc_ast_guard);
LASSERT(dlmlock->l_ast_data == olck);
- LASSERT(olck->ols_lock == NULL);
+ LASSERT(!olck->ols_lock);
olck->ols_lock = dlmlock;
spin_unlock(&osc_ast_guard);
@@ -470,7 +471,8 @@
olck->ols_hold = 1;
/* lock reference taken by ldlm_handle2lock_long() is owned by
- * osc_lock and released in osc_lock_detach() */
+ * osc_lock and released in osc_lock_detach()
+ */
lu_ref_add(&dlmlock->l_reference, "osc_lock", olck);
olck->ols_has_ref = 1;
}
@@ -508,10 +510,10 @@
struct ldlm_lock *dlmlock;
dlmlock = ldlm_handle2lock(&olck->ols_handle);
- if (dlmlock != NULL) {
+ if (dlmlock) {
lock_res_and_lock(dlmlock);
spin_lock(&osc_ast_guard);
- LASSERT(olck->ols_lock == NULL);
+ LASSERT(!olck->ols_lock);
dlmlock->l_ast_data = NULL;
olck->ols_handle.cookie = 0ULL;
spin_unlock(&osc_ast_guard);
@@ -548,7 +550,8 @@
/* For AGL case, the RPC sponsor may exit the cl_lock
* processing without wait() called before related OSC
* lock upcall(). So update the lock status according
- * to the enqueue result inside AGL upcall(). */
+ * to the enqueue result inside AGL upcall().
+ */
if (olck->ols_agl) {
lock->cll_flags |= CLF_FROM_UPCALL;
cl_wait_try(env, lock);
@@ -571,7 +574,8 @@
lu_ref_del(&lock->cll_reference, "upcall", lock);
/* This maybe the last reference, so must be called after
- * cl_lock_mutex_put(). */
+ * cl_lock_mutex_put().
+ */
cl_lock_put(env, lock);
cl_env_nested_put(&nest, env);
@@ -634,7 +638,7 @@
cancel = 0;
olck = osc_ast_data_get(dlmlock);
- if (olck != NULL) {
+ if (olck) {
lock = olck->ols_cl.cls_lock;
cl_lock_mutex_get(env, lock);
LINVRNT(osc_lock_invariant(olck));
@@ -786,17 +790,17 @@
env = cl_env_nested_get(&nest);
if (!IS_ERR(env)) {
olck = osc_ast_data_get(dlmlock);
- if (olck != NULL) {
+ if (olck) {
lock = olck->ols_cl.cls_lock;
cl_lock_mutex_get(env, lock);
/*
* ldlm_handle_cp_callback() copied LVB from request
* to lock->l_lvb_data, store it in osc_lock.
*/
- LASSERT(dlmlock->l_lvb_data != NULL);
+ LASSERT(dlmlock->l_lvb_data);
lock_res_and_lock(dlmlock);
olck->ols_lvb = *(struct ost_lvb *)dlmlock->l_lvb_data;
- if (olck->ols_lock == NULL) {
+ if (!olck->ols_lock) {
/*
* upcall (osc_lock_upcall()) hasn't yet been
* called. Do nothing now, upcall will bind
@@ -850,14 +854,15 @@
* environment.
*/
olck = osc_ast_data_get(dlmlock);
- if (olck != NULL) {
+ if (olck) {
lock = olck->ols_cl.cls_lock;
/* Do not grab the mutex of cl_lock for glimpse.
* See LU-1274 for details.
* BTW, it's okay for cl_lock to be cancelled during
* this period because server can handle this race.
* See ldlm_server_glimpse_ast() for details.
- * cl_lock_mutex_get(env, lock); */
+ * cl_lock_mutex_get(env, lock);
+ */
cap = &req->rq_pill;
req_capsule_extend(cap, &RQF_LDLM_GL_CALLBACK);
req_capsule_set_size(cap, &RMF_DLM_LVB, RCL_SERVER,
@@ -1017,7 +1022,8 @@
LASSERT(cl_lock_is_mutexed(lock));
/* make it enqueue anyway for glimpse lock, because we actually
- * don't need to cancel any conflicting locks. */
+ * don't need to cancel any conflicting locks.
+ */
if (olck->ols_glimpse)
return 0;
@@ -1051,7 +1057,8 @@
* imagine that client has PR lock on [0, 1000], and thread T0
* is doing lockless IO in [500, 1500] region. Concurrent
* thread T1 can see lockless data in [500, 1000], which is
- * wrong, because these data are possibly stale. */
+ * wrong, because these data are possibly stale.
+ */
if (!lockless && osc_lock_compatible(olck, scan_ols))
continue;
@@ -1074,7 +1081,7 @@
} else {
CDEBUG(D_DLMTRACE, "lock %p is conflicted with %p, will wait\n",
lock, conflict);
- LASSERT(lock->cll_conflict == NULL);
+ LASSERT(!lock->cll_conflict);
lu_ref_add(&conflict->cll_reference, "cancel-wait",
lock);
lock->cll_conflict = conflict;
@@ -1123,7 +1130,8 @@
struct ldlm_enqueue_info *einfo = &ols->ols_einfo;
/* lock will be passed as upcall cookie,
- * hold ref to prevent to be released. */
+ * hold ref to prevent to be released.
+ */
cl_lock_hold_add(env, lock, "upcall", lock);
/* a user for lock also */
cl_lock_user_add(env, lock);
@@ -1174,7 +1182,8 @@
} else if (olck->ols_agl) {
if (lock->cll_flags & CLF_FROM_UPCALL)
/* It is from enqueue RPC reply upcall for
- * updating state. Do not re-enqueue. */
+ * updating state. Do not re-enqueue.
+ */
return -ENAVAIL;
olck->ols_state = OLS_NEW;
} else {
@@ -1197,7 +1206,7 @@
}
LASSERT(equi(olck->ols_state >= OLS_UPCALL_RECEIVED &&
- lock->cll_error == 0, olck->ols_lock != NULL));
+ lock->cll_error == 0, olck->ols_lock));
return lock->cll_error ?: olck->ols_state >= OLS_GRANTED ? 0 : CLO_WAIT;
}
@@ -1235,7 +1244,8 @@
LASSERT(lock->cll_state == CLS_INTRANSIT);
LASSERT(lock->cll_users > 0);
/* set a flag for osc_dlm_blocking_ast0() to signal the
- * lock.*/
+ * lock.
+ */
olck->ols_ast_wait = 1;
rc = CLO_WAIT;
}
@@ -1306,7 +1316,7 @@
LASSERT(cl_lock_is_mutexed(lock));
LINVRNT(osc_lock_invariant(olck));
- if (dlmlock != NULL) {
+ if (dlmlock) {
int do_cancel;
discard = !!(dlmlock->l_flags & LDLM_FL_DISCARD_DATA);
@@ -1318,7 +1328,8 @@
/* Now that we're the only user of dlm read/write reference,
* mostly the ->l_readers + ->l_writers should be zero.
* However, there is a corner case.
- * See bug 18829 for details.*/
+ * See bug 18829 for details.
+ */
do_cancel = (dlmlock->l_readers == 0 &&
dlmlock->l_writers == 0);
dlmlock->l_flags |= LDLM_FL_CBPENDING;
@@ -1382,7 +1393,7 @@
if (state == CLS_HELD && slice->cls_lock->cll_state != CLS_HELD) {
struct osc_io *oio = osc_env_io(env);
- LASSERT(lock->ols_owner == NULL);
+ LASSERT(!lock->ols_owner);
lock->ols_owner = oio;
} else if (state != CLS_HELD)
lock->ols_owner = NULL;
@@ -1517,7 +1528,8 @@
lock->ols_owner = oio;
/* set the io to be lockless if this lock is for io's
- * host object */
+ * host object
+ */
if (cl_object_same(oio->oi_cl.cis_obj, slice->cls_obj))
oio->oi_lockless = 1;
}
@@ -1556,7 +1568,7 @@
int result;
clk = kmem_cache_alloc(osc_lock_kmem, GFP_NOFS | __GFP_ZERO);
- if (clk != NULL) {
+ if (clk) {
__u32 enqflags = lock->cll_descr.cld_enq_flags;
osc_lock_build_einfo(env, lock, clk, &clk->ols_einfo);
@@ -1599,7 +1611,7 @@
* doesn't matter because in the worst case we don't cancel a lock
* which we actually can, that's no harm.
*/
- if (olock != NULL &&
+ if (olock &&
atomic_add_return(_PAGEREF_MAGIC,
&olock->ols_pageref) != _PAGEREF_MAGIC) {
atomic_sub(_PAGEREF_MAGIC, &olock->ols_pageref);
diff --git a/drivers/staging/lustre/lustre/osc/osc_object.c b/drivers/staging/lustre/lustre/osc/osc_object.c
index fdd6219..60d8230 100644
--- a/drivers/staging/lustre/lustre/osc/osc_object.c
+++ b/drivers/staging/lustre/lustre/osc/osc_object.c
@@ -113,7 +113,7 @@
LASSERT(list_empty(&osc->oo_write_item));
LASSERT(list_empty(&osc->oo_read_item));
- LASSERT(osc->oo_root.rb_node == NULL);
+ LASSERT(!osc->oo_root.rb_node);
LASSERT(list_empty(&osc->oo_hp_exts));
LASSERT(list_empty(&osc->oo_urgent_exts));
LASSERT(list_empty(&osc->oo_rpc_exts));
@@ -256,7 +256,7 @@
struct lu_object *obj;
osc = kmem_cache_alloc(osc_object_kmem, GFP_NOFS | __GFP_ZERO);
- if (osc != NULL) {
+ if (osc) {
obj = osc2lu(osc);
lu_object_init(obj, NULL, dev);
osc->oo_cl.co_ops = &osc_ops;
diff --git a/drivers/staging/lustre/lustre/osc/osc_page.c b/drivers/staging/lustre/lustre/osc/osc_page.c
index 8943f0a..d11582a 100644
--- a/drivers/staging/lustre/lustre/osc/osc_page.c
+++ b/drivers/staging/lustre/lustre/osc/osc_page.c
@@ -51,111 +51,12 @@
* @{
*/
-/*
- * Comment out osc_page_protected because it may sleep inside the
- * the client_obd_list_lock.
- * client_obd_list_lock -> osc_ap_completion -> osc_completion ->
- * -> osc_page_protected -> osc_page_is_dlocked -> osc_match_base
- * -> ldlm_lock_match -> sptlrpc_import_check_ctx -> sleep.
- */
-#if 0
-static int osc_page_is_dlocked(const struct lu_env *env,
- const struct osc_page *opg,
- enum cl_lock_mode mode, int pending, int unref)
-{
- struct cl_page *page;
- struct osc_object *obj;
- struct osc_thread_info *info;
- struct ldlm_res_id *resname;
- struct lustre_handle *lockh;
- ldlm_policy_data_t *policy;
- ldlm_mode_t dlmmode;
- __u64 flags;
-
- might_sleep();
-
- info = osc_env_info(env);
- resname = &info->oti_resname;
- policy = &info->oti_policy;
- lockh = &info->oti_handle;
- page = opg->ops_cl.cpl_page;
- obj = cl2osc(opg->ops_cl.cpl_obj);
-
- flags = LDLM_FL_TEST_LOCK | LDLM_FL_BLOCK_GRANTED;
- if (pending)
- flags |= LDLM_FL_CBPENDING;
-
- dlmmode = osc_cl_lock2ldlm(mode) | LCK_PW;
- osc_lock_build_res(env, obj, resname);
- osc_index2policy(policy, page->cp_obj, page->cp_index, page->cp_index);
- return osc_match_base(osc_export(obj), resname, LDLM_EXTENT, policy,
- dlmmode, &flags, NULL, lockh, unref);
-}
-
-/**
- * Checks an invariant that a page in the cache is covered by a lock, as
- * needed.
- */
-static int osc_page_protected(const struct lu_env *env,
- const struct osc_page *opg,
- enum cl_lock_mode mode, int unref)
-{
- struct cl_object_header *hdr;
- struct cl_lock *scan;
- struct cl_page *page;
- struct cl_lock_descr *descr;
- int result;
-
- LINVRNT(!opg->ops_temp);
-
- page = opg->ops_cl.cpl_page;
- if (page->cp_owner != NULL &&
- cl_io_top(page->cp_owner)->ci_lockreq == CILR_NEVER)
- /*
- * If IO is done without locks (liblustre, or lloop), lock is
- * not required.
- */
- result = 1;
- else
- /* otherwise check for a DLM lock */
- result = osc_page_is_dlocked(env, opg, mode, 1, unref);
- if (result == 0) {
- /* maybe this page is a part of a lockless io? */
- hdr = cl_object_header(opg->ops_cl.cpl_obj);
- descr = &osc_env_info(env)->oti_descr;
- descr->cld_mode = mode;
- descr->cld_start = page->cp_index;
- descr->cld_end = page->cp_index;
- spin_lock(&hdr->coh_lock_guard);
- list_for_each_entry(scan, &hdr->coh_locks, cll_linkage) {
- /*
- * Lock-less sub-lock has to be either in HELD state
- * (when io is actively going on), or in CACHED state,
- * when top-lock is being unlocked:
- * cl_io_unlock()->cl_unuse()->...->lov_lock_unuse().
- */
- if ((scan->cll_state == CLS_HELD ||
- scan->cll_state == CLS_CACHED) &&
- cl_lock_ext_match(&scan->cll_descr, descr)) {
- struct osc_lock *olck;
-
- olck = osc_lock_at(scan);
- result = osc_lock_is_lockless(olck);
- break;
- }
- }
- spin_unlock(&hdr->coh_lock_guard);
- }
- return result;
-}
-#else
static int osc_page_protected(const struct lu_env *env,
const struct osc_page *opg,
enum cl_lock_mode mode, int unref)
{
return 1;
}
-#endif
/*****************************************************************************
*
@@ -168,7 +69,7 @@
struct osc_page *opg = cl2osc_page(slice);
CDEBUG(D_TRACE, "%p\n", opg);
- LASSERT(opg->ops_lock == NULL);
+ LASSERT(!opg->ops_lock);
}
static void osc_page_transfer_get(struct osc_page *opg, const char *label)
@@ -204,7 +105,8 @@
struct osc_object *obj = cl2osc(opg->ops_cl.cpl_obj);
/* ops_lru and ops_inflight share the same field, so take it from LRU
- * first and then use it as inflight. */
+ * first and then use it as inflight.
+ */
osc_lru_del(osc_cli(obj), opg, false);
spin_lock(&obj->oo_seatbelt);
@@ -232,9 +134,10 @@
/* for sync write, kernel will wait for this page to be flushed before
* osc_io_end() is called, so release it earlier.
- * for mkwrite(), it's known there is no further pages. */
+ * for mkwrite(), it's known there are no further pages.
+ */
if (cl_io_is_sync_write(io) || cl_io_is_mkwrite(io)) {
- if (oio->oi_active != NULL) {
+ if (oio->oi_active) {
osc_extent_release(env, oio->oi_active);
oio->oi_active = NULL;
}
@@ -258,7 +161,7 @@
struct osc_lock *olock;
int rc;
- LASSERT(opg->ops_lock == NULL);
+ LASSERT(!opg->ops_lock);
olock = osc_lock_at(lock);
if (atomic_inc_return(&olock->ols_pageref) <= 0) {
@@ -278,7 +181,7 @@
struct cl_lock *lock = opg->ops_lock;
struct osc_lock *olock;
- LASSERT(lock != NULL);
+ LASSERT(lock);
olock = osc_lock_at(lock);
atomic_dec(&olock->ols_pageref);
@@ -296,7 +199,7 @@
lock = cl_lock_at_page(env, slice->cpl_obj, slice->cpl_page,
NULL, 1, 0);
- if (lock != NULL) {
+ if (lock) {
if (osc_page_addref_lock(env, cl2osc_page(slice), lock) == 0)
result = -EBUSY;
cl_lock_put(env, lock);
@@ -424,7 +327,7 @@
}
spin_lock(&obj->oo_seatbelt);
- if (opg->ops_submitter != NULL) {
+ if (opg->ops_submitter) {
LASSERT(!list_empty(&opg->ops_inflight));
list_del_init(&opg->ops_inflight);
opg->ops_submitter = NULL;
@@ -458,7 +361,8 @@
LINVRNT(osc_page_protected(env, opg, CLM_READ, 0));
/* Check if the transferring against this page
- * is completed, or not even queued. */
+ * is completed, or not even queued.
+ */
if (opg->ops_transfer_pinned)
/* FIXME: may not be interrupted.. */
rc = osc_cancel_async_page(env, opg);
@@ -522,7 +426,8 @@
* creates temporary pages outside of a lock.
*/
/* ops_inflight and ops_lru are the same field, but it doesn't
- * hurt to initialize it twice :-) */
+ * hurt to initialize it twice :-)
+ */
INIT_LIST_HEAD(&opg->ops_inflight);
INIT_LIST_HEAD(&opg->ops_lru);
@@ -581,7 +486,8 @@
static DECLARE_WAIT_QUEUE_HEAD(osc_lru_waitq);
static atomic_t osc_lru_waiters = ATOMIC_INIT(0);
/* LRU pages are freed in batch mode. OSC should at least free this
- * number of pages to avoid running out of LRU budget, and.. */
+ * number of pages to avoid running out of LRU budget, and..
+ */
static const int lru_shrink_min = 2 << (20 - PAGE_CACHE_SHIFT); /* 2M */
/* free this number at most otherwise it will take too long time to finish. */
static const int lru_shrink_max = 32 << (20 - PAGE_CACHE_SHIFT); /* 32M */
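Both constants are byte budgets expressed in pages: 2 << (20 - PAGE_CACHE_SHIFT) is 2 MB worth of pages and 32 << (20 - PAGE_CACHE_SHIFT) is 32 MB. A quick check of the arithmetic, assuming 4 KB pages (PAGE_CACHE_SHIFT == 12, an assumption, not a value from this driver):

#include <stdio.h>

#define PAGE_SHIFT_ASSUMED 12   /* 4 KB pages */

int main(void)
{
        int lru_shrink_min = 2 << (20 - PAGE_SHIFT_ASSUMED);    /* 2 MB */
        int lru_shrink_max = 32 << (20 - PAGE_SHIFT_ASSUMED);   /* 32 MB */

        /* 512 and 8192 pages respectively with 4 KB pages */
        printf("min=%d pages (%d MB)\n", lru_shrink_min,
               (lru_shrink_min << PAGE_SHIFT_ASSUMED) >> 20);
        printf("max=%d pages (%d MB)\n", lru_shrink_max,
               (lru_shrink_max << PAGE_SHIFT_ASSUMED) >> 20);
        return 0;
}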
@@ -590,7 +496,8 @@
* we should free slots aggressively. In this way, slots are freed in a steady
* step to maintain fairness among OSCs.
*
- * Return how many LRU pages should be freed. */
+ * Return how many LRU pages should be freed.
+ */
static int osc_cache_too_much(struct client_obd *cli)
{
struct cl_client_cache *cache = cli->cl_cache;
@@ -602,7 +509,8 @@
return min(pages, lru_shrink_max);
/* if it's going to run out of LRU slots, we should free some, but not
- * too much to maintain fairness among OSCs. */
+ * too much to maintain fairness among OSCs.
+ */
if (atomic_read(cli->cl_lru_left) < cache->ccc_lru_max >> 4) {
unsigned long tmp;
@@ -630,7 +538,8 @@
/* free LRU page only if nobody is using it.
* This check is necessary to avoid freeing the pages
* having already been removed from LRU and pinned
- * for IO. */
+ * for IO.
+ */
if (!cl_page_in_use(page)) {
cl_page_unmap(env, io, page);
cl_page_discard(env, io, page);
@@ -688,14 +597,14 @@
continue;
}
- LASSERT(page->cp_obj != NULL);
+ LASSERT(page->cp_obj);
if (clobj != page->cp_obj) {
struct cl_object *tmp = page->cp_obj;
cl_object_get(tmp);
client_obd_list_unlock(&cli->cl_lru_list_lock);
- if (clobj != NULL) {
+ if (clobj) {
count -= discard_pagevec(env, io, pvec, index);
index = 0;
@@ -720,11 +629,13 @@
/* move this page to the end of list as it will be discarded
* soon. The page will be finally removed from LRU list in
- * osc_page_delete(). */
+ * osc_page_delete().
+ */
list_move_tail(&opg->ops_lru, &cli->cl_lru_list);
/* it's okay to grab a refcount here w/o holding lock because
- * it has to grab cl_lru_list_lock to delete the page. */
+ * it has to grab cl_lru_list_lock to delete the page.
+ */
cl_page_get(page);
pvec[index++] = page;
if (++count >= target)
@@ -740,7 +651,7 @@
}
client_obd_list_unlock(&cli->cl_lru_list_lock);
- if (clobj != NULL) {
+ if (clobj) {
count -= discard_pagevec(env, io, pvec, index);
cl_io_fini(env, io);
@@ -775,7 +686,8 @@
}
/* delete page from LRUlist. The page can be deleted from LRUlist for two
- * reasons: redirtied or deleted from page cache. */
+ * reasons: redirtied or deleted from page cache.
+ */
static void osc_lru_del(struct client_obd *cli, struct osc_page *opg, bool del)
{
if (opg->ops_in_lru) {
@@ -797,7 +709,8 @@
* this osc occupies too many LRU pages and kernel is
* stealing one of them.
* cl_lru_shrinkers is to avoid recursive call in case
- * we're already in the context of osc_lru_shrink(). */
+ * we're already in the context of osc_lru_shrink().
+ */
if (atomic_read(&cli->cl_lru_shrinkers) == 0 &&
!memory_pressure_get())
osc_lru_shrink(cli, osc_cache_too_much(cli));
@@ -819,7 +732,7 @@
int max_scans;
int rc;
- LASSERT(cache != NULL);
+ LASSERT(cache);
rc = osc_lru_shrink(cli, lru_shrink_min);
if (rc != 0) {
@@ -834,7 +747,8 @@
atomic_read(&cli->cl_lru_busy));
/* Reclaim LRU slots from other client_obd as it can't free enough
- * from its own. This should rarely happen. */
+ * from its own. This should rarely happen.
+ */
spin_lock(&cache->ccc_lru_lock);
LASSERT(!list_empty(&cache->ccc_lru));
@@ -875,7 +789,7 @@
struct client_obd *cli = osc_cli(obj);
int rc = 0;
- if (cli->cl_cache == NULL) /* shall not be in LRU */
+ if (!cli->cl_cache) /* shall not be in LRU */
return 0;
LASSERT(atomic_read(cli->cl_lru_left) >= 0);
@@ -892,7 +806,8 @@
cond_resched();
/* slowest case, all of caching pages are busy, notifying
- * other OSCs that we're lack of LRU slots. */
+ * other OSCs that we're short of LRU slots.
+ */
atomic_inc(&osc_lru_waiters);
gen = atomic_read(&cli->cl_lru_in_list);
diff --git a/drivers/staging/lustre/lustre/osc/osc_quota.c b/drivers/staging/lustre/lustre/osc/osc_quota.c
index e70e796..208f72b 100644
--- a/drivers/staging/lustre/lustre/osc/osc_quota.c
+++ b/drivers/staging/lustre/lustre/osc/osc_quota.c
@@ -13,11 +13,6 @@
* General Public License version 2 for more details (a copy is included
* in the LICENSE file that accompanied this code).
*
- * You should have received a copy of the GNU General Public License
- * version 2 along with this program; if not, write to the
- * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
- * Boston, MA 021110-1307, USA
- *
* GPL HEADER END
*/
/*
@@ -36,7 +31,7 @@
struct osc_quota_info *oqi;
oqi = kmem_cache_alloc(osc_quota_kmem, GFP_NOFS | __GFP_ZERO);
- if (oqi != NULL)
+ if (oqi)
oqi->oqi_id = id;
return oqi;
@@ -52,10 +47,12 @@
oqi = cfs_hash_lookup(cli->cl_quota_hash[type], &qid[type]);
if (oqi) {
/* do not try to access oqi here, it could have been
- * freed by osc_quota_setdq() */
+ * freed by osc_quota_setdq()
+ */
/* the slot is busy, the user is about to run out of
- * quota space on this OST */
+ * quota space on this OST
+ */
CDEBUG(D_QUOTA, "chkdq found noquota for %s %d\n",
type == USRQUOTA ? "user" : "grout", qid[type]);
return NO_QUOTA;
@@ -89,12 +86,13 @@
oqi = cfs_hash_lookup(cli->cl_quota_hash[type], &qid[type]);
if ((flags & FL_QUOTA_FLAG(type)) != 0) {
/* This ID is getting close to its quota limit, let's
- * switch to sync I/O */
- if (oqi != NULL)
+ * switch to sync I/O
+ */
+ if (oqi)
continue;
oqi = osc_oqi_alloc(qid[type]);
- if (oqi == NULL) {
+ if (!oqi) {
rc = -ENOMEM;
break;
}
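osc_quota_setdq() walks both quota types and either records an ID as near its limit (forcing sync I/O) or drops it from the hash once the flag clears. A compact model of that per-type decision, with a boolean array standing in for cl_quota_hash and an illustrative FL_QUOTA_FLAG():

#include <stdbool.h>
#include <stdio.h>

enum { USRQUOTA, GRPQUOTA, MAXQUOTAS };

/* stand-in for the driver's per-type flag bits */
#define FL_QUOTA_FLAG(type) (1u << (type))

/* one slot per type for a single ID; the driver keys a cfs hash by ID */
static bool noquota[MAXQUOTAS];

static void setdq(unsigned int flags)
{
        int type;

        for (type = 0; type < MAXQUOTAS; type++) {
                if (flags & FL_QUOTA_FLAG(type))
                        noquota[type] = true;   /* near limit: force sync I/O */
                else
                        noquota[type] = false;  /* off the hook: forget it */
        }
}

int main(void)
{
        setdq(FL_QUOTA_FLAG(GRPQUOTA));
        printf("user=%d group=%d\n", noquota[USRQUOTA], noquota[GRPQUOTA]);
        return 0;
}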
@@ -113,8 +111,9 @@
qid[type], rc);
} else {
/* This ID is now off the hook, let's remove it from
- * the hash table */
- if (oqi == NULL)
+ * the hash table
+ */
+ if (!oqi)
continue;
oqi = cfs_hash_del_key(cli->cl_quota_hash[type],
@@ -147,7 +146,7 @@
struct osc_quota_info *oqi;
u32 uid;
- LASSERT(key != NULL);
+ LASSERT(key);
uid = *((u32 *)key);
oqi = hlist_entry(hnode, struct osc_quota_info, oqi_hash);
@@ -218,7 +217,7 @@
CFS_HASH_MAX_THETA,
"a_hash_ops,
CFS_HASH_DEFAULT);
- if (cli->cl_quota_hash[type] == NULL)
+ if (!cli->cl_quota_hash[type])
break;
}
@@ -252,7 +251,7 @@
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_OST_QUOTACTL, LUSTRE_OST_VERSION,
OST_QUOTACTL);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
oqc = req_capsule_client_get(&req->rq_pill, &RMF_OBD_QUOTACTL);
@@ -294,7 +293,7 @@
req = ptlrpc_request_alloc_pack(class_exp2cliimp(exp),
&RQF_OST_QUOTACHECK, LUSTRE_OST_VERSION,
OST_QUOTACHECK);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
body = req_capsule_client_get(&req->rq_pill, &RMF_OBD_QUOTACTL);
@@ -302,8 +301,8 @@
ptlrpc_request_set_replen(req);
- /* the next poll will find -ENODATA, that means quotacheck is
- * going on */
+ /* the next poll will find -ENODATA, that means quotacheck is going on
+ */
cli->cl_qchk_stat = -ENODATA;
rc = ptlrpc_queue_wait(req);
if (rc)
diff --git a/drivers/staging/lustre/lustre/osc/osc_request.c b/drivers/staging/lustre/lustre/osc/osc_request.c
index 3ae00fc..10f262f 100644
--- a/drivers/staging/lustre/lustre/osc/osc_request.c
+++ b/drivers/staging/lustre/lustre/osc/osc_request.c
@@ -104,7 +104,6 @@
static void osc_release_ppga(struct brw_page **ppga, u32 count);
static int brw_interpret(const struct lu_env *env,
struct ptlrpc_request *req, void *data, int rc);
-static int osc_cleanup(struct obd_device *obd);
/* Pack OSC object metadata for disk storage (LE byte order). */
static int osc_packmd(struct obd_export *exp, struct lov_mds_md **lmmp,
@@ -113,18 +112,18 @@
int lmm_size;
lmm_size = sizeof(**lmmp);
- if (lmmp == NULL)
+ if (!lmmp)
return lmm_size;
- if (*lmmp != NULL && lsm == NULL) {
+ if (*lmmp && !lsm) {
kfree(*lmmp);
*lmmp = NULL;
return 0;
- } else if (unlikely(lsm != NULL && ostid_id(&lsm->lsm_oi) == 0)) {
+ } else if (unlikely(lsm && ostid_id(&lsm->lsm_oi) == 0)) {
return -EBADF;
}
- if (*lmmp == NULL) {
+ if (!*lmmp) {
*lmmp = kzalloc(lmm_size, GFP_NOFS);
if (!*lmmp)
return -ENOMEM;
@@ -143,7 +142,7 @@
int lsm_size;
struct obd_import *imp = class_exp2cliimp(exp);
- if (lmm != NULL) {
+ if (lmm) {
if (lmm_bytes < sizeof(*lmm)) {
CERROR("%s: lov_mds_md too small: %d, need %d\n",
exp->exp_obd->obd_name, lmm_bytes,
@@ -160,23 +159,23 @@
}
lsm_size = lov_stripe_md_size(1);
- if (lsmp == NULL)
+ if (!lsmp)
return lsm_size;
- if (*lsmp != NULL && lmm == NULL) {
+ if (*lsmp && !lmm) {
kfree((*lsmp)->lsm_oinfo[0]);
kfree(*lsmp);
*lsmp = NULL;
return 0;
}
- if (*lsmp == NULL) {
+ if (!*lsmp) {
*lsmp = kzalloc(lsm_size, GFP_NOFS);
- if (unlikely(*lsmp == NULL))
+ if (unlikely(!*lsmp))
return -ENOMEM;
(*lsmp)->lsm_oinfo[0] = kzalloc(sizeof(struct lov_oinfo),
GFP_NOFS);
- if (unlikely((*lsmp)->lsm_oinfo[0] == NULL)) {
+ if (unlikely(!(*lsmp)->lsm_oinfo[0])) {
kfree(*lsmp);
return -ENOMEM;
}
@@ -185,11 +184,11 @@
return -EBADF;
}
- if (lmm != NULL)
+ if (lmm)
/* XXX zero *lsmp? */
ostid_le_to_cpu(&lmm->lmm_oi, &(*lsmp)->lsm_oi);
- if (imp != NULL &&
+ if (imp &&
(imp->imp_connect_data.ocd_connect_flags & OBD_CONNECT_MAXBYTES))
(*lsmp)->lsm_maxbytes = imp->imp_connect_data.ocd_maxbytes;
else
@@ -246,7 +245,7 @@
int rc;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_OST_GETATTR);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_OST_VERSION, OST_GETATTR);
@@ -276,7 +275,7 @@
int rc;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_OST_GETATTR);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_OST_VERSION, OST_GETATTR);
@@ -294,7 +293,7 @@
goto out;
body = req_capsule_server_get(&req->rq_pill, &RMF_OST_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EPROTO;
goto out;
}
@@ -321,7 +320,7 @@
LASSERT(oinfo->oi_oa->o_valid & OBD_MD_FLGROUP);
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_OST_SETATTR);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_OST_VERSION, OST_SETATTR);
@@ -339,7 +338,7 @@
goto out;
body = req_capsule_server_get(&req->rq_pill, &RMF_OST_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EPROTO;
goto out;
}
@@ -362,7 +361,7 @@
goto out;
body = req_capsule_server_get(&req->rq_pill, &RMF_OST_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EPROTO;
goto out;
}
@@ -384,7 +383,7 @@
int rc;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_OST_SETATTR);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_OST_VERSION, OST_SETATTR);
@@ -451,7 +450,7 @@
}
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_OST_CREATE);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto out;
}
@@ -482,7 +481,7 @@
goto out_req;
body = req_capsule_server_get(&req->rq_pill, &RMF_OST_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EPROTO;
goto out_req;
}
@@ -500,7 +499,7 @@
lsm->lsm_oi = oa->o_oi;
*ea = lsm;
- if (oti != NULL) {
+ if (oti) {
oti->oti_transno = lustre_msg_get_transno(req->rq_repmsg);
if (oa->o_valid & OBD_MD_FLCOOKIE) {
@@ -530,7 +529,7 @@
int rc;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_OST_PUNCH);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_OST_VERSION, OST_PUNCH);
@@ -573,7 +572,7 @@
goto out;
body = req_capsule_server_get(&req->rq_pill, &RMF_OST_BODY);
- if (body == NULL) {
+ if (!body) {
CERROR("can't unpack ost_body\n");
rc = -EPROTO;
goto out;
@@ -595,7 +594,7 @@
int rc;
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_OST_SYNC);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_OST_VERSION, OST_SYNC);
@@ -629,10 +628,11 @@
/* Find and cancel locally locks matched by @mode in the resource found by
* @objid. Found locks are added into @cancel list. Returns the amount of
- * locks added to @cancels list. */
+ * locks added to @cancels list.
+ */
static int osc_resource_get_unused(struct obd_export *exp, struct obdo *oa,
struct list_head *cancels,
- ldlm_mode_t mode, __u64 lock_flags)
+ enum ldlm_mode mode, __u64 lock_flags)
{
struct ldlm_namespace *ns = exp->exp_obd->obd_namespace;
struct ldlm_res_id res_id;
@@ -644,13 +644,14 @@
*
* This distinguishes from a case when ELC is not supported originally,
* when we still want to cancel locks in advance and just cancel them
- * locally, without sending any RPC. */
+ * locally, without sending any RPC.
+ */
if (exp_connect_cancelset(exp) && !ns_connect_cancelset(ns))
return 0;
ostid_build_res_name(&oa->o_oi, &res_id);
res = ldlm_resource_get(ns, NULL, &res_id, 0, 0);
- if (res == NULL)
+ if (!res)
return 0;
LDLM_RESOURCE_ADDREF(res);
@@ -723,7 +724,8 @@
* If the client dies, or the OST is down when the object should be destroyed,
* the records are not cancelled, and when the OST reconnects to the MDS next,
* it will retrieve the llog unlink logs and then send the log cancellation
- * cookies to the MDS after committing destroy transactions. */
+ * cookies to the MDS after committing destroy transactions.
+ */
static int osc_destroy(const struct lu_env *env, struct obd_export *exp,
struct obdo *oa, struct lov_stripe_md *ea,
struct obd_trans_info *oti, struct obd_export *md_export)
@@ -743,7 +745,7 @@
LDLM_FL_DISCARD_DATA);
req = ptlrpc_request_alloc(class_exp2cliimp(exp), &RQF_OST_DESTROY);
- if (req == NULL) {
+ if (!req) {
ldlm_lock_list_put(&cancels, l_bl_ast, count);
return -ENOMEM;
}
@@ -758,7 +760,7 @@
req->rq_request_portal = OST_IO_PORTAL; /* bug 7198 */
ptlrpc_at_set_req_timeout(req);
- if (oti != NULL && oa->o_valid & OBD_MD_FLCOOKIE)
+ if (oti && oa->o_valid & OBD_MD_FLCOOKIE)
oa->o_lcookie = *oti->oti_logcookies;
body = req_capsule_client_get(&req->rq_pill, &RMF_OST_BODY);
LASSERT(body);
@@ -769,7 +771,8 @@
/* If osc_destroy is for destroying the unlink orphan,
* sent from MDT to OST, which should not be blocked here,
* because the process might be triggered by ptlrpcd, and
- * it is not good to block ptlrpcd thread (b=16006)*/
+ * it is not good to block ptlrpcd thread (b=16006)
+ */
if (!(oa->o_flags & OBD_FL_DELORPHAN)) {
req->rq_interpret_reply = osc_destroy_interpret;
if (!osc_can_send_destroy(cli)) {
@@ -810,7 +813,8 @@
(long)(obd_max_dirty_pages + 1))) {
/* The atomic_read() allowing the atomic_inc() are
* not covered by a lock thus they may safely race and trip
- * this CERROR() unless we add in a small fudge factor (+1). */
+ * this CERROR() unless we add in a small fudge factor (+1).
+ */
CERROR("dirty %d - %d > system dirty_max %d\n",
atomic_read(&obd_dirty_pages),
atomic_read(&obd_dirty_transit_pages),
@@ -839,7 +843,7 @@
{
cli->cl_next_shrink_grant =
cfs_time_shift(cli->cl_grant_shrink_interval);
- CDEBUG(D_CACHE, "next time %ld to shrink grant \n",
+ CDEBUG(D_CACHE, "next time %ld to shrink grant\n",
cli->cl_next_shrink_grant);
}
@@ -900,7 +904,8 @@
/* Shrink the current grant, either from some large amount to enough for a
* full set of in-flight RPCs, or if we have already shrunk to that limit
* then to enough for a single RPC. This avoids keeping more grant than
- * needed, and avoids shrinking the grant piecemeal. */
+ * needed, and avoids shrinking the grant piecemeal.
+ */
static int osc_shrink_grant(struct client_obd *cli)
{
__u64 target_bytes = (cli->cl_max_rpcs_in_flight + 1) *
@@ -922,7 +927,8 @@
client_obd_list_lock(&cli->cl_loi_list_lock);
/* Don't shrink if we are already above or below the desired limit
* We don't want to shrink below a single RPC, as that will negatively
- * impact block allocation and long-term performance. */
+ * impact block allocation and long-term performance.
+ */
if (target_bytes < cli->cl_max_pages_per_rpc << PAGE_CACHE_SHIFT)
target_bytes = cli->cl_max_pages_per_rpc << PAGE_CACHE_SHIFT;
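The target keeps grant for a full set of in-flight RPCs plus one, and the clamp below it stops shrinking at a single RPC's worth. Worked through with assumed tunables (8 RPCs in flight, 256 pages per RPC, 4 KB pages), none of which come from this driver:

#include <stdio.h>

int main(void)
{
        unsigned long max_rpcs_in_flight = 8;
        unsigned long max_pages_per_rpc = 256;
        unsigned int page_shift = 12;   /* 4 KB pages */

        unsigned long long target =
                (unsigned long long)(max_rpcs_in_flight + 1) *
                (max_pages_per_rpc << page_shift);
        unsigned long long one_rpc = max_pages_per_rpc << page_shift;

        if (target < one_rpc)   /* never shrink below a single RPC */
                target = one_rpc;

        /* prints 9437184 bytes (9 MB) with the values above */
        printf("target grant: %llu bytes (%llu MB)\n", target, target >> 20);
        return 0;
}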
@@ -970,7 +976,8 @@
if (cfs_time_aftereq(time, next_shrink - 5 * CFS_TICK)) {
/* Get the current RPC size directly, instead of going via:
* cli_brw_size(obd->u.cli.cl_import->imp_obd->obd_self_export)
- * Keep comment here so that it can be found by searching. */
+ * Keep comment here so that it can be found by searching.
+ */
int brw_size = client->cl_max_pages_per_rpc << PAGE_CACHE_SHIFT;
if (client->cl_import->imp_state == LUSTRE_IMP_FULL &&
@@ -1007,7 +1014,7 @@
client->cl_import->imp_obd->obd_name, rc);
return rc;
}
- CDEBUG(D_CACHE, "add grant client %s \n",
+ CDEBUG(D_CACHE, "add grant client %s\n",
client->cl_import->imp_obd->obd_name);
osc_update_next_shrink(client);
return 0;
@@ -1040,7 +1047,8 @@
cli->cl_import->imp_obd->obd_name, cli->cl_avail_grant,
ocd->ocd_grant, cli->cl_dirty);
/* workaround for servers which do not have the patch from
- * LU-2679 */
+ * LU-2679
+ */
cli->cl_avail_grant = ocd->ocd_grant;
}
@@ -1060,7 +1068,8 @@
/* We assume that the reason this OSC got a short read is because it read
* beyond the end of a stripe file; i.e. lustre is reading a sparse file
* via the LOV, and it _knows_ it's reading inside the file, it's just that
- * this stripe never got written at or beyond this stripe offset yet. */
+ * this stripe never got written at or beyond this stripe offset yet.
+ */
static void handle_short_read(int nob_read, u32 page_count,
struct brw_page **pga)
{
@@ -1106,7 +1115,7 @@
remote_rcs = req_capsule_server_sized_get(&req->rq_pill, &RMF_RCS,
sizeof(*remote_rcs) *
niocount);
- if (remote_rcs == NULL) {
+ if (!remote_rcs) {
CDEBUG(D_INFO, "Missing/short RC vector on BRW_WRITE reply\n");
return -EPROTO;
}
@@ -1139,7 +1148,8 @@
OBD_BRW_SYNC | OBD_BRW_ASYNC|OBD_BRW_NOQUOTA);
/* warn if we try to combine flags that we don't know to be
- * safe to combine */
+ * safe to combine
+ */
if (unlikely((p1->flag & mask) != (p2->flag & mask))) {
CWARN("Saw flags 0x%x and 0x%x in the same brw, please report this at http://bugs.whamcloud.com/\n",
p1->flag, p2->flag);
@@ -1152,7 +1162,7 @@
static u32 osc_checksum_bulk(int nob, u32 pg_count,
struct brw_page **pga, int opc,
- cksum_type_t cksum_type)
+ enum cksum_type cksum_type)
{
__u32 cksum;
int i = 0;
@@ -1174,7 +1184,8 @@
int count = pga[i]->count > nob ? nob : pga[i]->count;
/* corrupt the data before we compute the checksum, to
- * simulate an OST->client data error */
+ * simulate an OST->client data error
+ */
if (i == 0 && opc == OST_READ &&
OBD_FAIL_CHECK(OBD_FAIL_OSC_CHECKSUM_RECEIVE)) {
unsigned char *ptr = kmap(pga[i]->pg);
@@ -1205,7 +1216,8 @@
cfs_crypto_hash_final(hdesc, NULL, NULL);
/* For sending we only compute the wrong checksum instead
- * of corrupting the data so it is still correct on a redo */
+ * of corrupting the data so it is still correct on a redo
+ */
if (opc == OST_WRITE && OBD_FAIL_CHECK(OBD_FAIL_OSC_CHECKSUM_SEND))
cksum++;
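The fault-injection asymmetry is deliberate: reads corrupt the page before checksumming so the client must detect it, while writes only nudge the checksum, leaving the data correct for a later redo. A sketch of the write-side behaviour, with a toy checksum in place of the crypto hash:

#include <stdint.h>
#include <stdio.h>

/* toy checksum standing in for the cfs_crypto_hash based one */
static uint32_t toy_cksum(const unsigned char *buf, int len)
{
        uint32_t c = 0;

        while (len--)
                c = c * 31 + *buf++;
        return c;
}

int main(void)
{
        unsigned char data[] = "payload stays intact";
        int inject_send_fault = 1;      /* models the CHECKSUM_SEND failure hook */
        uint32_t cksum = toy_cksum(data, sizeof(data));

        if (inject_send_fault)
                cksum++;        /* wrong checksum, untouched data: safe on redo */

        printf("cksum=%u (data unchanged)\n", cksum);
        return 0;
}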
@@ -1244,7 +1256,7 @@
opc = OST_READ;
req = ptlrpc_request_alloc(cli->cl_import, &RQF_OST_BRW_READ);
}
- if (req == NULL)
+ if (!req)
return -ENOMEM;
for (niocount = i = 1; i < page_count; i++) {
@@ -1266,7 +1278,8 @@
req->rq_request_portal = OST_IO_PORTAL; /* bug 7198 */
ptlrpc_at_set_req_timeout(req);
/* ask ptlrpc not to resend on EINPROGRESS since BRWs have their own
- * retry logic */
+ * retry logic
+ */
req->rq_no_retry_einprogress = 1;
desc = ptlrpc_prep_bulk_imp(req, page_count,
@@ -1274,7 +1287,7 @@
opc == OST_WRITE ? BULK_GET_SOURCE : BULK_PUT_SINK,
OST_BULK_PORTAL);
- if (desc == NULL) {
+ if (!desc) {
rc = -ENOMEM;
goto out;
}
@@ -1283,7 +1296,7 @@
body = req_capsule_client_get(pill, &RMF_OST_BODY);
ioobj = req_capsule_client_get(pill, &RMF_OBD_IOOBJ);
niobuf = req_capsule_client_get(pill, &RMF_NIOBUF_REMOTE);
- LASSERT(body != NULL && ioobj != NULL && niobuf != NULL);
+ LASSERT(body && ioobj && niobuf);
lustre_set_wire_obdo(&req->rq_import->imp_connect_data, &body->oa, oa);
@@ -1293,7 +1306,8 @@
* that might be send for this request. The actual number is decided
* when the RPC is finally sent in ptlrpc_register_bulk(). It sends
* "max - 1" for old client compatibility sending "0", and also so the
- * the actual maximum is a power-of-two number, not one less. LU-1431 */
+ * the actual maximum is a power-of-two number, not one less. LU-1431
+ */
ioobj_max_brw_set(ioobj, desc->bd_md_max_brw);
LASSERT(page_count > 0);
pg_prev = pga[0];
@@ -1355,8 +1369,9 @@
if (cli->cl_checksum &&
!sptlrpc_flavor_has_bulk(&req->rq_flvr)) {
/* store cl_cksum_type in a local variable since
- * it can be changed via lprocfs */
- cksum_type_t cksum_type = cli->cl_cksum_type;
+ * it can be changed via lprocfs
+ */
+ enum cksum_type cksum_type = cli->cl_cksum_type;
if ((body->oa.o_valid & OBD_MD_FLFLAGS) == 0) {
oa->o_flags &= OBD_FL_LOCAL_MASK;
@@ -1375,7 +1390,8 @@
oa->o_flags |= cksum_type_pack(cksum_type);
} else {
/* clear out the checksum flag, in case this is a
- * resend but cl_checksum is no longer set. b=11238 */
+ * resend but cl_checksum is no longer set. b=11238
+ */
oa->o_valid &= ~OBD_MD_FLCKSUM;
}
oa->o_cksum = body->oa.o_cksum;
@@ -1415,11 +1431,11 @@
static int check_write_checksum(struct obdo *oa, const lnet_process_id_t *peer,
__u32 client_cksum, __u32 server_cksum, int nob,
u32 page_count, struct brw_page **pga,
- cksum_type_t client_cksum_type)
+ enum cksum_type client_cksum_type)
{
__u32 new_cksum;
char *msg;
- cksum_type_t cksum_type;
+ enum cksum_type cksum_type;
if (server_cksum == client_cksum) {
CDEBUG(D_PAGE, "checksum %x confirmed\n", client_cksum);
@@ -1472,9 +1488,9 @@
return rc;
}
- LASSERTF(req->rq_repmsg != NULL, "rc = %d\n", rc);
+ LASSERTF(req->rq_repmsg, "rc = %d\n", rc);
body = req_capsule_server_get(&req->rq_pill, &RMF_OST_BODY);
- if (body == NULL) {
+ if (!body) {
DEBUG_REQ(D_INFO, req, "Can't unpack body\n");
return -EPROTO;
}
@@ -1550,7 +1566,7 @@
__u32 server_cksum = body->oa.o_cksum;
char *via = "";
char *router = "";
- cksum_type_t cksum_type;
+ enum cksum_type cksum_type;
cksum_type = cksum_type_unpack(body->oa.o_valid&OBD_MD_FLFLAGS ?
body->oa.o_flags : 0);
@@ -1627,7 +1643,7 @@
return rc;
list_for_each_entry(oap, &aa->aa_oaps, oap_rpc_item) {
- if (oap->oap_request != NULL) {
+ if (oap->oap_request) {
LASSERTF(request == oap->oap_request,
"request %p != oap_request %p\n",
request, oap->oap_request);
@@ -1638,12 +1654,14 @@
}
}
/* New request takes over pga and oaps from old request.
- * Note that copying a list_head doesn't work, need to move it... */
+ * Note that copying a list_head doesn't work, need to move it...
+ */
aa->aa_resends++;
new_req->rq_interpret_reply = request->rq_interpret_reply;
new_req->rq_async_args = request->rq_async_args;
/* cap resend delay to the current request timeout, this is similar to
- * what ptlrpc does (see after_reply()) */
+ * what ptlrpc does (see after_reply())
+ */
if (aa->aa_resends > new_req->rq_timeout)
new_req->rq_sent = ktime_get_real_seconds() + new_req->rq_timeout;
else
@@ -1669,7 +1687,8 @@
/* XXX: This code will run into problem if we're going to support
* to add a series of BRW RPCs into a self-defined ptlrpc_request_set
* and wait for all of them to be finished. We should inherit request
- * set from old request. */
+ * set from old request.
+ */
ptlrpcd_add_req(new_req);
DEBUG_REQ(D_INFO, new_req, "new request");
@@ -1709,7 +1728,7 @@
static void osc_release_ppga(struct brw_page **ppga, u32 count)
{
- LASSERT(ppga != NULL);
+ LASSERT(ppga);
kfree(ppga);
}
@@ -1725,7 +1744,8 @@
rc = osc_brw_fini_request(req, rc);
CDEBUG(D_INODE, "request %p aa %p rc %d\n", req, aa, rc);
/* When server return -EINPROGRESS, client should always retry
- * regardless of the number of times the bulk was resent already. */
+ * regardless of the number of times the bulk was resent already.
+ */
if (osc_recoverable_error(rc)) {
if (req->rq_import_generation !=
req->rq_import->imp_generation) {
@@ -1748,7 +1768,7 @@
}
list_for_each_entry_safe(ext, tmp, &aa->aa_exts, oe_link) {
- if (obj == NULL && rc == 0) {
+ if (!obj && rc == 0) {
obj = osc2cl(ext->oe_obj);
cl_object_get(obj);
}
@@ -1759,7 +1779,7 @@
LASSERT(list_empty(&aa->aa_exts));
LASSERT(list_empty(&aa->aa_oaps));
- if (obj != NULL) {
+ if (obj) {
struct obdo *oa = aa->aa_oa;
struct cl_attr *attr = &osc_env_info(env)->oti_attr;
unsigned long valid = 0;
@@ -1798,7 +1818,8 @@
client_obd_list_lock(&cli->cl_loi_list_lock);
/* We need to decrement before osc_ap_completion->osc_wake_cache_waiters
* is called so we know whether to go to sync BRWs or wait for more
- * RPCs to complete */
+ * RPCs to complete
+ */
if (lustre_msg_get_opc(req->rq_reqmsg) == OST_WRITE)
cli->cl_w_in_flight--;
else
@@ -1871,13 +1892,13 @@
}
pga = kcalloc(page_count, sizeof(*pga), GFP_NOFS);
- if (pga == NULL) {
+ if (!pga) {
rc = -ENOMEM;
goto out;
}
oa = kmem_cache_alloc(obdo_cachep, GFP_NOFS | __GFP_ZERO);
- if (oa == NULL) {
+ if (!oa) {
rc = -ENOMEM;
goto out;
}
@@ -1886,7 +1907,7 @@
list_for_each_entry(oap, &rpc_list, oap_rpc_item) {
struct cl_page *page = oap2cl_page(oap);
- if (clerq == NULL) {
+ if (!clerq) {
clerq = cl_req_alloc(env, page, crt,
1 /* only 1-object rpcs for now */);
if (IS_ERR(clerq)) {
@@ -1907,7 +1928,7 @@
}
/* always get the data for the obdo for the rpc */
- LASSERT(clerq != NULL);
+ LASSERT(clerq);
crattr->cra_oa = oa;
cl_req_attr_set(env, clerq, crattr, ~0ULL);
if (lock) {
@@ -1938,7 +1959,8 @@
* we race with setattr (locally or in queue at OST). If OST gets
* later setattr before earlier BRW (as determined by the request xid),
* the OST will not use BRW timestamps. Sadly, there is no obvious
- * way to do this in a single call. bug 10150 */
+ * way to do this in a single call. bug 10150
+ */
body = req_capsule_client_get(&req->rq_pill, &RMF_OST_BODY);
crattr->cra_oa = &body->oa;
cl_req_attr_set(env, clerq, crattr,
@@ -1955,11 +1977,12 @@
aa->aa_clerq = clerq;
/* queued sync pages can be torn down while the pages
- * were between the pending list and the rpc */
+ * were between the pending list and the rpc
+ */
tmp = NULL;
list_for_each_entry(oap, &aa->aa_oaps, oap_rpc_item) {
/* only one oap gets a request reference */
- if (tmp == NULL)
+ if (!tmp)
tmp = oap;
if (oap->oap_interrupted && !req->rq_intr) {
CDEBUG(D_INODE, "oap %p in req %p interrupted\n",
@@ -1967,7 +1990,7 @@
ptlrpc_mark_interrupted(req);
}
}
- if (tmp != NULL)
+ if (tmp)
tmp->oap_request = ptlrpc_request_addref(req);
client_obd_list_lock(&cli->cl_loi_list_lock);
@@ -2001,13 +2024,14 @@
kfree(crattr);
if (rc != 0) {
- LASSERT(req == NULL);
+ LASSERT(!req);
if (oa)
kmem_cache_free(obdo_cachep, oa);
kfree(pga);
/* this should happen rarely and is pretty bad, it makes the
- * pending list not follow the dirty order */
+ * pending list not follow the dirty order
+ */
while (!list_empty(ext_list)) {
ext = list_entry(ext_list->next, struct osc_extent,
oe_link);
@@ -2026,7 +2050,6 @@
void *data = einfo->ei_cbdata;
int set = 0;
- LASSERT(lock != NULL);
LASSERT(lock->l_blocking_ast == einfo->ei_cb_bl);
LASSERT(lock->l_resource->lr_type == einfo->ei_type);
LASSERT(lock->l_completion_ast == einfo->ei_cb_cp);
@@ -2035,7 +2058,7 @@
lock_res_and_lock(lock);
spin_lock(&osc_ast_guard);
- if (lock->l_ast_data == NULL)
+ if (!lock->l_ast_data)
lock->l_ast_data = data;
if (lock->l_ast_data == data)
set = 1;
@@ -2052,7 +2075,7 @@
struct ldlm_lock *lock = ldlm_handle2lock(lockh);
int set = 0;
- if (lock != NULL) {
+ if (lock) {
set = osc_set_lock_data_with_check(lock, einfo);
LDLM_LOCK_PUT(lock);
} else
@@ -2064,7 +2087,8 @@
/* find any ldlm lock of the inode in osc
* return 0 not find
* 1 find one
- * < 0 error */
+ * < 0 error
+ */
static int osc_find_cbdata(struct obd_export *exp, struct lov_stripe_md *lsm,
ldlm_iterator_t replace, void *data)
{
@@ -2095,7 +2119,6 @@
rep = req_capsule_server_get(&req->rq_pill,
&RMF_DLM_REP);
- LASSERT(rep != NULL);
rep->lock_policy_res1 =
ptlrpc_status_ntoh(rep->lock_policy_res1);
if (rep->lock_policy_res1)
@@ -2127,18 +2150,21 @@
__u64 *flags = aa->oa_flags;
/* Make a local copy of a lock handle and a mode, because aa->oa_*
- * might be freed anytime after lock upcall has been called. */
+ * might be freed anytime after lock upcall has been called.
+ */
lustre_handle_copy(&handle, aa->oa_lockh);
mode = aa->oa_ei->ei_mode;
/* ldlm_cli_enqueue is holding a reference on the lock, so it must
- * be valid. */
+ * be valid.
+ */
lock = ldlm_handle2lock(&handle);
/* Take an additional reference so that a blocking AST that
* ldlm_cli_enqueue_fini() might post for a failed lock, is guaranteed
* to arrive after an upcall has been executed by
- * osc_enqueue_fini(). */
+ * osc_enqueue_fini().
+ */
ldlm_lock_addref(&handle, mode);
/* Let CP AST to grant the lock first. */
@@ -2170,7 +2196,7 @@
*/
ldlm_lock_decref(&handle, mode);
- LASSERTF(lock != NULL, "lockh %p, req %p, aa %p - client evicted?\n",
+ LASSERTF(lock, "lockh %p, req %p, aa %p - client evicted?\n",
aa->oa_lockh, req, aa);
ldlm_lock_decref(&handle, mode);
LDLM_LOCK_PUT(lock);
@@ -2185,7 +2211,8 @@
* others may take a considerable amount of time in a case of ost failure; and
* when other sync requests do not get released lock from a client, the client
* is excluded from the cluster -- such scenarios make life difficult, so
- * release locks just after they are obtained. */
+ * release locks just after they are obtained.
+ */
int osc_enqueue_base(struct obd_export *exp, struct ldlm_res_id *res_id,
__u64 *flags, ldlm_policy_data_t *policy,
struct ost_lvb *lvb, int kms_valid,
@@ -2198,11 +2225,12 @@
struct ptlrpc_request *req = NULL;
int intent = *flags & LDLM_FL_HAS_INTENT;
__u64 match_lvb = (agl != 0 ? 0 : LDLM_FL_LVB_READY);
- ldlm_mode_t mode;
+ enum ldlm_mode mode;
int rc;
/* Filesystem lock extents are extended to page boundaries so that
- * dealing with the page cache is a little smoother. */
+ * dealing with the page cache is a little smoother.
+ */
policy->l_extent.start -= policy->l_extent.start & ~CFS_PAGE_MASK;
policy->l_extent.end |= ~CFS_PAGE_MASK;
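These two lines round a byte extent outward to page boundaries: the start loses its in-page offset, the end gains every in-page offset bit. Assuming CFS_PAGE_MASK behaves like the kernel's PAGE_MASK, i.e. ~(PAGE_SIZE - 1):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE_ASSUMED 4096ULL
#define PAGE_MASK_ASSUMED (~(PAGE_SIZE_ASSUMED - 1))    /* like CFS_PAGE_MASK */

int main(void)
{
        uint64_t start = 5000, end = 9000;

        start -= start & ~PAGE_MASK_ASSUMED;    /* round down: 4096 */
        end |= ~PAGE_MASK_ASSUMED;              /* round up to page end: 12287 */

        printf("[%llu, %llu]\n",
               (unsigned long long)start, (unsigned long long)end);
        return 0;
}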
@@ -2226,7 +2254,8 @@
*
* At some point we should cancel the read lock instead of making them
* send us a blocking callback, but there are problems with canceling
- * locks out from other users right now, too. */
+ * locks out from other users right now, too.
+ */
mode = einfo->ei_mode;
if (einfo->ei_mode == LCK_PR)
mode |= LCK_PW;
@@ -2238,7 +2267,8 @@
if ((agl != 0) && !(matched->l_flags & LDLM_FL_LVB_READY)) {
/* For AGL, if enqueue RPC is sent but the lock is not
* granted, then skip to process this stripe.
- * Return -ECANCELED to tell the caller. */
+ * Return -ECANCELED to tell the caller.
+ */
ldlm_lock_decref(lockh, mode);
LDLM_LOCK_PUT(matched);
return -ECANCELED;
@@ -2247,19 +2277,22 @@
if (osc_set_lock_data_with_check(matched, einfo)) {
*flags |= LDLM_FL_LVB_READY;
/* addref the lock only if not async requests and PW
- * lock is matched whereas we asked for PR. */
+ * lock is matched whereas we asked for PR.
+ */
if (!rqset && einfo->ei_mode != mode)
ldlm_lock_addref(lockh, LCK_PR);
if (intent) {
/* I would like to be able to ASSERT here that
* rss <= kms, but I can't, for reasons which
- * are explained in lov_enqueue() */
+ * are explained in lov_enqueue()
+ */
}
/* We already have a lock, and it's referenced.
*
* At this point, the cl_lock::cll_state is CLS_QUEUING,
- * AGL upcall may change it to CLS_HELD directly. */
+ * AGL upcall may change it to CLS_HELD directly.
+ */
(*upcall)(cookie, ELDLM_OK);
if (einfo->ei_mode != mode)
@@ -2281,7 +2314,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_LDLM_ENQUEUE_LVB);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ldlm_prep_enqueue_req(exp, req, &cancels, 0);
@@ -2341,27 +2374,29 @@
{
struct obd_device *obd = exp->exp_obd;
__u64 lflags = *flags;
- ldlm_mode_t rc;
+ enum ldlm_mode rc;
if (OBD_FAIL_CHECK(OBD_FAIL_OSC_MATCH))
return -EIO;
/* Filesystem lock extents are extended to page boundaries so that
- * dealing with the page cache is a little smoother */
+ * dealing with the page cache is a little smoother
+ */
policy->l_extent.start -= policy->l_extent.start & ~CFS_PAGE_MASK;
policy->l_extent.end |= ~CFS_PAGE_MASK;
/* Next, search for already existing extent locks that will cover us */
/* If we're trying to read, we also search for an existing PW lock. The
* VFS and page cache already protect us locally, so lots of readers/
- * writers can share a single PW lock. */
+ * writers can share a single PW lock.
+ */
rc = mode;
if (mode == LCK_PR)
rc |= LCK_PW;
rc = ldlm_lock_match(obd->obd_namespace, lflags,
res_id, type, policy, rc, lockh, unref);
if (rc) {
- if (data != NULL) {
+ if (data) {
if (!osc_set_data_with_check(lockh, data)) {
if (!(lflags & LDLM_FL_TEST_LOCK))
ldlm_lock_decref(lockh, rc);
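The PR-implies-PW search works because DLM lock modes are bit flags, so one ldlm_lock_match() call can ask for either mode; an already-granted PW lock protects readers as well. A sketch with stand-in flag values:

#include <stdio.h>

/* stand-in bit-flag values for the mode bitmask */
enum { LCK_PW = 0x2, LCK_PR = 0x4 };

static int match(int granted_mode, int wanted_modes)
{
        return (granted_mode & wanted_modes) != 0;
}

int main(void)
{
        int mode = LCK_PR;      /* caller wants to read */
        int search = mode;

        if (mode == LCK_PR)
                search |= LCK_PW;       /* a PW lock covers readers too */

        printf("PW lock matches read request: %d\n", match(LCK_PW, search));
        return 0;
}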
@@ -2398,8 +2433,9 @@
* due to issues at a higher level (LOV).
* Exit immediately since the caller is
* aware of the problem and takes care
- * of the clean up */
- return rc;
+ * of the clean up
+ */
+ return rc;
if ((rc == -ENOTCONN || rc == -EAGAIN) &&
(aa->aa_oi->oi_flags & OBD_STATFS_NODELAY)) {
@@ -2411,7 +2447,7 @@
goto out;
msfs = req_capsule_server_get(&req->rq_pill, &RMF_OBD_STATFS);
- if (msfs == NULL) {
+ if (!msfs) {
rc = -EPROTO;
goto out;
}
@@ -2436,9 +2472,10 @@
* extra calls into the filesystem if that isn't necessary (e.g.
* during mount that would help a bit). Having relative timestamps
* is not so great if request processing is slow, while absolute
- * timestamps are not ideal because they need time synchronization. */
+ * timestamps are not ideal because they need time synchronization.
+ */
req = ptlrpc_request_alloc(obd->u.cli.cl_import, &RQF_OST_STATFS);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_OST_VERSION, OST_STATFS);
@@ -2474,8 +2511,9 @@
struct obd_import *imp = NULL;
int rc;
- /*Since the request might also come from lprocfs, so we need
- *sync this with client_disconnect_export Bug15684*/
+ /* Since the request might also come from lprocfs, we need to
+ * sync this with client_disconnect_export. Bug15684
+ */
down_read(&obd->u.cli.cl_sem);
if (obd->u.cli.cl_import)
imp = class_import_get(obd->u.cli.cl_import);
@@ -2488,12 +2526,13 @@
* extra calls into the filesystem if that isn't necessary (e.g.
* during mount that would help a bit). Having relative timestamps
* is not so great if request processing is slow, while absolute
- * timestamps are not ideal because they need time synchronization. */
+ * timestamps are not ideal because they need time synchronization.
+ */
req = ptlrpc_request_alloc(imp, &RQF_OST_STATFS);
class_import_put(imp);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
rc = ptlrpc_request_pack(req, LUSTRE_OST_VERSION, OST_STATFS);
@@ -2516,7 +2555,7 @@
goto out;
msfs = req_capsule_server_get(&req->rq_pill, &RMF_OBD_STATFS);
- if (msfs == NULL) {
+ if (!msfs) {
rc = -EPROTO;
goto out;
}
@@ -2546,7 +2585,8 @@
return -ENODATA;
/* we only need the header part from user space to get lmm_magic and
- * lmm_stripe_count, (the header part is common to v1 and v3) */
+ * lmm_stripe_count (the header part is common to v1 and v3)
+ */
lum_size = sizeof(struct lov_user_md_v1);
if (copy_from_user(&lum, lump, lum_size))
return -EFAULT;
@@ -2561,7 +2601,8 @@
LASSERT(sizeof(lum.lmm_objects[0]) == sizeof(lumk->lmm_objects[0]));
/* we can use lov_mds_md_size() to compute lum_size
- * because lov_user_md_vX and lov_mds_md_vX have the same size */
+ * because lov_user_md_vX and lov_mds_md_vX have the same size
+ */
if (lum.lmm_stripe_count > 0) {
lum_size = lov_mds_md_size(lum.lmm_stripe_count, lum.lmm_magic);
lumk = kzalloc(lum_size, GFP_NOFS);
@@ -2701,7 +2742,7 @@
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_OST_GET_INFO_LAST_ID);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req_capsule_set_size(&req->rq_pill, &RMF_SETINFO_KEY,
@@ -2722,7 +2763,7 @@
goto out;
reply = req_capsule_server_get(&req->rq_pill, &RMF_OBD_ID);
- if (reply == NULL) {
+ if (!reply) {
rc = -EPROTO;
goto out;
}
@@ -2736,7 +2777,7 @@
struct ldlm_res_id res_id;
ldlm_policy_data_t policy;
struct lustre_handle lockh;
- ldlm_mode_t mode = 0;
+ enum ldlm_mode mode = 0;
struct ptlrpc_request *req;
struct ll_user_fiemap *reply;
char *tmp;
@@ -2775,7 +2816,7 @@
skip_locking:
req = ptlrpc_request_alloc(class_exp2cliimp(exp),
&RQF_OST_GET_INFO_FIEMAP);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto drop_lock;
}
@@ -2804,7 +2845,7 @@
goto fini_req;
reply = req_capsule_server_get(&req->rq_pill, &RMF_FIEMAP_VAL);
- if (reply == NULL) {
+ if (!reply) {
rc = -EPROTO;
goto fini_req;
}
@@ -2853,7 +2894,7 @@
if (KEY_IS(KEY_CACHE_SET)) {
struct client_obd *cli = &obd->u.cli;
- LASSERT(cli->cl_cache == NULL); /* only once */
+ LASSERT(!cli->cl_cache); /* only once */
cli->cl_cache = val;
atomic_inc(&cli->cl_cache->ccc_users);
cli->cl_lru_left = &cli->cl_cache->ccc_lru_left;
@@ -2881,16 +2922,17 @@
return -EINVAL;
/* We pass all other commands directly to OST. Since nobody calls osc
- methods directly and everybody is supposed to go through LOV, we
- assume lov checked invalid values for us.
- The only recognised values so far are evict_by_nid and mds_conn.
- Even if something bad goes through, we'd get a -EINVAL from OST
- anyway. */
+ * methods directly and everybody is supposed to go through LOV, we
+ * assume lov checked invalid values for us.
+ * The only recognised values so far are evict_by_nid and mds_conn.
+ * Even if something bad goes through, we'd get a -EINVAL from OST
+ * anyway.
+ */
req = ptlrpc_request_alloc(imp, KEY_IS(KEY_GRANT_SHRINK) ?
&RQF_OST_SET_GRANT_INFO :
&RQF_OBD_SET_INFO);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req_capsule_set_size(&req->rq_pill, &RMF_SETINFO_KEY,
@@ -2929,7 +2971,7 @@
ptlrpc_request_set_replen(req);
if (!KEY_IS(KEY_GRANT_SHRINK)) {
- LASSERT(set != NULL);
+ LASSERT(set);
ptlrpc_set_add_req(set, req);
ptlrpc_check_set(NULL, set);
} else {
@@ -2947,7 +2989,7 @@
{
struct client_obd *cli = &obd->u.cli;
- if (data != NULL && (data->ocd_connect_flags & OBD_CONNECT_GRANT)) {
+ if (data && (data->ocd_connect_flags & OBD_CONNECT_GRANT)) {
long lost_grant;
client_obd_list_lock(&cli->cl_loi_list_lock);
@@ -2988,7 +3030,7 @@
* So the osc should be disconnected from the shrink list, after we
* are sure the import has been destroyed. BUG18662
*/
- if (obd->u.cli.cl_import == NULL)
+ if (!obd->u.cli.cl_import)
osc_del_shrink_grant(&obd->u.cli);
return rc;
}
@@ -3025,7 +3067,8 @@
/* Reset grants */
cli = &obd->u.cli;
/* all pages go to failing rpcs due to the invalid
- * import */
+ * import
+ */
osc_io_unplug(env, cli, NULL);
ldlm_namespace_cleanup(ns, LDLM_FL_LOCAL_ONLY);
@@ -3207,13 +3250,13 @@
return 0;
}
-int osc_cleanup(struct obd_device *obd)
+static int osc_cleanup(struct obd_device *obd)
{
struct client_obd *cli = &obd->u.cli;
int rc;
/* lru cleanup */
- if (cli->cl_cache != NULL) {
+ if (cli->cl_cache) {
LASSERT(atomic_read(&cli->cl_cache->ccc_users) > 0);
spin_lock(&cli->cl_cache->ccc_lru_lock);
list_del_init(&cli->cl_lru_osc);
@@ -3256,7 +3299,7 @@
return osc_process_config_base(obd, buf);
}
-struct obd_ops osc_obd_ops = {
+static struct obd_ops osc_obd_ops = {
.owner = THIS_MODULE,
.setup = osc_setup,
.precleanup = osc_precleanup,
@@ -3299,7 +3342,8 @@
/* print an address of _any_ initialized kernel symbol from this
* module, to allow debugging with gdb that doesn't support data
- * symbols from modules.*/
+ * symbols from modules.
+ */
CDEBUG(D_INFO, "Lustre OSC module (%p).\n", &osc_caches);
rc = lu_kmem_init(osc_caches);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/client.c b/drivers/staging/lustre/lustre/ptlrpc/client.c
index 1cc3c69..9b89068 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/client.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/client.c
@@ -145,7 +145,7 @@
LASSERT(type == BULK_PUT_SINK || type == BULK_GET_SOURCE);
desc = ptlrpc_new_bulk(npages, max_brw, type, portal);
- if (desc == NULL)
+ if (!desc)
return NULL;
desc->bd_import_generation = req->rq_import_generation;
@@ -171,7 +171,7 @@
struct page *page, int pageoffset, int len, int pin)
{
LASSERT(desc->bd_iov_count < desc->bd_max_iov);
- LASSERT(page != NULL);
+ LASSERT(page);
LASSERT(pageoffset >= 0);
LASSERT(len > 0);
LASSERT(pageoffset + len <= PAGE_CACHE_SIZE);
@@ -193,7 +193,6 @@
{
int i;
- LASSERT(desc != NULL);
LASSERT(desc->bd_iov_count != LI_POISON); /* not freed already */
LASSERT(desc->bd_md_count == 0); /* network hands off */
LASSERT((desc->bd_export != NULL) ^ (desc->bd_import != NULL));
@@ -412,7 +411,7 @@
request_cache = kmem_cache_create("ptlrpc_cache",
sizeof(struct ptlrpc_request),
0, SLAB_HWCACHE_ALIGN, NULL);
- return request_cache == NULL ? -ENOMEM : 0;
+ return !request_cache ? -ENOMEM : 0;
}
void ptlrpc_request_cache_fini(void)
@@ -442,8 +441,6 @@
struct list_head *l, *tmp;
struct ptlrpc_request *req;
- LASSERT(pool != NULL);
-
spin_lock(&pool->prp_lock);
list_for_each_safe(l, tmp, &pool->prp_req_list) {
req = list_entry(l, struct ptlrpc_request, rq_list);
@@ -753,7 +750,7 @@
struct ptlrpc_request *request;
request = __ptlrpc_request_alloc(imp, pool);
- if (request == NULL)
+ if (!request)
return NULL;
req_capsule_init(&request->rq_pill, request, RCL_CLIENT);
@@ -952,10 +949,10 @@
atomic_inc(&set->set_remaining);
req->rq_queued_time = cfs_time_current();
- if (req->rq_reqmsg != NULL)
+ if (req->rq_reqmsg)
lustre_msg_set_jobid(req->rq_reqmsg, NULL);
- if (set->set_producer != NULL)
+ if (set->set_producer)
/*
* If the request set has a producer callback, the RPC must be
* sent straight away
@@ -975,7 +972,7 @@
struct ptlrpc_request_set *set = pc->pc_set;
int count, i;
- LASSERT(req->rq_set == NULL);
+ LASSERT(!req->rq_set);
LASSERT(test_bit(LIOD_STOP, &pc->pc_flags) == 0);
spin_lock(&set->set_new_req_lock);
@@ -1016,7 +1013,6 @@
{
int delay = 0;
- LASSERT(status != NULL);
*status = 0;
if (req->rq_ctx_init || req->rq_ctx_fini) {
@@ -1079,7 +1075,7 @@
__u32 opc;
int err;
- LASSERT(req->rq_reqmsg != NULL);
+ LASSERT(req->rq_reqmsg);
opc = lustre_msg_get_opc(req->rq_reqmsg);
/*
@@ -1168,7 +1164,7 @@
struct timespec64 work_start;
long timediff;
- LASSERT(obd != NULL);
+ LASSERT(obd);
/* repbuf must be unlinked */
LASSERT(!req->rq_receiving_reply && !req->rq_reply_unlink);
@@ -1248,7 +1244,7 @@
ktime_get_real_ts64(&work_start);
timediff = (work_start.tv_sec - req->rq_arrival_time.tv_sec) * USEC_PER_SEC +
(work_start.tv_nsec - req->rq_arrival_time.tv_nsec) / NSEC_PER_USEC;
- if (obd->obd_svc_stats != NULL) {
+ if (obd->obd_svc_stats) {
lprocfs_counter_add(obd->obd_svc_stats, PTLRPC_REQWAIT_CNTR,
timediff);
ptlrpc_lprocfs_rpc_sent(req, timediff);
@@ -1311,7 +1307,7 @@
/* version recovery */
ptlrpc_save_versions(req);
ptlrpc_retain_replayable_request(req, imp);
- } else if (req->rq_commit_cb != NULL &&
+ } else if (req->rq_commit_cb &&
list_empty(&req->rq_replay_list)) {
/*
* NB: don't call rq_commit_cb if it's already on
@@ -1438,7 +1434,7 @@
{
int remaining, rc;
- LASSERT(set->set_producer != NULL);
+ LASSERT(set->set_producer);
remaining = atomic_read(&set->set_remaining);
@@ -1751,7 +1747,7 @@
* process the reply. Similarly if the RPC returned
* an error, and therefore the bulk will never arrive.
*/
- if (req->rq_bulk == NULL || req->rq_status < 0) {
+ if (!req->rq_bulk || req->rq_status < 0) {
ptlrpc_rqphase_move(req, RQ_PHASE_INTERPRET);
goto interpret;
}
@@ -1803,7 +1799,7 @@
}
ptlrpc_rqphase_move(req, RQ_PHASE_COMPLETE);
- CDEBUG(req->rq_reqmsg != NULL ? D_RPCTRACE : 0,
+ CDEBUG(req->rq_reqmsg ? D_RPCTRACE : 0,
"Completed RPC pname:cluuid:pid:xid:nid:opc %s:%s:%d:%llu:%s:%d\n",
current_comm(), imp->imp_obd->obd_uuid.uuid,
lustre_msg_get_status(req->rq_reqmsg), req->rq_xid,
@@ -1883,7 +1879,7 @@
"timed out for sent delay" : "timed out for slow reply"),
(s64)req->rq_sent, (s64)req->rq_real_sent);
- if (imp != NULL && obd_debug_peer_on_timeout)
+ if (imp && obd_debug_peer_on_timeout)
LNetDebugPeer(imp->imp_connection->c_peer);
ptlrpc_unregister_reply(req, async_unlink);
@@ -1892,7 +1888,7 @@
if (obd_dump_on_timeout)
libcfs_debug_dumplog();
- if (imp == NULL) {
+ if (!imp) {
DEBUG_REQ(D_HA, req, "NULL import: already cleaned up?");
return 1;
}
@@ -1945,8 +1941,6 @@
struct list_head *tmp;
time64_t now = ktime_get_real_seconds();
- LASSERT(set != NULL);
-
/* A timeout expired. See which reqs it applies to... */
list_for_each(tmp, &set->set_requests) {
struct ptlrpc_request *req =
@@ -2003,7 +1997,6 @@
struct ptlrpc_request_set *set = data;
struct list_head *tmp;
- LASSERT(set != NULL);
CDEBUG(D_RPCTRACE, "INTERRUPTED SET %p\n", set);
list_for_each(tmp, &set->set_requests) {
@@ -2175,7 +2168,7 @@
rc = req->rq_status;
}
- if (set->set_interpret != NULL) {
+ if (set->set_interpret) {
int (*interpreter)(struct ptlrpc_request_set *set, void *, int) =
set->set_interpret;
rc = interpreter(set, set->set_arg, rc);
@@ -2207,10 +2200,10 @@
*/
static void __ptlrpc_free_req(struct ptlrpc_request *request, int locked)
{
- if (request == NULL)
+ if (!request)
return;
LASSERTF(!request->rq_receiving_reply, "req %p\n", request);
- LASSERTF(request->rq_rqbd == NULL, "req %p\n", request);/* client-side */
+ LASSERTF(!request->rq_rqbd, "req %p\n", request);/* client-side */
LASSERTF(list_empty(&request->rq_list), "req %p\n", request);
LASSERTF(list_empty(&request->rq_set_chain), "req %p\n", request);
LASSERTF(list_empty(&request->rq_exp_list), "req %p\n", request);
@@ -2222,7 +2215,7 @@
* We must take it off the imp_replay_list first. Otherwise, we'll set
* request->rq_reqmsg to NULL while osc_close is dereferencing it.
*/
- if (request->rq_import != NULL) {
+ if (request->rq_import) {
if (!locked)
spin_lock(&request->rq_import->imp_lock);
list_del_init(&request->rq_replay_list);
@@ -2237,20 +2230,20 @@
LBUG();
}
- if (request->rq_repbuf != NULL)
+ if (request->rq_repbuf)
sptlrpc_cli_free_repbuf(request);
- if (request->rq_export != NULL) {
+ if (request->rq_export) {
class_export_put(request->rq_export);
request->rq_export = NULL;
}
- if (request->rq_import != NULL) {
+ if (request->rq_import) {
class_import_put(request->rq_import);
request->rq_import = NULL;
}
- if (request->rq_bulk != NULL)
+ if (request->rq_bulk)
ptlrpc_free_bulk_pin(request->rq_bulk);
- if (request->rq_reqbuf != NULL || request->rq_clrbuf != NULL)
+ if (request->rq_reqbuf || request->rq_clrbuf)
sptlrpc_cli_free_reqbuf(request);
if (request->rq_cli_ctx)
@@ -2270,7 +2263,7 @@
*/
static int __ptlrpc_req_finished(struct ptlrpc_request *request, int locked)
{
- if (request == NULL)
+ if (!request)
return 1;
if (request == LP_POISON ||
@@ -2352,7 +2345,7 @@
* a chance to run reply_in_callback(), and to make sure we've
* unlinked before returning a req to the pool.
*/
- if (request->rq_set != NULL)
+ if (request->rq_set)
wq = &request->rq_set->set_waitq;
else
wq = &request->rq_reply_waitq;
@@ -2387,7 +2380,7 @@
req->rq_replay = 0;
spin_unlock(&req->rq_lock);
- if (req->rq_commit_cb != NULL)
+ if (req->rq_commit_cb)
req->rq_commit_cb(req);
list_del_init(&req->rq_replay_list);
@@ -2428,7 +2421,6 @@
struct ptlrpc_request *last_req = NULL; /* temporary fire escape */
bool skip_committed_list = true;
- LASSERT(imp != NULL);
assert_spin_locked(&imp->imp_lock);
if (imp->imp_peer_committed_transno == imp->imp_last_transno_checked &&
@@ -2612,11 +2604,11 @@
struct ptlrpc_request_set *set;
int rc;
- LASSERT(req->rq_set == NULL);
+ LASSERT(!req->rq_set);
LASSERT(!req->rq_receiving_reply);
set = ptlrpc_prep_set();
- if (set == NULL) {
+ if (!set) {
CERROR("Unable to allocate ptlrpc set.");
return -ENOMEM;
}
@@ -2848,8 +2840,6 @@
{
struct list_head *tmp, *pos;
- LASSERT(set != NULL);
-
list_for_each_safe(pos, tmp, &set->set_requests) {
struct ptlrpc_request *req =
list_entry(pos, struct ptlrpc_request,
@@ -2995,7 +2985,6 @@
struct ptlrpc_work_async_args *arg = data;
LASSERT(ptlrpcd_check_work(req));
- LASSERT(arg->cb != NULL);
rc = arg->cb(env, arg->cbdata);
@@ -3027,12 +3016,12 @@
might_sleep();
- if (cb == NULL)
+ if (!cb)
return ERR_PTR(-EINVAL);
/* copy some code from deprecated fakereq. */
req = ptlrpc_request_cache_alloc(GFP_NOFS);
- if (req == NULL) {
+ if (!req) {
CERROR("ptlrpc: run out of memory!\n");
return ERR_PTR(-ENOMEM);
}
diff --git a/drivers/staging/lustre/lustre/ptlrpc/connection.c b/drivers/staging/lustre/lustre/ptlrpc/connection.c
index da1f0b1..a14daff 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/connection.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/connection.c
@@ -72,7 +72,8 @@
* returned and may be compared against out object.
*/
/* In the function below, .hs_keycmp resolves to
- * conn_keycmp() */
+ * conn_keycmp()
+ */
/* coverity[overrun-buffer-val] */
conn2 = cfs_hash_findadd_unique(conn_hash, &peer, &conn->c_hash);
if (conn != conn2) {
@@ -172,7 +173,7 @@
struct ptlrpc_connection *conn;
const lnet_process_id_t *conn_key;
- LASSERT(key != NULL);
+ LASSERT(key);
conn_key = key;
conn = hlist_entry(hnode, struct ptlrpc_connection, c_hash);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/events.c b/drivers/staging/lustre/lustre/ptlrpc/events.c
index 07e76a2..47be21a 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/events.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/events.c
@@ -71,7 +71,8 @@
if (ev->type == LNET_EVENT_UNLINK || ev->status != 0) {
/* Failed send: make it seem like the reply timed out, just
- * like failing sends in client.c does currently... */
+ * like failing sends in client.c currently do...
+ */
req->rq_net_err = 1;
ptlrpc_client_wake_req(req);
@@ -95,7 +96,8 @@
LASSERT(ev->md.start == req->rq_repbuf);
LASSERT(ev->offset + ev->mlength <= req->rq_repbuf_len);
/* We've set LNET_MD_MANAGE_REMOTE for all outgoing requests
- for adaptive timeouts' early reply. */
+ * for adaptive timeouts' early reply.
+ */
LASSERT((ev->md.options & LNET_MD_MANAGE_REMOTE) != 0);
spin_lock(&req->rq_lock);
@@ -151,7 +153,8 @@
req->rq_reply_off = ev->offset;
req->rq_nob_received = ev->mlength;
/* LNetMDUnlink can't be called under the LNET_LOCK,
- so we must unlink in ptlrpc_unregister_reply */
+ * so we must unlink in ptlrpc_unregister_reply
+ */
DEBUG_REQ(D_INFO, req,
"reply in flags=%x mlen=%u offset=%d replen=%d",
lustre_msg_get_flags(req->rq_reqmsg),
@@ -162,7 +165,8 @@
out_wake:
/* NB don't unlock till after wakeup; req can disappear under us
- * since we don't have our own ref */
+ * since we don't have our own ref
+ */
ptlrpc_client_wake_req(req);
spin_unlock(&req->rq_lock);
}
@@ -213,7 +217,8 @@
desc->bd_failure = 1;
/* NB don't unlock till after wakeup; desc can disappear under us
- * otherwise */
+ * otherwise
+ */
if (desc->bd_md_count == 0)
ptlrpc_client_wake_req(desc->bd_req);
@@ -250,7 +255,8 @@
__u64 new_seq;
/* set sequence ID for request and add it to history list,
- * it must be called with hold svcpt::scp_lock */
+ * it must be called with svcpt::scp_lock held
+ */
new_seq = (sec << REQS_SEC_SHIFT) |
(usec << REQS_USEC_SHIFT) |
@@ -258,7 +264,8 @@
if (new_seq > svcpt->scp_hist_seq) {
/* This handles the initial case of scp_hist_seq == 0 or
- * we just jumped into a new time window */
+ * we just jumped into a new time window
+ */
svcpt->scp_hist_seq = new_seq;
} else {
LASSERT(REQS_SEQ_SHIFT(svcpt) < REQS_USEC_SHIFT);
@@ -266,7 +273,8 @@
* however, it's possible that we used up all bits for
* sequence and jumped into the next usec bucket (future time),
* then we hope there will be fewer RPCs per bucket at some
- * point, and sequence will catch up again */
+ * point, and sequence will catch up again
+ */
svcpt->scp_hist_seq += (1U << REQS_SEQ_SHIFT(svcpt));
new_seq = svcpt->scp_hist_seq;
}
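The hunk above packs the arrival second, microsecond and a per-bucket counter into a single 64-bit history sequence, so entries sort by time yet stay unique within a usec bucket. A toy sketch of that packing; the shift widths here are invented for the example (the real REQS_SEC_SHIFT/REQS_USEC_SHIFT depend on the service partition):

#include <stdio.h>

#define SEC_SHIFT	32	/* invented for the example */
#define USEC_SHIFT	12	/* leaves 12 low bits for the sequence counter */

int main(void)
{
	unsigned long long sec = 1700000000ULL;
	unsigned long long usec = 123456ULL;
	unsigned long long seq = 7ULL;
	unsigned long long new_seq;

	new_seq = (sec << SEC_SHIFT) | (usec << USEC_SHIFT) | seq;
	printf("history seq = %#llx\n", new_seq);	/* time-ordered, unique per bucket */
	return 0;
}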
@@ -302,7 +310,8 @@
* request buffer we can use the request object embedded in
* rqbd. Note that if we failed to allocate a request,
* we'd have to re-post the rqbd, which we can't do in this
- * context. */
+ * context.
+ */
req = &rqbd->rqbd_req;
memset(req, 0, sizeof(*req));
} else {
@@ -312,7 +321,7 @@
return;
}
req = ptlrpc_request_cache_alloc(GFP_ATOMIC);
- if (req == NULL) {
+ if (!req) {
CERROR("Can't allocate incoming request descriptor: Dropping %s RPC from %s\n",
service->srv_name,
libcfs_id2str(ev->initiator));
@@ -322,7 +331,8 @@
/* NB we ABSOLUTELY RELY on req being zeroed, so pointers are NULL,
* flags are reset and scalars are zero. We only set the message
- * size to non-zero if this was a successful receive. */
+ * size to non-zero if this was a successful receive.
+ */
req->rq_xid = ev->match_bits;
req->rq_reqbuf = ev->md.start + ev->offset;
if (ev->type == LNET_EVENT_PUT && ev->status == 0)
@@ -352,7 +362,8 @@
svcpt->scp_nrqbds_posted);
/* Normally, don't complain about 0 buffers posted; LNET won't
- * drop incoming reqs since we set the portal lazy */
+ * drop incoming reqs since we set the portal lazy
+ */
if (test_req_buffer_pressure &&
ev->type != LNET_EVENT_UNLINK &&
svcpt->scp_nrqbds_posted == 0)
@@ -369,7 +380,8 @@
svcpt->scp_nreqs_incoming++;
/* NB everything can disappear under us once the request
- * has been queued and we unlock, so do the wake now... */
+ * has been queued and we unlock, so do the wake now...
+ */
wake_up(&svcpt->scp_waitq);
spin_unlock(&svcpt->scp_lock);
@@ -390,7 +402,8 @@
if (!rs->rs_difficult) {
/* 'Easy' replies have no further processing so I drop the
- * net's ref on 'rs' */
+ * net's ref on 'rs'
+ */
LASSERT(ev->unlinked);
ptlrpc_rs_decref(rs);
return;
@@ -400,7 +413,8 @@
if (ev->unlinked) {
/* Last network callback. The net's ref on 'rs' stays put
- * until ptlrpc_handle_rs() is done with it */
+ * until ptlrpc_handle_rs() is done with it
+ */
spin_lock(&svcpt->scp_rep_lock);
spin_lock(&rs->rs_lock);
@@ -443,7 +457,7 @@
lnet_nid_t dst_nid;
lnet_nid_t src_nid;
- peer->pid = LUSTRE_SRV_LNET_PID;
+ peer->pid = LNET_PID_LUSTRE;
/* Choose the matching UUID that's closest */
while (lustre_uuid_to_peer(uuid->uuid, &dst_nid, count++) == 0) {
@@ -483,7 +497,8 @@
/* Wait for the event queue to become idle since there may still be
* messages in flight with pending events (i.e. the fire-and-forget
* messages == client requests and "non-difficult" server
- * replies */
+ * replies
+ */
for (retries = 0;; retries++) {
rc = LNetEQFree(ptlrpc_eq_h);
@@ -513,7 +528,7 @@
{
lnet_pid_t pid;
- pid = LUSTRE_SRV_LNET_PID;
+ pid = LNET_PID_LUSTRE;
return pid;
}
@@ -533,11 +548,13 @@
}
/* CAVEAT EMPTOR: how we process portals events is _radically_
- * different depending on... */
+ * different depending on...
+ */
/* kernel LNet calls our master callback when there are new events,
* because we are guaranteed to get every event via callback,
* so we just set EQ size to 0 to avoid overhead of serializing
- * enqueue/dequeue operations in LNet. */
+ * enqueue/dequeue operations in LNet.
+ */
rc = LNetEQAlloc(0, ptlrpc_master_callback, &ptlrpc_eq_h);
if (rc == 0)
return 0;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/import.c b/drivers/staging/lustre/lustre/ptlrpc/import.c
index f752c78..70fcac1 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/import.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/import.c
@@ -112,7 +112,8 @@
* CLOSED. I would rather refcount the import and free it after
* disconnection like we do with exports. To do that, the client_obd
* will need to save the peer info somewhere other than in the import,
- * though. */
+ * though.
+ */
int ptlrpc_init_import(struct obd_import *imp)
{
spin_lock(&imp->imp_lock);
@@ -282,11 +283,13 @@
/* Wait forever until inflight == 0. We really can't do it another
* way because in some cases we need to wait for very long reply
* unlink. We can't do anything before that because there is really
- * no guarantee that some rdma transfer is not in progress right now. */
+ * no guarantee that some rdma transfer is not in progress right now.
+ */
do {
/* Calculate max timeout for waiting on rpcs to error
* out. Use obd_timeout if calculated value is smaller
- * than it. */
+ * than it.
+ */
if (!OBD_FAIL_CHECK(OBD_FAIL_PTLRPC_LONG_REPL_UNLINK)) {
timeout = ptlrpc_inflight_timeout(imp);
timeout += timeout / 3;
@@ -304,7 +307,8 @@
/* Wait for all requests to error out and call completion
* callbacks. Cap it at obd_timeout -- these should all
- * have been locally cancelled by ptlrpc_abort_inflight. */
+ * have been locally cancelled by ptlrpc_abort_inflight.
+ */
lwi = LWI_TIMEOUT_INTERVAL(
cfs_timeout_cap(cfs_time_seconds(timeout)),
(timeout > 1)?cfs_time_seconds(1):cfs_time_seconds(1)/2,
@@ -328,13 +332,15 @@
* maybe waiting for long reply unlink in
* sluggish nets). Let's check this. If there
* is no inflight and unregistering != 0, this
- * is bug. */
+ * is bug.
+ */
LASSERTF(count == 0, "Some RPCs are still unregistering: %d\n",
count);
/* Let's save one loop as soon as inflight has
* dropped to zero. No new inflights possible at
- * this point. */
+ * this point.
+ */
rc = 0;
} else {
list_for_each_safe(tmp, n,
@@ -501,7 +507,8 @@
conn->oic_last_attempt);
/* If we have not tried this connection since
- the last successful attempt, go with this one */
+ * the last successful attempt, go with this one
+ */
if ((conn->oic_last_attempt == 0) ||
cfs_time_beforeq_64(conn->oic_last_attempt,
imp->imp_last_success_conn)) {
@@ -511,8 +518,9 @@
}
/* If all of the connections have already been tried
- since the last successful connection; just choose the
- least recently used */
+ * since the last successful connection; just choose the
+ * least recently used
+ */
if (!imp_conn)
imp_conn = conn;
else if (cfs_time_before_64(conn->oic_last_attempt,
@@ -529,10 +537,11 @@
LASSERT(imp_conn->oic_conn);
/* If we've tried everything, and we're back to the beginning of the
- list, increase our timeout and try again. It will be reset when
- we do finally connect. (FIXME: really we should wait for all network
- state associated with the last connection attempt to drain before
- trying to reconnect on it.) */
+ * list, increase our timeout and try again. It will be reset when
+ * we do finally connect. (FIXME: really we should wait for all network
+ * state associated with the last connection attempt to drain before
+ * trying to reconnect on it.)
+ */
if (tried_all && (imp->imp_conn_list.next == &imp_conn->oic_item)) {
struct adaptive_timeout *at = &imp->imp_at.iat_net_latency;
@@ -553,7 +562,6 @@
imp->imp_connection = ptlrpc_connection_addref(imp_conn->oic_conn);
dlmexp = class_conn2export(&imp->imp_dlm_handle);
- LASSERT(dlmexp != NULL);
ptlrpc_connection_put(dlmexp->exp_connection);
dlmexp->exp_connection = ptlrpc_connection_addref(imp_conn->oic_conn);
class_export_put(dlmexp);
@@ -590,7 +598,8 @@
struct list_head *tmp;
/* The requests in committed_list always have smaller transnos than
- * the requests in replay_list */
+ * the requests in replay_list
+ */
if (!list_empty(&imp->imp_committed_list)) {
tmp = imp->imp_committed_list.next;
req = list_entry(tmp, struct ptlrpc_request, rq_replay_list);
@@ -674,7 +683,8 @@
goto out;
/* Reset connect flags to the originally requested flags, in case
- * the server is updated on-the-fly we will get the new features. */
+ * the server is updated on-the-fly we will get the new features.
+ */
imp->imp_connect_data.ocd_connect_flags = imp->imp_connect_flags_orig;
/* Reset ocd_version each time so the server knows the exact versions */
imp->imp_connect_data.ocd_version = LUSTRE_VERSION_CODE;
@@ -687,7 +697,7 @@
goto out;
request = ptlrpc_request_alloc(imp, &RQF_MDS_CONNECT);
- if (request == NULL) {
+ if (!request) {
rc = -ENOMEM;
goto out;
}
@@ -700,7 +710,8 @@
}
/* Report the rpc service time to the server so that it knows how long
- * to wait for clients to join recovery */
+ * to wait for clients to join recovery
+ */
lustre_msg_set_service_time(request->rq_reqmsg,
at_timeout2est(request->rq_timeout));
@@ -708,7 +719,8 @@
* import_select_connection will increase the net latency on
* repeated reconnect attempts to cover slow networks.
* We override/ignore the server rpc completion estimate here,
- * which may be large if this is a reconnect attempt */
+ * which may be large if this is a reconnect attempt
+ */
request->rq_timeout = INITIAL_CONNECT_TIMEOUT;
lustre_msg_set_timeout(request->rq_reqmsg, request->rq_timeout);
@@ -799,7 +811,8 @@
if (rc) {
/* if this is a reconnect to a busy export - no need to select a new target
- * for connecting*/
+ * for connecting
+ */
imp->imp_force_reconnect = ptlrpc_busy_reconnect(rc);
spin_unlock(&imp->imp_lock);
ptlrpc_maybe_ping_import_soon(imp);
@@ -817,7 +830,7 @@
ocd = req_capsule_server_sized_get(&request->rq_pill,
&RMF_CONNECT_DATA, ret);
- if (ocd == NULL) {
+ if (!ocd) {
CERROR("%s: no connect data from server\n",
imp->imp_obd->obd_name);
rc = -EPROTO;
@@ -851,7 +864,8 @@
if (!exp) {
/* This could happen if export is cleaned during the
- connect attempt */
+ * connect attempt
+ */
CERROR("%s: missing export after connect\n",
imp->imp_obd->obd_name);
rc = -ENODEV;
@@ -877,14 +891,16 @@
}
/* if applies, adjust the imp->imp_msg_magic here
- * according to reply flags */
+ * according to reply flags
+ */
imp->imp_remote_handle =
*lustre_msg_get_handle(request->rq_repmsg);
/* Initial connects are allowed for clients with non-random
* uuids when servers are in recovery. Simply signal the
- * servers replay is complete and wait in REPLAY_WAIT. */
+ * server that replay is complete and wait in REPLAY_WAIT.
+ */
if (msg_flags & MSG_CONNECT_RECOVERING) {
CDEBUG(D_HA, "connect to %s during recovery\n",
obd2cli_tgt(imp->imp_obd));
@@ -923,7 +939,8 @@
* already erased all of our state because of previous
* eviction. If it is in recovery - we are safe to
* participate since we can reestablish all of our state
- * with server again */
+ * with server again
+ */
if ((msg_flags & MSG_CONNECT_RECOVERING)) {
CDEBUG(level, "%s@%s changed server handle from %#llx to %#llx but is still in recovery\n",
obd2cli_tgt(imp->imp_obd),
@@ -1039,7 +1056,8 @@
ocd->ocd_version < LUSTRE_VERSION_CODE -
LUSTRE_VERSION_OFFSET_WARN)) {
/* Sigh, some compilers do not like #ifdef in the middle
- of macro arguments */
+ * of macro arguments
+ */
const char *older = "older. Consider upgrading server or downgrading client"
;
const char *newer = "newer than client version. Consider upgrading client"
@@ -1061,7 +1079,8 @@
* fixup is version-limited, because we don't want to carry the
* OBD_CONNECT_MNE_SWAB flag around forever, just so long as we
* need interop with unpatched 2.2 servers. For newer servers,
- * the client will do MNE swabbing only as needed. LU-1644 */
+ * the client will do MNE swabbing only as needed. LU-1644
+ */
if (unlikely((ocd->ocd_connect_flags & OBD_CONNECT_VERSION) &&
!(ocd->ocd_connect_flags & OBD_CONNECT_MNE_SWAB) &&
OBD_OCD_VERSION_MAJOR(ocd->ocd_version) == 2 &&
@@ -1079,7 +1098,8 @@
if (ocd->ocd_connect_flags & OBD_CONNECT_CKSUM) {
/* We sent to the server ocd_cksum_types with bits set
* for algorithms we understand. The server masked off
- * the checksum types it doesn't support */
+ * the checksum types it doesn't support
+ */
if ((ocd->ocd_cksum_types &
cksum_types_supported_client()) == 0) {
LCONSOLE_WARN("The negotiation of the checksum algorithm to use with server %s failed (%x/%x), disabling checksums\n",
@@ -1093,7 +1113,8 @@
}
} else {
/* The server does not support OBD_CONNECT_CKSUM.
- * Enforce ADLER for backward compatibility*/
+ * Enforce ADLER for backward compatibility
+ */
cli->cl_supp_cksum_types = OBD_CKSUM_ADLER;
}
cli->cl_cksum_type = cksum_type_select(cli->cl_supp_cksum_types);
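The negotiation above is a plain bitmask intersection: the client advertises the checksum algorithms it understands, the server's reply masks off what it does not support, and an empty intersection falls back to ADLER. A rough sketch with made-up flag values (not the real OBD_CKSUM_* bits):

#include <stdio.h>

#define CKSUM_CRC32	0x1	/* illustrative values only */
#define CKSUM_ADLER	0x2
#define CKSUM_CRC32C	0x4

int main(void)
{
	unsigned int client = CKSUM_ADLER | CKSUM_CRC32C;	/* what we offer */
	unsigned int server = CKSUM_CRC32 | CKSUM_ADLER;	/* reply mask */
	unsigned int common = client & server;

	if (!common)
		common = CKSUM_ADLER;	/* backward-compatible fallback */
	printf("negotiated mask: %#x\n", common);
	return 0;
}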
@@ -1109,7 +1130,8 @@
/* Reset ns_connect_flags only for initial connect. It might be
* changed while the FS is in use, and if we reset it on reconnect
* this leads to losing user settings made before, such as
- * disable lru_resize, etc. */
+ * disable lru_resize, etc.
+ */
if (old_connect_flags != exp_connect_flags(exp) ||
aa->pcaa_initial_connect) {
CDEBUG(D_HA, "%s: Resetting ns_connect_flags to server flags: %#llx\n",
@@ -1123,13 +1145,14 @@
if ((ocd->ocd_connect_flags & OBD_CONNECT_AT) &&
(imp->imp_msg_magic == LUSTRE_MSG_MAGIC_V2))
/* We need a per-message support flag, because
- a. we don't know if the incoming connect reply
- supports AT or not (in reply_in_callback)
- until we unpack it.
- b. failovered server means export and flags are gone
- (in ptlrpc_send_reply).
- Can only be set when we know AT is supported at
- both ends */
+ * a. we don't know if the incoming connect reply
+ * supports AT or not (in reply_in_callback)
+ * until we unpack it.
+ * b. failovered server means export and flags are gone
+ * (in ptlrpc_send_reply).
+ * Can only be set when we know AT is supported at
+ * both ends
+ */
imp->imp_msghdr_flags |= MSGHDR_AT_SUPPORT;
else
imp->imp_msghdr_flags &= ~MSGHDR_AT_SUPPORT;
@@ -1162,7 +1185,7 @@
struct obd_connect_data *ocd;
/* reply message might not be ready */
- if (request->rq_repmsg == NULL)
+ if (!request->rq_repmsg)
return -EPROTO;
ocd = req_capsule_server_get(&request->rq_pill,
@@ -1243,7 +1266,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_OBD_PING, LUSTRE_OBD_VERSION,
OBD_PING);
- if (req == NULL) {
+ if (!req) {
atomic_dec(&imp->imp_replay_inflight);
return -ENOMEM;
}
@@ -1339,7 +1362,8 @@
/* bug 17802: XXX client_disconnect_export vs connect request
* race. if the client is evicted at this time, we start the
* invalidate thread without reference to import and import can
+ * be freed at the same time.
+ * be freed at same time.
+ */
class_import_get(imp);
task = kthread_run(ptlrpc_invalidate_import_thread, imp,
"ll_imp_inval");
@@ -1471,11 +1495,13 @@
if (req) {
/* We are disconnecting, do not retry a failed DISCONNECT rpc if
* it fails. We can get through the above with a down server
- * if the client doesn't know the server is gone yet. */
+ * if the client doesn't know the server is gone yet.
+ */
req->rq_no_resend = 1;
/* We want client umounts to happen quickly, no matter the
- server state... */
+ * server state...
+ */
req->rq_timeout = min_t(int, req->rq_timeout,
INITIAL_CONNECT_TIMEOUT);
@@ -1507,9 +1533,10 @@
extern unsigned int at_min, at_max, at_history;
/* Bin into timeslices using AT_BINS bins.
- This gives us a max of the last binlimit*AT_BINS secs without the storage,
- but still smoothing out a return to normalcy from a slow response.
- (E.g. remember the maximum latency in each minute of the last 4 minutes.) */
+ * This gives us a max over the last binlimit*AT_BINS secs without storing the full history,
+ * but still smoothing out a return to normalcy from a slow response.
+ * (E.g. remember the maximum latency in each minute of the last 4 minutes.)
+ */
int at_measured(struct adaptive_timeout *at, unsigned int val)
{
unsigned int old = at->at_current;
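The binning scheme the comment describes keeps only a per-slice maximum rather than the whole sample history. A simplified model, assuming the documented defaults of 4 bins of one minute each (it retires a single stale bin per window change, which is enough to show the idea):

#include <stdio.h>

#define AT_BINS		4
#define BIN_SECS	60

static unsigned int bins[AT_BINS];
static long long cur_slice = -1;

static unsigned int at_record(long long now, unsigned int val)
{
	long long slice = now / BIN_SECS;
	unsigned int i, worst = 0;

	if (slice != cur_slice) {		/* entered a new time window */
		bins[slice % AT_BINS] = 0;	/* retire the oldest bin */
		cur_slice = slice;
	}
	if (val > bins[slice % AT_BINS])
		bins[slice % AT_BINS] = val;
	for (i = 0; i < AT_BINS; i++)		/* report max over ~AT_BINS windows */
		if (bins[i] > worst)
			worst = bins[i];
	return worst;
}

int main(void)
{
	printf("%u\n", at_record(0, 5));	/* 5 */
	printf("%u\n", at_record(61, 3));	/* still 5: slow response remembered */
	return 0;
}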
@@ -1523,7 +1550,8 @@
if (val == 0)
/* 0's don't count, because we never want our timeout to
- drop to 0, and because 0 could mean an error */
+ * drop to 0, and because 0 could mean an error
+ */
return 0;
spin_lock(&at->at_lock);
@@ -1565,7 +1593,8 @@
if (at->at_flags & AT_FLG_NOHIST)
/* Only keep last reported val; keeping the rest of the history
- for proc only */
+ * for debugfs only
+ */
at->at_current = val;
if (at_max > 0)
diff --git a/drivers/staging/lustre/lustre/ptlrpc/layout.c b/drivers/staging/lustre/lustre/ptlrpc/layout.c
index c0e613c..0e60264 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/layout.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/layout.c
@@ -118,25 +118,6 @@
&RMF_OBD_QUOTACTL
};
-static const struct req_msg_field *quota_body_only[] = {
- &RMF_PTLRPC_BODY,
- &RMF_QUOTA_BODY
-};
-
-static const struct req_msg_field *ldlm_intent_quota_client[] = {
- &RMF_PTLRPC_BODY,
- &RMF_DLM_REQ,
- &RMF_LDLM_INTENT,
- &RMF_QUOTA_BODY
-};
-
-static const struct req_msg_field *ldlm_intent_quota_server[] = {
- &RMF_PTLRPC_BODY,
- &RMF_DLM_REP,
- &RMF_DLM_LVB,
- &RMF_QUOTA_BODY
-};
-
static const struct req_msg_field *mdt_close_client[] = {
&RMF_PTLRPC_BODY,
&RMF_MDT_EPOCH,
@@ -514,16 +495,6 @@
&RMF_CAPA2
};
-static const struct req_msg_field *mds_update_client[] = {
- &RMF_PTLRPC_BODY,
- &RMF_UPDATE,
-};
-
-static const struct req_msg_field *mds_update_server[] = {
- &RMF_PTLRPC_BODY,
- &RMF_UPDATE_REPLY,
-};
-
static const struct req_msg_field *llog_origin_handle_create_client[] = {
&RMF_PTLRPC_BODY,
&RMF_LLOGD_BODY,
@@ -551,16 +522,6 @@
&RMF_EADATA
};
-static const struct req_msg_field *obd_idx_read_client[] = {
- &RMF_PTLRPC_BODY,
- &RMF_IDX_INFO
-};
-
-static const struct req_msg_field *obd_idx_read_server[] = {
- &RMF_PTLRPC_BODY,
- &RMF_IDX_INFO
-};
-
static const struct req_msg_field *ost_body_only[] = {
&RMF_PTLRPC_BODY,
&RMF_OST_BODY
@@ -676,7 +637,6 @@
static struct req_format *req_formats[] = {
&RQF_OBD_PING,
&RQF_OBD_SET_INFO,
- &RQF_OBD_IDX_READ,
&RQF_SEC_CTX,
&RQF_MGS_TARGET_REG,
&RQF_MGS_SET_INFO,
@@ -721,7 +681,6 @@
&RQF_MDS_HSM_ACTION,
&RQF_MDS_HSM_REQUEST,
&RQF_MDS_SWAP_LAYOUTS,
- &RQF_UPDATE_OBJ,
&RQF_QC_CALLBACK,
&RQF_OST_CONNECT,
&RQF_OST_DISCONNECT,
@@ -759,8 +718,6 @@
&RQF_LDLM_INTENT_CREATE,
&RQF_LDLM_INTENT_UNLINK,
&RQF_LDLM_INTENT_GETXATTR,
- &RQF_LDLM_INTENT_QUOTA,
- &RQF_QUOTA_DQACQ,
&RQF_LOG_CANCEL,
&RQF_LLOG_ORIGIN_HANDLE_CREATE,
&RQF_LLOG_ORIGIN_HANDLE_DESTROY,
@@ -899,11 +856,6 @@
lustre_swab_obd_quotactl, NULL);
EXPORT_SYMBOL(RMF_OBD_QUOTACTL);
-struct req_msg_field RMF_QUOTA_BODY =
- DEFINE_MSGF("quota_body", 0,
- sizeof(struct quota_body), lustre_swab_quota_body, NULL);
-EXPORT_SYMBOL(RMF_QUOTA_BODY);
-
struct req_msg_field RMF_MDT_EPOCH =
DEFINE_MSGF("mdt_ioepoch", 0,
sizeof(struct mdt_ioepoch), lustre_swab_mdt_ioepoch, NULL);
@@ -1105,10 +1057,6 @@
DEFINE_MSGF("fiemap", 0, -1, lustre_swab_fiemap, NULL);
EXPORT_SYMBOL(RMF_FIEMAP_VAL);
-struct req_msg_field RMF_IDX_INFO =
- DEFINE_MSGF("idx_info", 0, sizeof(struct idx_info),
- lustre_swab_idx_info, NULL);
-EXPORT_SYMBOL(RMF_IDX_INFO);
struct req_msg_field RMF_HSM_USER_STATE =
DEFINE_MSGF("hsm_user_state", 0, sizeof(struct hsm_user_state),
lustre_swab_hsm_user_state, NULL);
@@ -1145,15 +1093,6 @@
lustre_swab_hsm_request, NULL);
EXPORT_SYMBOL(RMF_MDS_HSM_REQUEST);
-struct req_msg_field RMF_UPDATE = DEFINE_MSGF("update", 0, -1,
- lustre_swab_update_buf, NULL);
-EXPORT_SYMBOL(RMF_UPDATE);
-
-struct req_msg_field RMF_UPDATE_REPLY = DEFINE_MSGF("update_reply", 0, -1,
- lustre_swab_update_reply_buf,
- NULL);
-EXPORT_SYMBOL(RMF_UPDATE_REPLY);
-
struct req_msg_field RMF_SWAP_LAYOUTS =
DEFINE_MSGF("swap_layouts", 0, sizeof(struct mdc_swap_layouts),
lustre_swab_swap_layouts, NULL);
@@ -1196,12 +1135,6 @@
DEFINE_REQ_FMT0("OBD_SET_INFO", obd_set_info_client, empty);
EXPORT_SYMBOL(RQF_OBD_SET_INFO);
-/* Read index file through the network */
-struct req_format RQF_OBD_IDX_READ =
- DEFINE_REQ_FMT0("OBD_IDX_READ",
- obd_idx_read_client, obd_idx_read_server);
-EXPORT_SYMBOL(RQF_OBD_IDX_READ);
-
struct req_format RQF_SEC_CTX =
DEFINE_REQ_FMT0("SEC_CTX", empty, empty);
EXPORT_SYMBOL(RQF_SEC_CTX);
@@ -1253,16 +1186,6 @@
DEFINE_REQ_FMT0("QC_CALLBACK", quotactl_only, empty);
EXPORT_SYMBOL(RQF_QC_CALLBACK);
-struct req_format RQF_QUOTA_DQACQ =
- DEFINE_REQ_FMT0("QUOTA_DQACQ", quota_body_only, quota_body_only);
-EXPORT_SYMBOL(RQF_QUOTA_DQACQ);
-
-struct req_format RQF_LDLM_INTENT_QUOTA =
- DEFINE_REQ_FMT0("LDLM_INTENT_QUOTA",
- ldlm_intent_quota_client,
- ldlm_intent_quota_server);
-EXPORT_SYMBOL(RQF_LDLM_INTENT_QUOTA);
-
struct req_format RQF_MDS_GETSTATUS =
DEFINE_REQ_FMT0("MDS_GETSTATUS", mdt_body_only, mdt_body_capa);
EXPORT_SYMBOL(RQF_MDS_GETSTATUS);
@@ -1357,11 +1280,6 @@
mds_getinfo_server);
EXPORT_SYMBOL(RQF_MDS_GET_INFO);
-struct req_format RQF_UPDATE_OBJ =
- DEFINE_REQ_FMT0("OBJECT_UPDATE_OBJ", mds_update_client,
- mds_update_server);
-EXPORT_SYMBOL(RQF_UPDATE_OBJ);
-
struct req_format RQF_LDLM_ENQUEUE =
DEFINE_REQ_FMT0("LDLM_ENQUEUE",
ldlm_enqueue_client, ldlm_enqueue_lvb_server);
@@ -1712,7 +1630,7 @@
* high-priority RPC queue getting peeked at before ost_handle()
* handles an OST RPC.
*/
- if (req != NULL && pill == &req->rq_pill && req->rq_pill_init)
+ if (req && pill == &req->rq_pill && req->rq_pill_init)
return;
memset(pill, 0, sizeof(*pill));
@@ -1720,7 +1638,7 @@
pill->rc_loc = location;
req_capsule_init_area(pill);
- if (req != NULL && pill == &req->rq_pill)
+ if (req && pill == &req->rq_pill)
req->rq_pill_init = 1;
}
EXPORT_SYMBOL(req_capsule_init);
@@ -1752,7 +1670,7 @@
*/
void req_capsule_set(struct req_capsule *pill, const struct req_format *fmt)
{
- LASSERT(pill->rc_fmt == NULL || pill->rc_fmt == fmt);
+ LASSERT(!pill->rc_fmt || pill->rc_fmt == fmt);
LASSERT(__req_format_is_sane(fmt));
pill->rc_fmt = fmt;
@@ -1773,8 +1691,6 @@
const struct req_format *fmt = pill->rc_fmt;
int i;
- LASSERT(fmt != NULL);
-
for (i = 0; i < fmt->rf_fields[loc].nr; ++i) {
if (pill->rc_area[loc][i] == -1) {
pill->rc_area[loc][i] =
@@ -1810,7 +1726,7 @@
LASSERT(pill->rc_loc == RCL_SERVER);
fmt = pill->rc_fmt;
- LASSERT(fmt != NULL);
+ LASSERT(fmt);
count = req_capsule_filled_sizes(pill, RCL_SERVER);
rc = lustre_pack_reply(pill->rc_req, count,
@@ -1865,7 +1781,7 @@
swabber = swabber ?: field->rmf_swabber;
if (ptlrpc_buf_need_swab(pill->rc_req, inout, offset) &&
- swabber != NULL && value != NULL)
+ swabber && value)
do_swab = 1;
else
do_swab = 0;
@@ -1947,17 +1863,15 @@
[RCL_SERVER] = "server"
};
- LASSERT(pill != NULL);
- LASSERT(pill != LP_POISON);
fmt = pill->rc_fmt;
- LASSERT(fmt != NULL);
+ LASSERT(fmt);
LASSERT(fmt != LP_POISON);
LASSERT(__req_format_is_sane(fmt));
offset = __req_capsule_offset(pill, field, loc);
msg = __req_msg(pill, loc);
- LASSERT(msg != NULL);
+ LASSERT(msg);
getter = (field->rmf_flags & RMF_F_STRING) ?
(typeof(getter))lustre_msg_string : lustre_msg_buf;
@@ -1980,7 +1894,7 @@
}
value = getter(msg, offset, len);
- if (value == NULL) {
+ if (!value) {
DEBUG_REQ(D_ERROR, pill->rc_req,
"Wrong buffer for field `%s' (%d of %d) in format `%s': %d vs. %d (%s)\n",
field->rmf_name, offset, lustre_msg_bufcount(msg),
@@ -2209,7 +2123,7 @@
const struct req_format *old;
- LASSERT(pill->rc_fmt != NULL);
+ LASSERT(pill->rc_fmt);
LASSERT(__req_format_is_sane(fmt));
old = pill->rc_fmt;
@@ -2222,7 +2136,7 @@
const struct req_msg_field *ofield = FMT_FIELD(old, i, j);
/* "opaque" fields can be transmogrified */
- if (ofield->rmf_swabber == NULL &&
+ if (!ofield->rmf_swabber &&
(ofield->rmf_flags & ~RMF_F_NO_SIZE_CHECK) == 0 &&
(ofield->rmf_size == -1 ||
ofield->rmf_flags == RMF_F_NO_SIZE_CHECK))
@@ -2289,7 +2203,7 @@
int offset;
fmt = pill->rc_fmt;
- LASSERT(fmt != NULL);
+ LASSERT(fmt);
LASSERT(__req_format_is_sane(fmt));
LASSERT(req_capsule_has_field(pill, field, loc));
LASSERT(req_capsule_field_present(pill, field, loc));
diff --git a/drivers/staging/lustre/lustre/ptlrpc/llog_client.c b/drivers/staging/lustre/lustre/ptlrpc/llog_client.c
index e877020..a23ac5f 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/llog_client.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/llog_client.c
@@ -75,7 +75,8 @@
} while (0)
/* This is a callback from the llog_* functions.
- * Assumes caller has already pushed us into the kernel context. */
+ * Assumes caller has already pushed us into the kernel context.
+ */
static int llog_client_open(const struct lu_env *env,
struct llog_handle *lgh, struct llog_logid *logid,
char *name, enum llog_open_param open_param)
@@ -93,7 +94,7 @@
LASSERT(lgh);
req = ptlrpc_request_alloc(imp, &RQF_LLOG_ORIGIN_HANDLE_CREATE);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto out;
}
@@ -130,7 +131,7 @@
goto out;
body = req_capsule_server_get(&req->rq_pill, &RMF_LLOGD_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EFAULT;
goto out;
}
@@ -158,7 +159,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_LLOG_ORIGIN_HANDLE_NEXT_BLOCK,
LUSTRE_LOG_VERSION,
LLOG_ORIGIN_HANDLE_NEXT_BLOCK);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto err_exit;
}
@@ -179,14 +180,14 @@
goto out;
body = req_capsule_server_get(&req->rq_pill, &RMF_LLOGD_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EFAULT;
goto out;
}
/* The log records are swabbed as they are processed */
ptr = req_capsule_server_get(&req->rq_pill, &RMF_EADATA);
- if (ptr == NULL) {
+ if (!ptr) {
rc = -EFAULT;
goto out;
}
@@ -216,7 +217,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_LLOG_ORIGIN_HANDLE_PREV_BLOCK,
LUSTRE_LOG_VERSION,
LLOG_ORIGIN_HANDLE_PREV_BLOCK);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto err_exit;
}
@@ -236,13 +237,13 @@
goto out;
body = req_capsule_server_get(&req->rq_pill, &RMF_LLOGD_BODY);
- if (body == NULL) {
+ if (!body) {
rc = -EFAULT;
goto out;
}
ptr = req_capsule_server_get(&req->rq_pill, &RMF_EADATA);
- if (ptr == NULL) {
+ if (!ptr) {
rc = -EFAULT;
goto out;
}
@@ -269,7 +270,7 @@
req = ptlrpc_request_alloc_pack(imp, &RQF_LLOG_ORIGIN_HANDLE_READ_HEADER,
LUSTRE_LOG_VERSION,
LLOG_ORIGIN_HANDLE_READ_HEADER);
- if (req == NULL) {
+ if (!req) {
rc = -ENOMEM;
goto err_exit;
}
@@ -285,7 +286,7 @@
goto out;
hdr = req_capsule_server_get(&req->rq_pill, &RMF_LLOG_LOG_HDR);
- if (hdr == NULL) {
+ if (!hdr) {
rc = -EFAULT;
goto out;
}
@@ -316,8 +317,9 @@
struct llog_handle *handle)
{
/* this doesn't call LLOG_ORIGIN_HANDLE_CLOSE because
- the servers all close the file at the end of every
- other LLOG_ RPC. */
+ * the servers all close the file at the end of every
+ * other LLOG_ RPC.
+ */
return 0;
}
diff --git a/drivers/staging/lustre/lustre/ptlrpc/llog_net.c b/drivers/staging/lustre/lustre/ptlrpc/llog_net.c
index dac66f5..fbccb62 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/llog_net.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/llog_net.c
@@ -58,7 +58,7 @@
LASSERT(ctxt);
new_imp = ctxt->loc_obd->u.cli.cl_import;
- LASSERTF(ctxt->loc_imp == NULL || ctxt->loc_imp == new_imp,
+ LASSERTF(!ctxt->loc_imp || ctxt->loc_imp == new_imp,
"%p - %p\n", ctxt->loc_imp, new_imp);
mutex_lock(&ctxt->loc_mutex);
if (ctxt->loc_imp != new_imp) {
diff --git a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
index cc55b79..fcaa289 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c
@@ -131,7 +131,6 @@
{ SEC_CTX_INIT_CONT, "sec_ctx_init_cont" },
{ SEC_CTX_FINI, "sec_ctx_fini" },
{ FLD_QUERY, "fld_query" },
- { UPDATE_OBJ, "update_obj" },
};
static struct ll_eopcode {
@@ -192,15 +191,15 @@
unsigned int svc_counter_config = LPROCFS_CNTR_AVGMINMAX |
LPROCFS_CNTR_STDDEV;
- LASSERT(*debugfs_root_ret == NULL);
- LASSERT(*stats_ret == NULL);
+ LASSERT(!*debugfs_root_ret);
+ LASSERT(!*stats_ret);
svc_stats = lprocfs_alloc_stats(EXTRA_MAX_OPCODES+LUSTRE_MAX_OPCODES,
0);
- if (svc_stats == NULL)
+ if (!svc_stats)
return;
- if (dir != NULL) {
+ if (dir) {
svc_debugfs_entry = ldebugfs_register(dir, root, NULL, NULL);
if (IS_ERR(svc_debugfs_entry)) {
lprocfs_free_stats(&svc_stats);
@@ -246,11 +245,11 @@
rc = ldebugfs_register_stats(svc_debugfs_entry, name, svc_stats);
if (rc < 0) {
- if (dir != NULL)
+ if (dir)
ldebugfs_remove(&svc_debugfs_entry);
lprocfs_free_stats(&svc_stats);
} else {
- if (dir != NULL)
+ if (dir)
*debugfs_root_ret = svc_debugfs_entry;
*stats_ret = svc_stats;
}
@@ -307,7 +306,8 @@
/* This sanity check is more of an insanity check; we can still
* hose a kernel by allowing the request history to grow too
- * far. */
+ * far.
+ */
bufpages = (svc->srv_buf_size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
if (val > totalram_pages / (2 * bufpages))
return -ERANGE;
@@ -456,8 +456,6 @@
static void nrs_policy_get_info_locked(struct ptlrpc_nrs_policy *policy,
struct ptlrpc_nrs_pol_info *info)
{
- LASSERT(policy != NULL);
- LASSERT(info != NULL);
assert_spin_locked(&policy->pol_nrs->nrs_lock);
memcpy(info->pi_name, policy->pol_desc->pd_name, NRS_POL_NAME_MAX);
@@ -508,7 +506,7 @@
spin_unlock(&nrs->nrs_lock);
infos = kcalloc(num_pols, sizeof(*infos), GFP_NOFS);
- if (infos == NULL) {
+ if (!infos) {
rc = -ENOMEM;
goto unlock;
}
@@ -676,7 +674,7 @@
/**
* No [reg|hp] token has been specified
*/
- if (cmd == NULL)
+ if (!cmd)
goto default_queue;
/**
@@ -733,15 +731,15 @@
struct list_head *e;
struct ptlrpc_request *req;
- if (srhi->srhi_req != NULL &&
- srhi->srhi_seq > svcpt->scp_hist_seq_culled &&
+ if (srhi->srhi_req && srhi->srhi_seq > svcpt->scp_hist_seq_culled &&
srhi->srhi_seq <= seq) {
/* If srhi_req was set previously, hasn't been culled and
* we're searching for a seq on or after it (i.e. more
* recent), search from it onwards.
* Since the service history is LRU (i.e. culled reqs will
* be near the head), we shouldn't have to do long
- * re-scans */
+ * re-scans
+ */
LASSERTF(srhi->srhi_seq == srhi->srhi_req->rq_history_seq,
"%s:%d: seek seq %llu, request seq %llu\n",
svcpt->scp_service->srv_name, svcpt->scp_cpt,
@@ -919,7 +917,8 @@
* here. The request could contain any old crap, so you
* must be just as careful as the service's request
* parser. Currently I only print stuff here I know is OK
- * to look at coz it was set up in request_in_callback()!!! */
+ * to look at because it was set up in request_in_callback()!!!
+ */
seq_printf(s, "%lld:%s:%s:x%llu:%d:%s:%lld:%lds(%+lds) ",
req->rq_history_seq, nidstr,
libcfs_id2str(req->rq_peer), req->rq_xid,
@@ -927,7 +926,7 @@
(s64)req->rq_arrival_time.tv_sec,
(long)(req->rq_sent - req->rq_arrival_time.tv_sec),
(long)(req->rq_sent - req->rq_deadline));
- if (svc->srv_ops.so_req_printer == NULL)
+ if (!svc->srv_ops.so_req_printer)
seq_putc(s, '\n');
else
svc->srv_ops.so_req_printer(s, srhi->srhi_req);
@@ -1103,7 +1102,7 @@
"stats", &svc->srv_debugfs_entry,
&svc->srv_stats);
- if (svc->srv_debugfs_entry == NULL)
+ if (IS_ERR_OR_NULL(svc->srv_debugfs_entry))
return;
ldebugfs_add_vars(svc->srv_debugfs_entry, lproc_vars, NULL);
@@ -1129,7 +1128,7 @@
int opc = opcode_offset(op);
svc_stats = req->rq_import->imp_obd->obd_svc_stats;
- if (svc_stats == NULL || opc <= 0)
+ if (!svc_stats || opc <= 0)
return;
LASSERT(opc < LUSTRE_MAX_OPCODES);
if (!(op == LDLM_ENQUEUE || op == MDS_REINT))
@@ -1166,7 +1165,7 @@
void ptlrpc_lprocfs_unregister_service(struct ptlrpc_service *svc)
{
- if (svc->srv_debugfs_entry != NULL)
+ if (!IS_ERR_OR_NULL(svc->srv_debugfs_entry))
ldebugfs_remove(&svc->srv_debugfs_entry);
if (svc->srv_stats)
@@ -1198,7 +1197,7 @@
req = ptlrpc_prep_ping(obd->u.cli.cl_import);
up_read(&obd->u.cli.cl_sem);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req->rq_send_state = LUSTRE_IMP_FULL;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
index c5d7ff5..c5bbf7b 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/niobuf.c
@@ -56,7 +56,6 @@
lnet_md_t md;
LASSERT(portal != 0);
- LASSERT(conn != NULL);
CDEBUG(D_INFO, "conn=%p id %s\n", conn, libcfs_id2str(conn->c_peer));
md.start = base;
md.length = len;
@@ -88,7 +87,8 @@
int rc2;
/* We're going to get an UNLINK event when I unlink below,
* which will complete just like any other failed send, so
- * I fall through and return success here! */
+ * I fall through and return success here!
+ */
CERROR("LNetPut(%s, %d, %lld) failed: %d\n",
libcfs_id2str(conn->c_peer), portal, xid, rc);
rc2 = LNetMDUnlink(*mdh);
@@ -130,7 +130,7 @@
LASSERT(desc->bd_md_count == 0);
LASSERT(desc->bd_md_max_brw <= PTLRPC_BULK_OPS_COUNT);
LASSERT(desc->bd_iov_count <= PTLRPC_MAX_BRW_PAGES);
- LASSERT(desc->bd_req != NULL);
+ LASSERT(desc->bd_req);
LASSERT(desc->bd_type == BULK_PUT_SINK ||
desc->bd_type == BULK_GET_SOURCE);
@@ -153,7 +153,8 @@
* using the same RDMA match bits after an error.
*
* For multi-bulk RPCs, rq_xid is the last XID needed for bulks. The
- * first bulk XID is power-of-two aligned before rq_xid. LU-1431 */
+ * first bulk XID is power-of-two aligned before rq_xid. LU-1431
+ */
xid = req->rq_xid & ~((__u64)desc->bd_md_max_brw - 1);
LASSERTF(!(desc->bd_registered &&
req->rq_send_state != LUSTRE_IMP_REPLAY) ||
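Because bd_md_max_brw is a power of two, clearing its low bits rounds rq_xid down to the match bits of the first bulk; the bulks then occupy the consecutive XIDs up to and including rq_xid. A worked example with arbitrary values:

#include <stdio.h>

int main(void)
{
	unsigned long long rq_xid = 0x1007ULL;	/* last XID of the set */
	unsigned long long max_brw = 4;		/* must be a power of two */
	unsigned long long xid = rq_xid & ~(max_brw - 1);	/* first bulk: 0x1004 */

	for (; xid <= rq_xid; xid++)
		printf("bulk match bits: %#llx\n", xid);	/* 0x1004..0x1007 */
	return 0;
}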
@@ -209,7 +210,8 @@
}
/* Set rq_xid to matchbits of the final bulk so that server can
- * infer the number of bulks that were prepared */
+ * infer the number of bulks that were prepared
+ */
req->rq_xid = --xid;
LASSERTF(desc->bd_last_xid == (req->rq_xid & PTLRPC_BULK_OPS_MASK),
"bd_last_xid = x%llu, rq_xid = x%llu\n",
@@ -260,7 +262,8 @@
/* the unlink ensures the callback happens ASAP and is the last
* one. If it fails, it must be because completion just happened,
* but we must still l_wait_event() in this case to give liblustre
- * a chance to run client_bulk_callback() */
+ * a chance to run client_bulk_callback()
+ */
mdunlink_iterate_helper(desc->bd_mds, desc->bd_md_max_brw);
if (ptlrpc_client_bulk_active(req) == 0) /* completed or */
@@ -273,14 +276,15 @@
if (async)
return 0;
- if (req->rq_set != NULL)
+ if (req->rq_set)
wq = &req->rq_set->set_waitq;
else
wq = &req->rq_reply_waitq;
for (;;) {
/* Network access will complete in finite time but the HUGE
- * timeout lets us CWARN for visibility of sluggish NALs */
+ * timeout lets us CWARN for visibility of sluggish LNDs
+ */
lwi = LWI_TIMEOUT_INTERVAL(cfs_time_seconds(LONG_UNLINK),
cfs_time_seconds(1), NULL, NULL);
rc = l_wait_event(*wq, !ptlrpc_client_bulk_active(req), &lwi);
@@ -305,13 +309,13 @@
req->rq_arrival_time.tv_sec, 1);
if (!(flags & PTLRPC_REPLY_EARLY) &&
- (req->rq_type != PTL_RPC_MSG_ERR) &&
- (req->rq_reqmsg != NULL) &&
+ (req->rq_type != PTL_RPC_MSG_ERR) && req->rq_reqmsg &&
!(lustre_msg_get_flags(req->rq_reqmsg) &
(MSG_RESENT | MSG_REPLAY |
MSG_REQ_REPLAY_DONE | MSG_LOCK_REPLAY_DONE))) {
/* early replies, errors and recovery requests don't count
- * toward our service time estimate */
+ * toward our service time estimate
+ */
int oldse = at_measured(&svcpt->scp_at_estimate, service_time);
if (oldse != 0) {
@@ -325,7 +329,8 @@
lustre_msg_set_service_time(req->rq_repmsg, service_time);
/* Report service time estimate for future client reqs, but report 0
* (to be ignored by client) if it's a error reply during recovery.
- * (bz15815) */
+ * (bz15815)
+ */
if (req->rq_type == PTL_RPC_MSG_ERR && !req->rq_export)
lustre_msg_set_timeout(req->rq_repmsg, 0);
else
@@ -360,10 +365,10 @@
* target_queue_final_reply().
*/
LASSERT(req->rq_no_reply == 0);
- LASSERT(req->rq_reqbuf != NULL);
- LASSERT(rs != NULL);
+ LASSERT(req->rq_reqbuf);
+ LASSERT(rs);
LASSERT((flags & PTLRPC_REPLY_MAYBE_DIFFICULT) || !rs->rs_difficult);
- LASSERT(req->rq_repmsg != NULL);
+ LASSERT(req->rq_repmsg);
LASSERT(req->rq_repmsg == rs->rs_msg);
LASSERT(rs->rs_cb_id.cbid_fn == reply_out_callback);
LASSERT(rs->rs_cb_id.cbid_arg == rs);
@@ -403,12 +408,12 @@
ptlrpc_at_set_reply(req, flags);
- if (req->rq_export == NULL || req->rq_export->exp_connection == NULL)
+ if (!req->rq_export || !req->rq_export->exp_connection)
conn = ptlrpc_connection_get(req->rq_peer, req->rq_self, NULL);
else
conn = ptlrpc_connection_addref(req->rq_export->exp_connection);
- if (unlikely(conn == NULL)) {
+ if (unlikely(!conn)) {
CERROR("not replying on NULL connection\n"); /* bug 9635 */
return -ENOTCONN;
}
@@ -498,12 +503,13 @@
LASSERT(request->rq_wait_ctx == 0);
/* If this is a re-transmit, we're required to have disengaged
- * cleanly from the previous attempt */
+ * cleanly from the previous attempt
+ */
LASSERT(!request->rq_receiving_reply);
LASSERT(!((lustre_msg_get_flags(request->rq_reqmsg) & MSG_REPLAY) &&
(request->rq_import->imp_state == LUSTRE_IMP_FULL)));
- if (unlikely(obd != NULL && obd->obd_fail)) {
+ if (unlikely(obd && obd->obd_fail)) {
CDEBUG(D_HA, "muting rpc for failed imp obd %s\n",
obd->obd_name);
/* this prevents us from waiting in ptlrpc_queue_wait */
@@ -535,7 +541,7 @@
goto out;
/* bulk register should be done after wrap_request() */
- if (request->rq_bulk != NULL) {
+ if (request->rq_bulk) {
rc = ptlrpc_register_bulk(request);
if (rc != 0)
goto out;
@@ -543,14 +549,15 @@
if (!noreply) {
LASSERT(request->rq_replen != 0);
- if (request->rq_repbuf == NULL) {
- LASSERT(request->rq_repdata == NULL);
- LASSERT(request->rq_repmsg == NULL);
+ if (!request->rq_repbuf) {
+ LASSERT(!request->rq_repdata);
+ LASSERT(!request->rq_repmsg);
rc = sptlrpc_cli_alloc_repbuf(request,
request->rq_replen);
if (rc) {
/* this prevents us from looping in
- * ptlrpc_queue_wait */
+ * ptlrpc_queue_wait
+ */
spin_lock(&request->rq_lock);
request->rq_err = 1;
spin_unlock(&request->rq_lock);
@@ -602,7 +609,8 @@
reply_md.eq_handle = ptlrpc_eq_h;
/* We must see the unlink callback to unset rq_reply_unlink,
- so we can't auto-unlink */
+ * so we can't auto-unlink
+ */
rc = LNetMDAttach(reply_me_h, reply_md, LNET_RETAIN,
&request->rq_reply_md_h);
if (rc != 0) {
@@ -623,7 +631,7 @@
/* add references on request for request_out_callback */
ptlrpc_request_addref(request);
- if (obd != NULL && obd->obd_svc_stats != NULL)
+ if (obd && obd->obd_svc_stats)
lprocfs_counter_add(obd->obd_svc_stats, PTLRPC_REQACTIVE_CNTR,
atomic_read(&request->rq_import->imp_inflight));
@@ -632,7 +640,8 @@
ktime_get_real_ts64(&request->rq_arrival_time);
request->rq_sent = ktime_get_real_seconds();
/* We give the server rq_timeout secs to process the req, and
- add the network latency for our local timeout. */
+ * add the network latency for our local timeout.
+ */
request->rq_deadline = request->rq_sent + request->rq_timeout +
ptlrpc_at_get_net_latency(request);
@@ -656,7 +665,8 @@
cleanup_me:
/* MEUnlink is safe; the PUT didn't even get off the ground, and
* nobody apart from the PUT's target has the right nid+XID to
- * access the reply buffer. */
+ * access the reply buffer.
+ */
rc2 = LNetMEUnlink(reply_me_h);
LASSERT(rc2 == 0);
/* UNLINKED callback called synchronously */
@@ -664,7 +674,8 @@
cleanup_bulk:
/* We do sync unlink here as there was no real transfer here so
- * the chance to have long unlink to sluggish net is smaller here. */
+ * the chance to have long unlink to sluggish net is smaller here.
+ */
ptlrpc_unregister_bulk(request, 0);
out:
if (request->rq_memalloc)
@@ -692,7 +703,8 @@
/* NB: CPT affinity service should use new LNet flag LNET_INS_LOCAL,
* which means buffer can only be attached on local CPT, and LND
- * threads can find it by grabbing a local lock */
+ * threads can find it by grabbing a local lock
+ */
rc = LNetMEAttach(service->srv_req_portal,
match_id, 0, ~0, LNET_UNLINK,
rqbd->rqbd_svcpt->scp_cpt >= 0 ?
diff --git a/drivers/staging/lustre/lustre/ptlrpc/nrs.c b/drivers/staging/lustre/lustre/ptlrpc/nrs.c
index 57acf8c..58e5d86 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/nrs.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/nrs.c
@@ -13,10 +13,6 @@
* GNU General Public License version 2 for more details. A copy is
* included in the COPYING file that accompanied this code.
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
* GPL HEADER END
*/
/*
@@ -47,9 +43,6 @@
#include "../../include/linux/libcfs/libcfs.h"
#include "ptlrpc_internal.h"
-/* XXX: This is just for liblustre. Remove the #if defined directive when the
- * "cfs_" prefix is dropped from cfs_list_head. */
-
/**
* NRS core object.
*/
@@ -57,7 +50,7 @@
static int nrs_policy_init(struct ptlrpc_nrs_policy *policy)
{
- return policy->pol_desc->pd_ops->op_policy_init != NULL ?
+ return policy->pol_desc->pd_ops->op_policy_init ?
policy->pol_desc->pd_ops->op_policy_init(policy) : 0;
}
@@ -66,7 +59,7 @@
LASSERT(policy->pol_ref == 0);
LASSERT(policy->pol_req_queued == 0);
- if (policy->pol_desc->pd_ops->op_policy_fini != NULL)
+ if (policy->pol_desc->pd_ops->op_policy_fini)
policy->pol_desc->pd_ops->op_policy_fini(policy);
}
@@ -82,7 +75,7 @@
if (policy->pol_state == NRS_POL_STATE_STOPPED)
return -ENODEV;
- return policy->pol_desc->pd_ops->op_policy_ctl != NULL ?
+ return policy->pol_desc->pd_ops->op_policy_ctl ?
policy->pol_desc->pd_ops->op_policy_ctl(policy, opc, arg) :
-ENOSYS;
}
@@ -91,7 +84,7 @@
{
struct ptlrpc_nrs *nrs = policy->pol_nrs;
- if (policy->pol_desc->pd_ops->op_policy_stop != NULL) {
+ if (policy->pol_desc->pd_ops->op_policy_stop) {
spin_unlock(&nrs->nrs_lock);
policy->pol_desc->pd_ops->op_policy_stop(policy);
@@ -154,7 +147,7 @@
{
struct ptlrpc_nrs_policy *tmp = nrs->nrs_policy_primary;
- if (tmp == NULL)
+ if (!tmp)
return;
nrs->nrs_policy_primary = NULL;
@@ -220,12 +213,12 @@
* nrs_policy_flags::PTLRPC_NRS_FL_FALLBACK flag set can
* register with NRS core.
*/
- LASSERT(nrs->nrs_policy_fallback == NULL);
+ LASSERT(!nrs->nrs_policy_fallback);
} else {
/**
* Shouldn't start primary policy if w/o fallback policy.
*/
- if (nrs->nrs_policy_fallback == NULL)
+ if (!nrs->nrs_policy_fallback)
return -EPERM;
if (policy->pol_state == NRS_POL_STATE_STARTED)
@@ -348,10 +341,10 @@
{
struct ptlrpc_nrs_policy *policy = res->res_policy;
- if (policy->pol_desc->pd_ops->op_res_put != NULL) {
+ if (policy->pol_desc->pd_ops->op_res_put) {
struct ptlrpc_nrs_resource *parent;
- for (; res != NULL; res = parent) {
+ for (; res; res = parent) {
parent = res->res_parent;
policy->pol_desc->pd_ops->op_res_put(policy, res);
}
@@ -390,12 +383,11 @@
rc = policy->pol_desc->pd_ops->op_res_get(policy, nrq, res,
&tmp, moving_req);
if (rc < 0) {
- if (res != NULL)
+ if (res)
nrs_resource_put(res);
return NULL;
}
- LASSERT(tmp != NULL);
tmp->res_parent = res;
tmp->res_policy = policy;
res = tmp;
@@ -445,7 +437,7 @@
nrs_policy_get_locked(fallback);
primary = nrs->nrs_policy_primary;
- if (primary != NULL)
+ if (primary)
nrs_policy_get_locked(primary);
spin_unlock(&nrs->nrs_lock);
@@ -454,9 +446,9 @@
* Obtain resource hierarchy references.
*/
resp[NRS_RES_FALLBACK] = nrs_resource_get(fallback, nrq, moving_req);
- LASSERT(resp[NRS_RES_FALLBACK] != NULL);
+ LASSERT(resp[NRS_RES_FALLBACK]);
- if (primary != NULL) {
+ if (primary) {
resp[NRS_RES_PRIMARY] = nrs_resource_get(primary, nrq,
moving_req);
/**
@@ -465,7 +457,7 @@
* reference on the policy as it will not be used for this
* request.
*/
- if (resp[NRS_RES_PRIMARY] == NULL)
+ if (!resp[NRS_RES_PRIMARY])
nrs_policy_put(primary);
}
}
@@ -485,7 +477,7 @@
int i;
for (i = 0; i < NRS_RES_MAX; i++) {
- if (resp[i] != NULL) {
+ if (resp[i]) {
pols[i] = resp[i]->res_policy;
nrs_resource_put(resp[i]);
resp[i] = NULL;
@@ -526,7 +518,7 @@
nrq = policy->pol_desc->pd_ops->op_req_get(policy, peek, force);
- LASSERT(ergo(nrq != NULL, nrs_request_policy(nrq) == policy));
+ LASSERT(ergo(nrq, nrs_request_policy(nrq) == policy));
return nrq;
}
@@ -552,7 +544,7 @@
* the preferred choice.
*/
for (i = NRS_RES_MAX - 1; i >= 0; i--) {
- if (nrq->nr_res_ptrs[i] == NULL)
+ if (!nrq->nr_res_ptrs[i])
continue;
nrq->nr_res_idx = i;
@@ -622,7 +614,7 @@
spin_lock(&nrs->nrs_lock);
policy = nrs_policy_find_locked(nrs, name);
- if (policy == NULL) {
+ if (!policy) {
rc = -ENOENT;
goto out;
}
@@ -644,7 +636,7 @@
break;
}
out:
- if (policy != NULL)
+ if (policy)
nrs_policy_put_locked(policy);
spin_unlock(&nrs->nrs_lock);
@@ -669,7 +661,7 @@
spin_lock(&nrs->nrs_lock);
policy = nrs_policy_find_locked(nrs, name);
- if (policy == NULL) {
+ if (!policy) {
spin_unlock(&nrs->nrs_lock);
CERROR("Can't find NRS policy %s\n", name);
@@ -702,7 +694,7 @@
nrs_policy_fini(policy);
- LASSERT(policy->pol_private == NULL);
+ LASSERT(!policy->pol_private);
kfree(policy);
return 0;
@@ -726,18 +718,16 @@
struct ptlrpc_service_part *svcpt = nrs->nrs_svcpt;
int rc;
- LASSERT(svcpt != NULL);
- LASSERT(desc->pd_ops != NULL);
- LASSERT(desc->pd_ops->op_res_get != NULL);
- LASSERT(desc->pd_ops->op_req_get != NULL);
- LASSERT(desc->pd_ops->op_req_enqueue != NULL);
- LASSERT(desc->pd_ops->op_req_dequeue != NULL);
- LASSERT(desc->pd_compat != NULL);
+ LASSERT(desc->pd_ops->op_res_get);
+ LASSERT(desc->pd_ops->op_req_get);
+ LASSERT(desc->pd_ops->op_req_enqueue);
+ LASSERT(desc->pd_ops->op_req_dequeue);
+ LASSERT(desc->pd_compat);
policy = kzalloc_node(sizeof(*policy), GFP_NOFS,
cfs_cpt_spread_node(svcpt->scp_service->srv_cptable,
svcpt->scp_cpt));
- if (policy == NULL)
+ if (!policy)
return -ENOMEM;
policy->pol_nrs = nrs;
@@ -757,7 +747,7 @@
spin_lock(&nrs->nrs_lock);
tmp = nrs_policy_find_locked(nrs, policy->pol_desc->pd_name);
- if (tmp != NULL) {
+ if (tmp) {
CERROR("NRS policy %s has been registered, can't register it for %s\n",
policy->pol_desc->pd_name,
svcpt->scp_service->srv_name);
@@ -947,14 +937,14 @@
/**
* Optionally allocate a high-priority NRS head.
*/
- if (svcpt->scp_service->srv_ops.so_hpreq_handler == NULL)
+ if (!svcpt->scp_service->srv_ops.so_hpreq_handler)
goto out;
svcpt->scp_nrs_hp =
kzalloc_node(sizeof(*svcpt->scp_nrs_hp), GFP_NOFS,
cfs_cpt_spread_node(svcpt->scp_service->srv_cptable,
svcpt->scp_cpt));
- if (svcpt->scp_nrs_hp == NULL) {
+ if (!svcpt->scp_nrs_hp) {
rc = -ENOMEM;
goto out;
}
@@ -1079,7 +1069,7 @@
}
}
- if (desc->pd_ops->op_lprocfs_fini != NULL)
+ if (desc->pd_ops->op_lprocfs_fini)
desc->pd_ops->op_lprocfs_fini(svc);
}
@@ -1107,13 +1097,12 @@
struct ptlrpc_nrs_pol_desc *desc;
int rc = 0;
- LASSERT(conf != NULL);
- LASSERT(conf->nc_ops != NULL);
- LASSERT(conf->nc_compat != NULL);
+ LASSERT(conf->nc_ops);
+ LASSERT(conf->nc_compat);
LASSERT(ergo(conf->nc_compat == nrs_policy_compat_one,
- conf->nc_compat_svc_name != NULL));
+ conf->nc_compat_svc_name));
LASSERT(ergo((conf->nc_flags & PTLRPC_NRS_FL_REG_EXTERN) != 0,
- conf->nc_owner != NULL));
+ conf->nc_owner));
conf->nc_name[NRS_POL_NAME_MAX - 1] = '\0';
@@ -1136,7 +1125,7 @@
mutex_lock(&nrs_core.nrs_mutex);
- if (nrs_policy_find_desc_locked(conf->nc_name) != NULL) {
+ if (nrs_policy_find_desc_locked(conf->nc_name)) {
CERROR("NRS: failing to register policy %s which has already been registered with NRS core!\n",
conf->nc_name);
rc = -EEXIST;
@@ -1214,7 +1203,7 @@
* No need to take a reference to other modules here, as we
* will be calling from the module's init() function.
*/
- if (desc->pd_ops->op_lprocfs_init != NULL) {
+ if (desc->pd_ops->op_lprocfs_init) {
rc = desc->pd_ops->op_lprocfs_init(svc);
if (rc != 0) {
rc2 = nrs_policy_unregister_locked(desc);
@@ -1278,7 +1267,7 @@
if (!nrs_policy_compatible(svc, desc))
continue;
- if (desc->pd_ops->op_lprocfs_init != NULL) {
+ if (desc->pd_ops->op_lprocfs_init) {
rc = desc->pd_ops->op_lprocfs_init(svc);
if (rc != 0)
goto failed;
@@ -1319,7 +1308,7 @@
if (!nrs_policy_compatible(svc, desc))
continue;
- if (desc->pd_ops->op_lprocfs_fini != NULL)
+ if (desc->pd_ops->op_lprocfs_fini)
desc->pd_ops->op_lprocfs_fini(svc);
}
@@ -1366,7 +1355,8 @@
if (req->rq_nrq.nr_initialized) {
nrs_resource_put_safe(req->rq_nrq.nr_res_ptrs);
/* no protection on bit nr_initialized because no
- * contention at this late stage */
+ * contention at this late stage
+ */
req->rq_nrq.nr_finalized = 1;
}
}
@@ -1459,7 +1449,7 @@
list_for_each_entry(policy, &nrs->nrs_policy_queued,
pol_list_queued) {
nrq = nrs_request_get(policy, peek, force);
- if (nrq != NULL) {
+ if (nrq) {
if (likely(!peek)) {
nrq->nr_started = 1;
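The NULL-comparison conversions above are purely stylistic: checkpatch.pl warns "Comparison to NULL could be written ...", and a pointer used in boolean context already tests non-NULL, so behaviour is unchanged. The LASSERT(ergo(...)) sites still read naturally after the change, since libcfs defines ergo(a, b) as (!(a) || (b)), i.e. "a implies b". A small self-contained sketch of both points; the function and names are illustrative, not from the tree:

        #include <errno.h>

        /* same shape as the libcfs ergo() macro: "a implies b" */
        #define ergo(a, b) (!(a) || (b))

        static int demo_lookup(const char *name, int have_table)
        {
                /* old spelling, now flagged by checkpatch.pl:
                 *      if (name == NULL)
                 *              return -EINVAL;
                 */
                if (!name)              /* equivalent boolean test */
                        return -EINVAL;

                /* "if a name was found, a table must exist" */
                return ergo(name[0] != '\0', have_table) ? 0 : -EINVAL;
        }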
diff --git a/drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c b/drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c
index 8e21f0c..0d8da35 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/nrs_fifo.c
@@ -13,10 +13,6 @@
* GNU General Public License version 2 for more details. A copy is
* included in the COPYING file that accompanied this code.
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
* GPL HEADER END
*/
/*
@@ -83,7 +79,7 @@
head = kzalloc_node(sizeof(*head), GFP_NOFS,
cfs_cpt_spread_node(nrs_pol2cptab(policy),
nrs_pol2cptid(policy)));
- if (head == NULL)
+ if (!head)
return -ENOMEM;
INIT_LIST_HEAD(&head->fh_list);
@@ -104,7 +100,7 @@
{
struct nrs_fifo_head *head = policy->pol_private;
- LASSERT(head != NULL);
+ LASSERT(head);
LASSERT(list_empty(&head->fh_list));
kfree(head);
@@ -169,7 +165,7 @@
list_entry(head->fh_list.next, struct ptlrpc_nrs_request,
nr_u.fifo.fr_list);
- if (likely(!peek && nrq != NULL)) {
+ if (likely(!peek && nrq)) {
struct ptlrpc_request *req = container_of(nrq,
struct ptlrpc_request,
rq_nrq);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c b/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
index f3cb518..f3ac810 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
@@ -133,7 +133,8 @@
* NOTE: this should only be used for NEW requests, and should always be
* in the form of a v2 request. If this is a connection to a v1
* target then the first buffer will be stripped because the ptlrpc
- * data is part of the lustre_msg_v1 header. b=14043 */
+ * data is part of the lustre_msg_v1 header. b=14043
+ */
int lustre_msg_size(__u32 magic, int count, __u32 *lens)
{
__u32 size[] = { sizeof(struct ptlrpc_body) };
@@ -157,7 +158,8 @@
EXPORT_SYMBOL(lustre_msg_size);
/* This is used to determine the size of a buffer that was already packed
- * and will correctly handle the different message formats. */
+ * and will correctly handle the different message formats.
+ */
int lustre_packed_msg_size(struct lustre_msg *msg)
{
switch (msg->lm_magic) {
@@ -183,7 +185,7 @@
for (i = 0; i < count; i++)
msg->lm_buflens[i] = lens[i];
- if (bufs == NULL)
+ if (!bufs)
return;
ptr = (char *)msg + lustre_msg_hdr_size_v2(count);
@@ -267,7 +269,8 @@
spin_unlock(&svcpt->scp_rep_lock);
/* If we cannot get anything for some long time, we better
- * bail out instead of waiting infinitely */
+ * bail out instead of waiting infinitely
+ */
lwi = LWI_TIMEOUT(cfs_time_seconds(10), NULL, NULL);
rc = l_wait_event(svcpt->scp_rep_waitq,
!list_empty(&svcpt->scp_rep_idle), &lwi);
@@ -306,7 +309,7 @@
struct ptlrpc_reply_state *rs;
int msg_len, rc;
- LASSERT(req->rq_reply_state == NULL);
+ LASSERT(!req->rq_reply_state);
if ((flags & LPRFL_EARLY_REPLY) == 0) {
spin_lock(&req->rq_lock);
@@ -383,7 +386,6 @@
{
int i, offset, buflen, bufcount;
- LASSERT(m != NULL);
LASSERT(n >= 0);
bufcount = m->lm_bufcount;
@@ -488,7 +490,7 @@
LASSERT(!rs->rs_difficult || rs->rs_handled);
LASSERT(!rs->rs_on_net);
LASSERT(!rs->rs_scheduled);
- LASSERT(rs->rs_export == NULL);
+ LASSERT(!rs->rs_export);
LASSERT(rs->rs_nlocks == 0);
LASSERT(list_empty(&rs->rs_exp_list));
LASSERT(list_empty(&rs->rs_obd_list));
@@ -677,7 +679,8 @@
EXPORT_SYMBOL(lustre_msg_buflen);
/* NB return the bufcount for lustre_msg_v2 format, so if message is packed
- * in V1 format, the result is one bigger. (add struct ptlrpc_body). */
+ * in V1 format, the result is one bigger (it adds a struct ptlrpc_body).
+ */
int lustre_msg_bufcount(struct lustre_msg *m)
{
switch (m->lm_magic) {
@@ -705,7 +708,7 @@
LASSERTF(0, "incorrect message magic: %08x\n", m->lm_magic);
}
- if (str == NULL) {
+ if (!str) {
CERROR("can't unpack string in msg %p buffer[%d]\n", m, index);
return NULL;
}
@@ -740,7 +743,6 @@
{
void *ptr = NULL;
- LASSERT(msg != NULL);
switch (msg->lm_magic) {
case LUSTRE_MSG_MAGIC_V2:
ptr = lustre_msg_buf_v2(msg, index, min_size);
@@ -799,7 +801,8 @@
/* no break */
default:
/* flags might be printed in debug code while message
- * uninitialized */
+ * uninitialized
+ */
return 0;
}
}
@@ -1032,7 +1035,8 @@
/* no break */
default:
/* status might be printed in debug code while message
- * uninitialized */
+ * uninitialized
+ */
return -EINVAL;
}
}
@@ -1368,7 +1372,8 @@
struct ptlrpc_body *pb;
/* Don't set jobid for ldlm ast RPCs, they've been shrunk.
- * See the comment in ptlrpc_request_pack(). */
+ * See the comment in ptlrpc_request_pack().
+ */
if (!opc || opc == LDLM_BL_CALLBACK ||
opc == LDLM_CP_CALLBACK || opc == LDLM_GL_CALLBACK)
return;
@@ -1377,7 +1382,7 @@
sizeof(struct ptlrpc_body));
LASSERTF(pb, "invalid msg %p: no ptlrpc body!\n", msg);
- if (jobid != NULL)
+ if (jobid)
memcpy(pb->pb_jobid, jobid, JOBSTATS_JOBID_SIZE);
else if (pb->pb_jobid[0] == '\0')
lustre_get_jobid(pb->pb_jobid);
@@ -1427,7 +1432,7 @@
int rc;
req = ptlrpc_request_alloc(imp, &RQF_OBD_SET_INFO);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req_capsule_set_size(&req->rq_pill, &RMF_SETINFO_KEY,
@@ -1488,7 +1493,8 @@
* clients and servers without ptlrpc_body_v2 (< 2.3)
* do not swab any fields beyond pb_jobid, as we are
* using this swab function for both ptlrpc_body
- * and ptlrpc_body_v2. */
+ * and ptlrpc_body_v2.
+ */
CLASSERT(offsetof(typeof(*b), pb_jobid) != 0);
}
EXPORT_SYMBOL(lustre_swab_ptlrpc_body);
@@ -1502,7 +1508,8 @@
__swab32s(&ocd->ocd_index);
__swab32s(&ocd->ocd_brw_size);
/* ocd_blocksize and ocd_inodespace don't need to be swabbed because
- * they are 8-byte values */
+ * they are 8-byte values
+ */
__swab16s(&ocd->ocd_grant_extent);
__swab32s(&ocd->ocd_unused);
__swab64s(&ocd->ocd_transno);
@@ -1512,7 +1519,8 @@
/* Fields after ocd_cksum_types are only accessible by the receiver
* if the corresponding flag in ocd_connect_flags is set. Accessing
* any field after ocd_maxbytes on the receiver without a valid flag
- * may result in out-of-bound memory access and kernel oops. */
+ * may result in out-of-bound memory access and kernel oops.
+ */
if (ocd->ocd_connect_flags & OBD_CONNECT_MAX_EASIZE)
__swab32s(&ocd->ocd_max_easize);
if (ocd->ocd_connect_flags & OBD_CONNECT_MAXBYTES)
@@ -1848,20 +1856,6 @@
}
EXPORT_SYMBOL(lustre_swab_fiemap);
-void lustre_swab_idx_info(struct idx_info *ii)
-{
- __swab32s(&ii->ii_magic);
- __swab32s(&ii->ii_flags);
- __swab16s(&ii->ii_count);
- __swab32s(&ii->ii_attrs);
- lustre_swab_lu_fid(&ii->ii_fid);
- __swab64s(&ii->ii_version);
- __swab64s(&ii->ii_hash_start);
- __swab64s(&ii->ii_hash_end);
- __swab16s(&ii->ii_keysize);
- __swab16s(&ii->ii_recsize);
-}
-
void lustre_swab_mdt_rec_reint (struct mdt_rec_reint *rr)
{
__swab32s(&rr->rr_opcode);
@@ -1986,7 +1980,8 @@
{
/* the lock data is a union and the first two fields are always an
* extent so it's ok to process an LDLM_EXTENT and LDLM_FLOCK lock
- * data the same way. */
+ * data the same way.
+ */
__swab64s(&d->l_extent.start);
__swab64s(&d->l_extent.end);
__swab64s(&d->l_extent.gid);
@@ -2035,16 +2030,6 @@
}
EXPORT_SYMBOL(lustre_swab_ldlm_reply);
-void lustre_swab_quota_body(struct quota_body *b)
-{
- lustre_swab_lu_fid(&b->qb_fid);
- lustre_swab_lu_fid((struct lu_fid *)&b->qb_id);
- __swab32s(&b->qb_flags);
- __swab64s(&b->qb_count);
- __swab64s(&b->qb_usage);
- __swab64s(&b->qb_slv_ver);
-}
-
/* Dump functions */
void dump_ioo(struct obd_ioobj *ioo)
{
@@ -2288,24 +2273,6 @@
}
EXPORT_SYMBOL(lustre_swab_hsm_request);
-void lustre_swab_update_buf(struct update_buf *ub)
-{
- __swab32s(&ub->ub_magic);
- __swab32s(&ub->ub_count);
-}
-EXPORT_SYMBOL(lustre_swab_update_buf);
-
-void lustre_swab_update_reply_buf(struct update_reply *ur)
-{
- int i;
-
- __swab32s(&ur->ur_version);
- __swab32s(&ur->ur_count);
- for (i = 0; i < ur->ur_count; i++)
- __swab32s(&ur->ur_lens[i]);
-}
-EXPORT_SYMBOL(lustre_swab_update_reply_buf);
-
void lustre_swab_swap_layouts(struct mdc_swap_layouts *msl)
{
__swab64s(&msl->msl_flags);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/pinger.c b/drivers/staging/lustre/lustre/ptlrpc/pinger.c
index fb2d523..44bf026 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/pinger.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/pinger.c
@@ -68,7 +68,7 @@
struct ptlrpc_request *req;
req = ptlrpc_prep_ping(obd->u.cli.cl_import);
- if (req == NULL)
+ if (!req)
return -ENOMEM;
req->rq_send_state = LUSTRE_IMP_FULL;
@@ -86,7 +86,7 @@
struct ptlrpc_request *req;
req = ptlrpc_prep_ping(imp);
- if (req == NULL) {
+ if (!req) {
CERROR("OOM trying to ping %s->%s\n",
imp->imp_obd->obd_uuid.uuid,
obd2cli_tgt(imp->imp_obd));
@@ -257,11 +257,12 @@
/* Wait until the next ping time, or until we're stopped. */
time_to_next_wake = pinger_check_timeout(this_ping);
/* The ping sent by ptlrpc_send_rpc may get sent out
- say .01 second after this.
- ptlrpc_pinger_sending_on_import will then set the
- next ping time to next_ping + .01 sec, which means
- we will SKIP the next ping at next_ping, and the
- ping will get sent 2 timeouts from now! Beware. */
+ * say .01 second after this.
+ * ptlrpc_pinger_sending_on_import will then set the
+ * next ping time to next_ping + .01 sec, which means
+ * we will SKIP the next ping at next_ping, and the
+ * ping will get sent 2 timeouts from now! Beware.
+ */
CDEBUG(D_INFO, "next wakeup in " CFS_DURATION_T " (%ld)\n",
time_to_next_wake,
cfs_time_add(this_ping,
@@ -293,6 +294,7 @@
int ptlrpc_start_pinger(void)
{
struct l_wait_info lwi = { 0 };
+ struct task_struct *task;
int rc;
if (!thread_is_init(&pinger_thread) &&
@@ -303,10 +305,11 @@
strcpy(pinger_thread.t_name, "ll_ping");
- rc = PTR_ERR(kthread_run(ptlrpc_pinger_main, &pinger_thread,
- "%s", pinger_thread.t_name));
- if (IS_ERR_VALUE(rc)) {
- CERROR("cannot start thread: %d\n", rc);
+ task = kthread_run(ptlrpc_pinger_main, &pinger_thread,
+ pinger_thread.t_name);
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
+ CERROR("cannot start pinger thread: rc = %d\n", rc);
return rc;
}
l_wait_event(pinger_thread.t_ctl_waitq,
@@ -489,7 +492,6 @@
break;
}
}
- LASSERTF(ti != NULL, "ti is NULL !\n");
if (list_empty(&ti->ti_obd_list)) {
list_del(&ti->ti_chain);
kfree(ti);
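The kthread_run() conversion just above deserves a word: kthread_run() returns a struct task_struct pointer on success or an ERR_PTR()-encoded errno on failure, never NULL, so the old idiom of feeding the result straight through PTR_ERR() into an int both discarded the task pointer and could misclassify a valid pointer once truncated on 64-bit. Testing IS_ERR() on the pointer itself is exact, and rc is decoded only on the failure path. A minimal standalone sketch of the new shape; the thread function, argument and name below are placeholders:

        #include <linux/err.h>
        #include <linux/kthread.h>

        static int start_demo_thread(int (*fn)(void *), void *data)
        {
                struct task_struct *task;

                task = kthread_run(fn, data, "demo_thread");
                if (IS_ERR(task))               /* kthread_run() never returns NULL */
                        return PTR_ERR(task);   /* decode the -errno only on failure */

                return 0;
        }

The reply-handler and service threads in service.c below get the same treatment.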
diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h b/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
index 8f67e05..6ca26c9 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
+++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpc_internal.h
@@ -101,8 +101,6 @@
* registration/unregistration, and NRS core lprocfs operations.
*/
struct mutex nrs_mutex;
- /* XXX: This is just for liblustre. Remove the #if defined directive
- * when the * "cfs_" prefix is dropped from cfs_list_head. */
/**
* List of all policy descriptors registered with NRS core; protected
* by nrs_core::nrs_mutex.
diff --git a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
index 60fb0ce..0c36663 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c
@@ -163,8 +163,6 @@
{
struct ptlrpc_request_set *rq_set = req->rq_set;
- LASSERT(rq_set != NULL);
-
wake_up(&rq_set->set_waitq);
}
EXPORT_SYMBOL(ptlrpcd_wake);
@@ -176,7 +174,7 @@
int cpt;
int idx;
- if (req != NULL && req->rq_send_state != LUSTRE_IMP_FULL)
+ if (req && req->rq_send_state != LUSTRE_IMP_FULL)
return &ptlrpcd_rcv;
cpt = cfs_cpt_current(cfs_cpt_table, 1);
@@ -240,10 +238,11 @@
req->rq_invalid_rqset = 0;
spin_unlock(&req->rq_lock);
- l_wait_event(req->rq_set_waitq, (req->rq_set == NULL), &lwi);
+ l_wait_event(req->rq_set_waitq, !req->rq_set, &lwi);
} else if (req->rq_set) {
/* If we have a valid "rq_set", just reuse it to avoid double
- * linked. */
+ * linking.
+ */
LASSERT(req->rq_phase == RQ_PHASE_NEW);
LASSERT(req->rq_send_state == LUSTRE_IMP_REPLAY);
@@ -321,7 +320,8 @@
rc |= ptlrpc_check_set(env, set);
/* NB: ptlrpc_check_set has already moved completed request at the
- * head of seq::set_requests */
+ * head of seq::set_requests
+ */
list_for_each_safe(pos, tmp, &set->set_requests) {
req = list_entry(pos, struct ptlrpc_request, rq_set_chain);
if (req->rq_phase != RQ_PHASE_COMPLETE)
@@ -339,7 +339,8 @@
rc = atomic_read(&set->set_new_count);
/* If we have nothing to do, check whether we can take some
- * work from our partner threads. */
+ * work from our partner threads.
+ */
if (rc == 0 && pc->pc_npartners > 0) {
struct ptlrpcd_ctl *partner;
struct ptlrpc_request_set *ps;
@@ -349,12 +350,12 @@
partner = pc->pc_partners[pc->pc_cursor++];
if (pc->pc_cursor >= pc->pc_npartners)
pc->pc_cursor = 0;
- if (partner == NULL)
+ if (!partner)
continue;
spin_lock(&partner->pc_lock);
ps = partner->pc_set;
- if (ps == NULL) {
+ if (!ps) {
spin_unlock(&partner->pc_lock);
continue;
}
@@ -580,7 +581,7 @@
return 0;
out_set:
- if (pc->pc_set != NULL) {
+ if (pc->pc_set) {
struct ptlrpc_request_set *set = pc->pc_set;
spin_lock(&pc->pc_lock);
@@ -631,7 +632,7 @@
out:
if (pc->pc_npartners > 0) {
- LASSERT(pc->pc_partners != NULL);
+ LASSERT(pc->pc_partners);
kfree(pc->pc_partners);
pc->pc_partners = NULL;
@@ -645,7 +646,7 @@
int i;
int j;
- if (ptlrpcds != NULL) {
+ if (ptlrpcds) {
for (i = 0; i < ptlrpcds_num; i++) {
if (!ptlrpcds[i])
break;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/recover.c b/drivers/staging/lustre/lustre/ptlrpc/recover.c
index db6626c..e5d344d 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/recover.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/recover.c
@@ -114,7 +114,8 @@
if (req->rq_transno > last_transno) {
/* Since the imp_committed_list is immutable before
* all of it's requests being replayed, it's safe to
- * use a cursor to accelerate the search */
+ * use a cursor to accelerate the search
+ */
imp->imp_replay_cursor = imp->imp_replay_cursor->next;
while (imp->imp_replay_cursor !=
@@ -137,8 +138,9 @@
}
/* All the requests in committed list have been replayed, let's replay
- * the imp_replay_list */
- if (req == NULL) {
+ * the imp_replay_list
+ */
+ if (!req) {
list_for_each_safe(tmp, pos, &imp->imp_replay_list) {
req = list_entry(tmp, struct ptlrpc_request,
rq_replay_list);
@@ -152,15 +154,16 @@
/* If need to resend the last sent transno (because a reconnect
* has occurred), then stop on the matching req and send it again.
* If, however, the last sent transno has been committed then we
- * continue replay from the next request. */
- if (req != NULL && imp->imp_resend_replay)
+ * continue replay from the next request.
+ */
+ if (req && imp->imp_resend_replay)
lustre_msg_add_flags(req->rq_reqmsg, MSG_RESENT);
spin_lock(&imp->imp_lock);
imp->imp_resend_replay = 0;
spin_unlock(&imp->imp_lock);
- if (req != NULL) {
+ if (req) {
rc = ptlrpc_replay_req(req);
if (rc) {
CERROR("recovery replay error %d for req %llu\n",
@@ -249,7 +252,8 @@
}
/* Wait for recovery to complete and resend. If evicted, then
- this request will be errored out later.*/
+ * this request will be errored out later.
+ */
spin_lock(&failed_req->rq_lock);
if (!failed_req->rq_no_resend)
failed_req->rq_resend = 1;
@@ -260,7 +264,7 @@
* Administratively active/deactive a client.
* This should only be called by the ioctl interface, currently
* - the lctl deactivate and activate commands
- * - echo 0/1 >> /proc/osc/XXX/active
+ * - echo 0/1 >> /sys/fs/lustre/osc/XXX/active
* - client umount -f (ll_umount_begin)
*/
int ptlrpc_set_import_active(struct obd_import *imp, int active)
@@ -271,13 +275,15 @@
LASSERT(obd);
/* When deactivating, mark import invalid, and abort in-flight
- * requests. */
+ * requests.
+ */
if (!active) {
LCONSOLE_WARN("setting import %s INACTIVE by administrator request\n",
obd2cli_tgt(imp->imp_obd));
/* set before invalidate to avoid messages about imp_inval
- * set without imp_deactive in ptlrpc_import_delay_req */
+ * set without imp_deactive in ptlrpc_import_delay_req
+ */
spin_lock(&imp->imp_lock);
imp->imp_deactive = 1;
spin_unlock(&imp->imp_lock);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec.c b/drivers/staging/lustre/lustre/ptlrpc/sec.c
index 39f5261..85f8f7c 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec.c
@@ -94,7 +94,7 @@
LASSERT(number < SPTLRPC_POLICY_MAX);
write_lock(&policy_lock);
- if (unlikely(policies[number] == NULL)) {
+ if (unlikely(!policies[number])) {
write_unlock(&policy_lock);
CERROR("%s: already unregistered\n", policy->sp_name);
return -EINVAL;
@@ -126,11 +126,11 @@
policy = policies[number];
if (policy && !try_module_get(policy->sp_owner))
policy = NULL;
- if (policy == NULL)
+ if (!policy)
flag = atomic_read(&loaded);
read_unlock(&policy_lock);
- if (policy != NULL || flag != 0 ||
+ if (policy || flag != 0 ||
number != SPTLRPC_POLICY_GSS)
break;
@@ -327,7 +327,7 @@
}
*sec = sptlrpc_import_sec_ref(imp);
- if (*sec == NULL) {
+ if (!*sec) {
CERROR("import %p (%s) with no sec\n",
imp, ptlrpc_import_state_name(imp->imp_state));
return -EACCES;
@@ -429,7 +429,7 @@
reqmsg_size = req->rq_reqlen;
if (reqmsg_size != 0) {
reqmsg = libcfs_kvzalloc(reqmsg_size, GFP_NOFS);
- if (reqmsg == NULL)
+ if (!reqmsg)
return -ENOMEM;
memcpy(reqmsg, req->rq_reqmsg, reqmsg_size);
}
@@ -445,7 +445,8 @@
/* alloc new request buffer
* we don't need to alloc reply buffer here, leave it to the
- * rest procedure of ptlrpc */
+ * rest of the ptlrpc processing
+ */
if (reqmsg_size != 0) {
rc = sptlrpc_cli_alloc_reqbuf(req, reqmsg_size);
if (!rc) {
@@ -798,7 +799,8 @@
spin_unlock(&sec->ps_lock);
/* force SVC_NULL for context initiation rpc, SVC_INTG for context
- * destruction rpc */
+ * destruction rpc
+ */
if (unlikely(req->rq_ctx_init))
flvr_set_svc(&req->rq_flvr.sf_rpc, SPTLRPC_SVC_NULL);
else if (unlikely(req->rq_ctx_fini))
@@ -938,7 +940,7 @@
LASSERT(ctx->cc_sec);
LASSERT(req->rq_repbuf);
LASSERT(req->rq_repdata);
- LASSERT(req->rq_repmsg == NULL);
+ LASSERT(!req->rq_repmsg);
req->rq_rep_swab_mask = 0;
@@ -1000,8 +1002,8 @@
int sptlrpc_cli_unwrap_reply(struct ptlrpc_request *req)
{
LASSERT(req->rq_repbuf);
- LASSERT(req->rq_repdata == NULL);
- LASSERT(req->rq_repmsg == NULL);
+ LASSERT(!req->rq_repdata);
+ LASSERT(!req->rq_repmsg);
LASSERT(req->rq_reply_off + req->rq_nob_received <= req->rq_repbuf_len);
if (req->rq_reply_off == 0 &&
@@ -1046,13 +1048,13 @@
int rc;
early_req = ptlrpc_request_cache_alloc(GFP_NOFS);
- if (early_req == NULL)
+ if (!early_req)
return -ENOMEM;
early_size = req->rq_nob_received;
early_bufsz = size_roundup_power2(early_size);
early_buf = libcfs_kvzalloc(early_bufsz, GFP_NOFS);
- if (early_buf == NULL) {
+ if (!early_buf) {
rc = -ENOMEM;
goto err_req;
}
@@ -1067,8 +1069,8 @@
}
LASSERT(req->rq_repbuf);
- LASSERT(req->rq_repdata == NULL);
- LASSERT(req->rq_repmsg == NULL);
+ LASSERT(!req->rq_repdata);
+ LASSERT(!req->rq_repmsg);
if (req->rq_reply_off != 0) {
CERROR("early reply with offset %u\n", req->rq_reply_off);
@@ -1354,12 +1356,12 @@
might_sleep();
- if (imp == NULL)
+ if (!imp)
return 0;
conn = imp->imp_connection;
- if (svc_ctx == NULL) {
+ if (!svc_ctx) {
struct client_obd *cliobd = &imp->imp_obd->u.cli;
/*
* normal import, determine flavor from rule set, except
@@ -1447,11 +1449,11 @@
{
struct ptlrpc_sec *sec;
- if (imp == NULL)
+ if (!imp)
return;
sec = sptlrpc_import_sec_ref(imp);
- if (sec == NULL)
+ if (!sec)
return;
sec_cop_flush_ctx_cache(sec, uid, grace, force);
@@ -1484,7 +1486,7 @@
LASSERT(ctx);
LASSERT(ctx->cc_sec);
LASSERT(ctx->cc_sec->ps_policy);
- LASSERT(req->rq_reqmsg == NULL);
+ LASSERT(!req->rq_reqmsg);
LASSERT_ATOMIC_POS(&ctx->cc_refcount);
policy = ctx->cc_sec->ps_policy;
@@ -1515,7 +1517,7 @@
LASSERT(ctx->cc_sec->ps_policy);
LASSERT_ATOMIC_POS(&ctx->cc_refcount);
- if (req->rq_reqbuf == NULL && req->rq_clrbuf == NULL)
+ if (!req->rq_reqbuf && !req->rq_clrbuf)
return;
policy = ctx->cc_sec->ps_policy;
@@ -1632,7 +1634,7 @@
LASSERT(ctx->cc_sec->ps_policy);
LASSERT_ATOMIC_POS(&ctx->cc_refcount);
- if (req->rq_repbuf == NULL)
+ if (!req->rq_repbuf)
return;
LASSERT(req->rq_repbuf_len);
@@ -1684,12 +1686,13 @@
{
struct sptlrpc_flavor flavor;
- if (exp == NULL)
+ if (!exp)
return 0;
/* client side export has no imp_reverse, skip
- * FIXME maybe we should check flavor this as well??? */
- if (exp->exp_imp_reverse == NULL)
+ * FIXME: maybe we should check the flavor as well?
+ */
+ if (!exp->exp_imp_reverse)
return 0;
/* don't care about ctx fini rpc */
@@ -1702,11 +1705,13 @@
* the first req with the new flavor, then treat it as current flavor,
* adapt reverse sec according to it.
* note the first rpc with new flavor might not be with root ctx, in
- * which case delay the sec_adapt by leaving exp_flvr_adapt == 1. */
+ * which case delay the sec_adapt by leaving exp_flvr_adapt == 1.
+ */
if (unlikely(exp->exp_flvr_changed) &&
flavor_allowed(&exp->exp_flvr_old[1], req)) {
/* make the new flavor as "current", and old ones as
- * about-to-expire */
+ * about-to-expire
+ */
CDEBUG(D_SEC, "exp %p: just changed: %x->%x\n", exp,
exp->exp_flvr.sf_rpc, exp->exp_flvr_old[1].sf_rpc);
flavor = exp->exp_flvr_old[1];
@@ -1742,10 +1747,12 @@
}
/* if it equals to the current flavor, we accept it, but need to
- * dealing with reverse sec/ctx */
+ * deal with reverse sec/ctx
+ */
if (likely(flavor_allowed(&exp->exp_flvr, req))) {
/* most cases should return here, we only interested in
- * gss root ctx init */
+ * gss root ctx init
+ */
if (!req->rq_auth_gss || !req->rq_ctx_init ||
(!req->rq_auth_usr_root && !req->rq_auth_usr_mdt &&
!req->rq_auth_usr_ost)) {
@@ -1755,7 +1762,8 @@
/* if flavor just changed, we should not proceed, just leave
* it and current flavor will be discovered and replaced
- * shortly, and let _this_ rpc pass through */
+ * shortly, and let _this_ rpc pass through
+ */
if (exp->exp_flvr_changed) {
LASSERT(exp->exp_flvr_adapt);
spin_unlock(&exp->exp_lock);
@@ -1809,7 +1817,8 @@
}
/* now it doesn't match the current flavor, the only chance we can
- * accept it is match the old flavors which is not expired. */
+ * accept it is to match an old flavor which has not expired.
+ */
if (exp->exp_flvr_changed == 0 && exp->exp_flvr_expire[1]) {
if (exp->exp_flvr_expire[1] >= ktime_get_real_seconds()) {
if (flavor_allowed(&exp->exp_flvr_old[1], req)) {
@@ -1915,9 +1924,9 @@
int rc;
LASSERT(msg);
- LASSERT(req->rq_reqmsg == NULL);
- LASSERT(req->rq_repmsg == NULL);
- LASSERT(req->rq_svc_ctx == NULL);
+ LASSERT(!req->rq_reqmsg);
+ LASSERT(!req->rq_repmsg);
+ LASSERT(!req->rq_svc_ctx);
req->rq_req_swab_mask = 0;
@@ -1994,7 +2003,7 @@
/* failed alloc, try emergency pool */
rs = lustre_get_emerg_rs(svcpt);
- if (rs == NULL)
+ if (!rs)
return -ENOMEM;
req->rq_reply_state = rs;
@@ -2059,7 +2068,7 @@
{
struct ptlrpc_svc_ctx *ctx = req->rq_svc_ctx;
- if (ctx != NULL)
+ if (ctx)
atomic_inc(&ctx->sc_refcount);
}
@@ -2067,7 +2076,7 @@
{
struct ptlrpc_svc_ctx *ctx = req->rq_svc_ctx;
- if (ctx == NULL)
+ if (!ctx)
return;
LASSERT_ATOMIC_POS(&ctx->sc_refcount);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c b/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
index 22621c7..72d5b9b 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c
@@ -120,7 +120,7 @@
} page_pools;
/*
- * /proc/fs/lustre/sptlrpc/encrypt_page_pools
+ * /sys/kernel/debug/lustre/sptlrpc/encrypt_page_pools
*/
int sptlrpc_proc_enc_pool_seq_show(struct seq_file *m, void *v)
{
@@ -195,7 +195,7 @@
while (npages--) {
LASSERT(page_pools.epp_pools[p_idx]);
- LASSERT(page_pools.epp_pools[p_idx][g_idx] != NULL);
+ LASSERT(page_pools.epp_pools[p_idx][g_idx]);
__free_page(page_pools.epp_pools[p_idx][g_idx]);
page_pools.epp_pools[p_idx][g_idx] = NULL;
@@ -316,7 +316,7 @@
int p_idx, g_idx;
int i;
- if (desc->bd_enc_iov == NULL)
+ if (!desc->bd_enc_iov)
return;
LASSERT(desc->bd_iov_count > 0);
@@ -331,9 +331,9 @@
LASSERT(page_pools.epp_pools[p_idx]);
for (i = 0; i < desc->bd_iov_count; i++) {
- LASSERT(desc->bd_enc_iov[i].kiov_page != NULL);
+ LASSERT(desc->bd_enc_iov[i].kiov_page);
LASSERT(g_idx != 0 || page_pools.epp_pools[p_idx]);
- LASSERT(page_pools.epp_pools[p_idx][g_idx] == NULL);
+ LASSERT(!page_pools.epp_pools[p_idx][g_idx]);
page_pools.epp_pools[p_idx][g_idx] =
desc->bd_enc_iov[i].kiov_page;
@@ -412,7 +412,7 @@
page_pools.epp_st_max_wait = 0;
enc_pools_alloc();
- if (page_pools.epp_pools == NULL)
+ if (!page_pools.epp_pools)
return -ENOMEM;
register_shrinker(&pools_shrinker);
@@ -475,7 +475,7 @@
int size = msg->lm_buflens[offset];
bsd = lustre_msg_buf(msg, offset, sizeof(*bsd));
- if (bsd == NULL) {
+ if (!bsd) {
CERROR("Invalid bulk sec desc: size %d\n", size);
return -EINVAL;
}
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_config.c b/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
index 4b0b81c..93b91bf 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_config.c
@@ -78,7 +78,7 @@
memset(flvr, 0, sizeof(*flvr));
- if (str == NULL || str[0] == '\0') {
+ if (!str || str[0] == '\0') {
flvr->sf_rpc = SPTLRPC_FLVR_INVALID;
return 0;
}
@@ -103,7 +103,7 @@
* format: plain-hash:<hash_alg>
*/
alg = strchr(bulk, ':');
- if (alg == NULL)
+ if (!alg)
goto err_out;
*alg++ = '\0';
@@ -166,7 +166,7 @@
sptlrpc_rule_init(rule);
flavor = strchr(param, '=');
- if (flavor == NULL) {
+ if (!flavor) {
CERROR("invalid param, no '='\n");
return -EINVAL;
}
@@ -216,7 +216,7 @@
static void sptlrpc_rule_set_free(struct sptlrpc_rule_set *rset)
{
LASSERT(rset->srs_nslot ||
- (rset->srs_nrule == 0 && rset->srs_rules == NULL));
+ (rset->srs_nrule == 0 && !rset->srs_rules));
if (rset->srs_nslot) {
kfree(rset->srs_rules);
@@ -241,7 +241,7 @@
/* better use realloc() if available */
rules = kcalloc(nslot, sizeof(*rset->srs_rules), GFP_NOFS);
- if (rules == NULL)
+ if (!rules)
return -ENOMEM;
if (rset->srs_nrule) {
@@ -450,7 +450,7 @@
}
/* if we didn't find the pattern, treat the whole string as fsname */
- if (ptr == NULL)
+ if (!ptr)
len = strlen(tgt);
else
len = ptr - tgt;
@@ -579,13 +579,13 @@
int rc;
target = lustre_cfg_string(lcfg, 1);
- if (target == NULL) {
+ if (!target) {
CERROR("missing target name\n");
return -EINVAL;
}
param = lustre_cfg_string(lcfg, 2);
- if (param == NULL) {
+ if (!param) {
CERROR("missing parameter\n");
return -EINVAL;
}
@@ -603,12 +603,12 @@
if (rc)
return -EINVAL;
- if (conf == NULL) {
+ if (!conf) {
target2fsname(target, fsname, sizeof(fsname));
mutex_lock(&sptlrpc_conf_lock);
conf = sptlrpc_conf_get(fsname, 0);
- if (conf == NULL) {
+ if (!conf) {
CERROR("can't find conf\n");
rc = -ENOMEM;
} else {
@@ -638,7 +638,7 @@
int len;
ptr = strrchr(logname, '-');
- if (ptr == NULL || strcmp(ptr, "-sptlrpc")) {
+ if (!ptr || strcmp(ptr, "-sptlrpc")) {
CERROR("%s is not a sptlrpc config log\n", logname);
return -EINVAL;
}
@@ -772,7 +772,7 @@
mutex_lock(&sptlrpc_conf_lock);
conf = sptlrpc_conf_get(name, 0);
- if (conf == NULL)
+ if (!conf)
goto out;
/* convert uuid name (supposed end with _UUID) to target name */
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_gc.c b/drivers/staging/lustre/lustre/ptlrpc/sec_gc.c
index 6e58d5f..d5afc4f 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_gc.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_gc.c
@@ -166,11 +166,13 @@
* is not optimal. we perhaps want to use balanced binary tree
* to trace each sec as order of expiry time.
* another issue here is we wakeup as fixed interval instead of
- * according to each sec's expiry time */
+ * according to each sec's expiry time
+ */
mutex_lock(&sec_gc_mutex);
list_for_each_entry(sec, &sec_gc_list, ps_gc_list) {
/* if someone is waiting to be deleted, let it
- * proceed as soon as possible. */
+ * proceed as soon as possible.
+ */
if (atomic_read(&sec_gc_wait_del)) {
CDEBUG(D_SEC, "deletion pending, start over\n");
mutex_unlock(&sec_gc_mutex);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_lproc.c b/drivers/staging/lustre/lustre/ptlrpc/sec_lproc.c
index bda9a77..e610a8d 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_lproc.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_lproc.c
@@ -82,7 +82,7 @@
if (cli->cl_import)
sec = sptlrpc_import_sec_ref(cli->cl_import);
- if (sec == NULL)
+ if (!sec)
goto out;
sec_flags2str(sec->ps_flvr.sf_flags, str, sizeof(str));
@@ -121,7 +121,7 @@
if (cli->cl_import)
sec = sptlrpc_import_sec_ref(cli->cl_import);
- if (sec == NULL)
+ if (!sec)
goto out;
if (sec->ps_policy->sp_cops->display)
@@ -178,7 +178,7 @@
{
int rc;
- LASSERT(sptlrpc_debugfs_dir == NULL);
+ LASSERT(!sptlrpc_debugfs_dir);
sptlrpc_debugfs_dir = ldebugfs_register("sptlrpc", debugfs_lustre_root,
sptlrpc_lprocfs_vars, NULL);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_null.c b/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
index ebfa609..40e5349 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_null.c
@@ -250,7 +250,7 @@
alloc_size = size_roundup_power2(newmsg_size);
newbuf = libcfs_kvzalloc(alloc_size, GFP_NOFS);
- if (newbuf == NULL)
+ if (!newbuf)
return -ENOMEM;
/* Must lock this, so that otherwise unprotected change of
@@ -258,7 +258,8 @@
* imp_replay_list traversing threads. See LU-3333
* This is a bandaid at best, we really need to deal with this
* in request enlarging code before unpacking that's already
- * there */
+ * there
+ */
if (req->rq_import)
spin_lock(&req->rq_import->imp_lock);
memcpy(newbuf, req->rq_reqbuf, req->rq_reqlen);
@@ -319,7 +320,7 @@
LASSERT(rs->rs_size >= rs_size);
} else {
rs = libcfs_kvzalloc(rs_size, GFP_NOFS);
- if (rs == NULL)
+ if (!rs)
return -ENOMEM;
rs->rs_size = rs_size;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c b/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
index 905a414..6276bf5 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/sec_plain.c
@@ -104,7 +104,7 @@
return -EPROTO;
bsd = lustre_msg_buf(msg, PLAIN_PACK_BULK_OFF, PLAIN_BSD_SIZE);
- if (bsd == NULL) {
+ if (!bsd) {
CERROR("bulk sec desc has short size %d\n",
lustre_msg_buflen(msg, PLAIN_PACK_BULK_OFF));
return -EPROTO;
@@ -227,7 +227,7 @@
swabbed = ptlrpc_rep_need_swab(req);
phdr = lustre_msg_buf(msg, PLAIN_PACK_HDR_OFF, sizeof(*phdr));
- if (phdr == NULL) {
+ if (!phdr) {
CERROR("missing plain header\n");
return -EPROTO;
}
@@ -264,7 +264,8 @@
}
} else {
/* whether we sent with bulk or not, we expect the same
- * in reply, except for early reply */
+ * in reply, except for early reply
+ */
if (!req->rq_early &&
!equi(req->rq_pack_bulk == 1,
phdr->ph_flags & PLAIN_FL_BULK)) {
@@ -419,7 +420,7 @@
LASSERT(sec->ps_import);
LASSERT(atomic_read(&sec->ps_refcount) == 0);
LASSERT(atomic_read(&sec->ps_nctx) == 0);
- LASSERT(plsec->pls_ctx == NULL);
+ LASSERT(!plsec->pls_ctx);
class_import_put(sec->ps_import);
@@ -468,7 +469,7 @@
/* install ctx immediately if this is a reverse sec */
if (svc_ctx) {
ctx = plain_sec_install_ctx(plsec);
- if (ctx == NULL) {
+ if (!ctx) {
plain_destroy_sec(sec);
return NULL;
}
@@ -492,7 +493,7 @@
atomic_inc(&ctx->cc_refcount);
read_unlock(&plsec->pls_lock);
- if (unlikely(ctx == NULL))
+ if (unlikely(!ctx))
ctx = plain_sec_install_ctx(plsec);
return ctx;
@@ -665,7 +666,7 @@
newbuf_size = size_roundup_power2(newbuf_size);
newbuf = libcfs_kvzalloc(newbuf_size, GFP_NOFS);
- if (newbuf == NULL)
+ if (!newbuf)
return -ENOMEM;
/* Must lock this, so that otherwise unprotected change of
@@ -673,7 +674,8 @@
* imp_replay_list traversing threads. See LU-3333
* This is a bandaid at best, we really need to deal with this
* in request enlarging code before unpacking that's already
- * there */
+ * there
+ */
if (req->rq_import)
spin_lock(&req->rq_import->imp_lock);
@@ -732,7 +734,7 @@
swabbed = ptlrpc_req_need_swab(req);
phdr = lustre_msg_buf(msg, PLAIN_PACK_HDR_OFF, sizeof(*phdr));
- if (phdr == NULL) {
+ if (!phdr) {
CERROR("missing plain header\n");
return -EPROTO;
}
@@ -801,7 +803,7 @@
LASSERT(rs->rs_size >= rs_size);
} else {
rs = libcfs_kvzalloc(rs_size, GFP_NOFS);
- if (rs == NULL)
+ if (!rs)
return -ENOMEM;
rs->rs_size = rs_size;
diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
index 8598300..6f71ecc 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/service.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
@@ -77,7 +77,7 @@
rqbd = kzalloc_node(sizeof(*rqbd), GFP_NOFS,
cfs_cpt_spread_node(svc->srv_cptable,
svcpt->scp_cpt));
- if (rqbd == NULL)
+ if (!rqbd)
return NULL;
rqbd->rqbd_svcpt = svcpt;
@@ -89,7 +89,7 @@
svcpt->scp_cpt,
svc->srv_buf_size,
GFP_KERNEL);
- if (rqbd->rqbd_buffer == NULL) {
+ if (!rqbd->rqbd_buffer) {
kfree(rqbd);
return NULL;
}
@@ -144,13 +144,14 @@
for (i = 0; i < svc->srv_nbuf_per_group; i++) {
/* NB: another thread might have recycled enough rqbds, we
- * need to make sure it wouldn't over-allocate, see LU-1212. */
+ * need to make sure it wouldn't over-allocate, see LU-1212.
+ */
if (svcpt->scp_nrqbds_posted >= svc->srv_nbuf_per_group)
break;
rqbd = ptlrpc_alloc_rqbd(svcpt);
- if (rqbd == NULL) {
+ if (!rqbd) {
CERROR("%s: Can't allocate request buffer\n",
svc->srv_name);
rc = -ENOMEM;
@@ -322,7 +323,8 @@
list_add_tail(&rqbd->rqbd_list, &svcpt->scp_rqbd_idle);
/* Don't complain if no request buffers are posted right now; LNET
- * won't drop requests because we set the portal lazy! */
+ * won't drop requests because we set the portal lazy!
+ */
spin_unlock(&svcpt->scp_lock);
@@ -363,13 +365,15 @@
init = max_t(int, init, tc->tc_nthrs_init);
/* NB: please see comments in lustre_lnet.h for definition
- * details of these members */
+ * details of these members
+ */
LASSERT(tc->tc_nthrs_max != 0);
if (tc->tc_nthrs_user != 0) {
/* In case there is a reason to test a service with many
* threads, we give a less strict check here, it can
- * be up to 8 * nthrs_max */
+ * be up to 8 * nthrs_max
+ */
total = min(tc->tc_nthrs_max * 8, tc->tc_nthrs_user);
nthrs = total / svc->srv_ncpts;
init = max(init, nthrs);
@@ -379,7 +383,8 @@
total = tc->tc_nthrs_max;
if (tc->tc_nthrs_base == 0) {
/* don't care about base threads number per partition,
- * this is most for non-affinity service */
+ * this is mostly for non-affinity services
+ */
nthrs = total / svc->srv_ncpts;
goto out;
}
@@ -390,7 +395,8 @@
/* NB: Increase the base number if it's single partition
* and total number of cores/HTs is larger or equal to 4.
- * result will always < 2 * nthrs_base */
+ * result will always be < 2 * nthrs_base
+ */
weight = cfs_cpt_weight(svc->srv_cptable, CFS_CPT_ANY);
for (i = 1; (weight >> (i + 1)) != 0 && /* >= 4 cores/HTs */
(tc->tc_nthrs_base >> i) != 0; i++)
@@ -490,7 +496,7 @@
array->paa_reqs_array =
kzalloc_node(sizeof(struct list_head) * size, GFP_NOFS,
cfs_cpt_spread_node(svc->srv_cptable, cpt));
- if (array->paa_reqs_array == NULL)
+ if (!array->paa_reqs_array)
return -ENOMEM;
for (index = 0; index < size; index++)
@@ -499,14 +505,15 @@
array->paa_reqs_count =
kzalloc_node(sizeof(__u32) * size, GFP_NOFS,
cfs_cpt_spread_node(svc->srv_cptable, cpt));
- if (array->paa_reqs_count == NULL)
+ if (!array->paa_reqs_count)
goto free_reqs_array;
setup_timer(&svcpt->scp_at_timer, ptlrpc_at_timer,
(unsigned long)svcpt);
/* At SOW, service time should be quick; 10s seems generous. If client
- * timeout is less than this, we'll be sending an early reply. */
+ * timeout is less than this, we'll be sending an early reply.
+ */
at_init(&svcpt->scp_at_estimate, 10, 0);
/* assign this before call ptlrpc_grow_req_bufs */
@@ -514,7 +521,8 @@
/* Now allocate the request buffers, but don't post them now */
rc = ptlrpc_grow_req_bufs(svcpt, 0);
/* We shouldn't be under memory pressure at startup, so
- * fail if we can't allocate all our buffers at this time. */
+ * fail if we can't allocate all our buffers at this time.
+ */
if (rc != 0)
goto free_reqs_count;
@@ -556,14 +564,14 @@
LASSERT(conf->psc_thr.tc_ctx_tags != 0);
cptable = cconf->cc_cptable;
- if (cptable == NULL)
+ if (!cptable)
cptable = cfs_cpt_table;
if (!conf->psc_thr.tc_cpu_affinity) {
ncpts = 1;
} else {
ncpts = cfs_cpt_number(cptable);
- if (cconf->cc_pattern != NULL) {
+ if (cconf->cc_pattern) {
struct cfs_expr_list *el;
rc = cfs_expr_list_parse(cconf->cc_pattern,
@@ -632,11 +640,11 @@
if (!conf->psc_thr.tc_cpu_affinity)
cpt = CFS_CPT_ANY;
else
- cpt = cpts != NULL ? cpts[i] : i;
+ cpt = cpts ? cpts[i] : i;
svcpt = kzalloc_node(sizeof(*svcpt), GFP_NOFS,
cfs_cpt_spread_node(cptable, cpt));
- if (svcpt == NULL) {
+ if (!svcpt) {
rc = -ENOMEM;
goto failed;
}
@@ -696,7 +704,8 @@
LASSERT(list_empty(&req->rq_timed_list));
/* DEBUG_REQ() assumes the reply state of a request with a valid
- * ref will not be destroyed until that reference is dropped. */
+ * ref will not be destroyed until that reference is dropped.
+ */
ptlrpc_req_drop_rs(req);
sptlrpc_svc_ctx_decref(req);
@@ -704,7 +713,8 @@
if (req != &req->rq_rqbd->rqbd_req) {
/* NB request buffers use an embedded
* req if the incoming req unlinked the
- * MD; this isn't one of them! */
+ * MD; this isn't one of them!
+ */
ptlrpc_request_cache_free(req);
}
}
@@ -728,7 +738,8 @@
if (req->rq_at_linked) {
spin_lock(&svcpt->scp_at_lock);
/* recheck with lock, in case it's unlinked by
- * ptlrpc_at_check_timed() */
+ * ptlrpc_at_check_timed()
+ */
if (likely(req->rq_at_linked))
ptlrpc_at_remove_timed(req);
spin_unlock(&svcpt->scp_at_lock);
@@ -755,7 +766,8 @@
svcpt->scp_hist_nrqbds++;
/* cull some history?
- * I expect only about 1 or 2 rqbds need to be recycled here */
+ * I expect only about 1 or 2 rqbds need to be recycled here
+ */
while (svcpt->scp_hist_nrqbds > svc->srv_hist_nrqbds_cpt_max) {
rqbd = list_entry(svcpt->scp_hist_rqbds.next,
struct ptlrpc_request_buffer_desc,
@@ -765,7 +777,8 @@
svcpt->scp_hist_nrqbds--;
/* remove rqbd's reqs from svc's req history while
- * I've got the service lock */
+ * I've got the service lock
+ */
list_for_each(tmp, &rqbd->rqbd_reqs) {
req = list_entry(tmp, struct ptlrpc_request,
rq_list);
@@ -846,7 +859,7 @@
ptlrpc_nrs_req_finalize(req);
- if (req->rq_export != NULL)
+ if (req->rq_export)
class_export_rpc_dec(req->rq_export);
ptlrpc_server_finish_request(svcpt, req);
@@ -869,13 +882,13 @@
req->rq_export->exp_conn_cnt);
return -EEXIST;
}
- if (unlikely(obd == NULL || obd->obd_fail)) {
+ if (unlikely(!obd || obd->obd_fail)) {
/*
* Failing over, don't handle any more reqs, send
* error response instead.
*/
CDEBUG(D_RPCTRACE, "Dropping req %p for failed obd %s\n",
- req, (obd != NULL) ? obd->obd_name : "unknown");
+ req, obd ? obd->obd_name : "unknown");
rc = -ENODEV;
} else if (lustre_msg_get_flags(req->rq_reqmsg) &
(MSG_REPLAY | MSG_REQ_REPLAY_DONE)) {
@@ -942,7 +955,8 @@
div_u64_rem(req->rq_deadline, array->paa_size, &index);
if (array->paa_reqs_count[index] > 0) {
/* latest rpcs will have the latest deadlines in the list,
- * so search backward. */
+ * so search backward.
+ */
list_for_each_entry_reverse(rq,
&array->paa_reqs_array[index],
rq_timed_list) {
@@ -1003,7 +1017,8 @@
int rc;
/* deadline is when the client expects us to reply, margin is the
- difference between clients' and servers' expectations */
+ * difference between clients' and servers' expectations
+ */
DEBUG_REQ(D_ADAPTTO, req,
"%ssending early reply (deadline %+lds, margin %+lds) for %d+%d",
AT_OFF ? "AT off - not " : "",
@@ -1027,12 +1042,14 @@
}
/* Fake our processing time into the future to ask the clients
- * for some extra amount of time */
+ * for some extra amount of time
+ */
at_measured(&svcpt->scp_at_estimate, at_extra +
ktime_get_real_seconds() - req->rq_arrival_time.tv_sec);
/* Check to see if we've actually increased the deadline -
- * we may be past adaptive_max */
+ * we may be past adaptive_max
+ */
if (req->rq_deadline >= req->rq_arrival_time.tv_sec +
at_get(&svcpt->scp_at_estimate)) {
DEBUG_REQ(D_WARNING, req, "Couldn't add any time (%ld/%lld), not sending early reply\n",
@@ -1044,7 +1061,7 @@
newdl = ktime_get_real_seconds() + at_get(&svcpt->scp_at_estimate);
reqcopy = ptlrpc_request_cache_alloc(GFP_NOFS);
- if (reqcopy == NULL)
+ if (!reqcopy)
return -ENOMEM;
reqmsg = libcfs_kvzalloc(req->rq_reqlen, GFP_NOFS);
if (!reqmsg) {
@@ -1074,7 +1091,7 @@
/* Connection ref */
reqcopy->rq_export = class_conn2export(
lustre_msg_get_handle(reqcopy->rq_reqmsg));
- if (reqcopy->rq_export == NULL) {
+ if (!reqcopy->rq_export) {
rc = -ENODEV;
goto out;
}
@@ -1102,7 +1119,8 @@
}
/* Free the (early) reply state from lustre_pack_reply.
- (ptlrpc_send_reply takes it's own rs ref, so this is safe here) */
+ * (ptlrpc_send_reply takes its own rs ref, so this is safe here)
+ */
ptlrpc_req_drop_rs(reqcopy);
out_put:
@@ -1117,7 +1135,8 @@
}
/* Send early replies to everybody expiring within at_early_margin
- asking for at_extra time */
+ * asking for at_extra time
+ */
static int ptlrpc_at_check_timed(struct ptlrpc_service_part *svcpt)
{
struct ptlrpc_at_array *array = &svcpt->scp_at_array;
@@ -1152,7 +1171,8 @@
}
/* We're close to a timeout, and we don't know how much longer the
- server will take. Send early replies to everyone expiring soon. */
+ * server will take. Send early replies to everyone expiring soon.
+ */
INIT_LIST_HEAD(&work_list);
deadline = -1;
div_u64_rem(array->paa_deadline, array->paa_size, &index);
@@ -1194,7 +1214,8 @@
first, at_extra, counter);
if (first < 0) {
/* We're already past request deadlines before we even get a
- chance to send early replies */
+ * chance to send early replies
+ */
LCONSOLE_WARN("%s: This server is not able to keep up with request traffic (cpu-bound).\n",
svcpt->scp_service->srv_name);
CWARN("earlyQ=%d reqQ=%d recA=%d, svcEst=%d, delay=%ld(jiff)\n",
@@ -1204,7 +1225,8 @@
}
/* we took additional refcount so entries can't be deleted from list, no
- * locking is needed */
+ * locking is needed
+ */
while (!list_empty(&work_list)) {
rq = list_entry(work_list.next, struct ptlrpc_request,
rq_timed_list);
@@ -1237,7 +1259,8 @@
if (req->rq_export && req->rq_ops) {
/* Perform request specific check. We should do this check
* before the request is added into exp_hp_rpcs list otherwise
- * it may hit swab race at LU-1044. */
+ * it may hit swab race at LU-1044.
+ */
if (req->rq_ops->hpreq_check) {
rc = req->rq_ops->hpreq_check(req);
/**
@@ -1272,7 +1295,8 @@
{
if (req->rq_export && req->rq_ops) {
/* refresh lock timeout again so that client has more
- * room to send lock cancel RPC. */
+ * room to send lock cancel RPC.
+ */
if (req->rq_ops->hpreq_fini)
req->rq_ops->hpreq_fini(req);
@@ -1316,7 +1340,7 @@
CFS_FAIL_PRECHECK(OBD_FAIL_PTLRPC_CANCEL_RESEND))) {
/* leave just 1 thread for normal RPCs */
running = PTLRPC_NTHRS_INIT;
- if (svcpt->scp_service->srv_ops.so_hpreq_handler != NULL)
+ if (svcpt->scp_service->srv_ops.so_hpreq_handler)
running += 1;
}
@@ -1355,7 +1379,7 @@
CFS_FAIL_PRECHECK(OBD_FAIL_PTLRPC_CANCEL_RESEND))) {
/* leave just 1 thread for normal RPCs */
running = PTLRPC_NTHRS_INIT;
- if (svcpt->scp_service->srv_ops.so_hpreq_handler != NULL)
+ if (svcpt->scp_service->srv_ops.so_hpreq_handler)
running += 1;
}
@@ -1405,7 +1429,7 @@
if (ptlrpc_server_high_pending(svcpt, force)) {
req = ptlrpc_nrs_req_get_nolock(svcpt, true, force);
- if (req != NULL) {
+ if (req) {
svcpt->scp_hreq_count++;
goto got_request;
}
@@ -1413,7 +1437,7 @@
if (ptlrpc_server_normal_pending(svcpt, force)) {
req = ptlrpc_nrs_req_get_nolock(svcpt, false, force);
- if (req != NULL) {
+ if (req) {
svcpt->scp_hreq_count = 0;
goto got_request;
}
@@ -1461,7 +1485,8 @@
list_del_init(&req->rq_list);
svcpt->scp_nreqs_incoming--;
/* Consider this still a "queued" request as far as stats are
- * concerned */
+ * concerned
+ */
spin_unlock(&svcpt->scp_lock);
/* go through security check/transform */
@@ -1598,7 +1623,7 @@
int fail_opc = 0;
request = ptlrpc_server_request_get(svcpt, false);
- if (request == NULL)
+ if (!request)
return 0;
if (OBD_FAIL_CHECK(OBD_FAIL_PTLRPC_HPREQ_NOTIMEOUT))
@@ -1620,7 +1645,7 @@
timediff = timespec64_sub(work_start, request->rq_arrival_time);
timediff_usecs = timediff.tv_sec * USEC_PER_SEC +
timediff.tv_nsec / NSEC_PER_USEC;
- if (likely(svc->srv_stats != NULL)) {
+ if (likely(svc->srv_stats)) {
lprocfs_counter_add(svc->srv_stats, PTLRPC_REQWAIT_CNTR,
timediff_usecs);
lprocfs_counter_add(svc->srv_stats, PTLRPC_REQQDEPTH_CNTR,
@@ -1652,7 +1677,8 @@
}
/* Discard requests queued for longer than the deadline.
- The deadline is increased if we send an early reply. */
+ * The deadline is increased if we send an early reply.
+ */
if (ktime_get_real_seconds() > request->rq_deadline) {
DEBUG_REQ(D_ERROR, request, "Dropping timed-out request from %s: deadline " CFS_DURATION_T ":" CFS_DURATION_T "s ago\n",
libcfs_id2str(request->rq_peer),
@@ -1718,7 +1744,7 @@
request->rq_status,
(request->rq_repmsg ?
lustre_msg_get_status(request->rq_repmsg) : -999));
- if (likely(svc->srv_stats != NULL && request->rq_reqmsg != NULL)) {
+ if (likely(svc->srv_stats && request->rq_reqmsg)) {
__u32 op = lustre_msg_get_opc(request->rq_reqmsg);
int opc = opcode_offset(op);
@@ -1804,7 +1830,8 @@
if (nlocks == 0 && !been_handled) {
/* If we see this, we should already have seen the warning
- * in mds_steal_ack_locks() */
+ * in mds_steal_ack_locks()
+ */
CDEBUG(D_HA, "All locks stolen from rs %p x%lld.t%lld o%d NID %s\n",
rs,
rs->rs_xid, rs->rs_transno, rs->rs_opc,
@@ -1858,7 +1885,8 @@
/* CAVEAT EMPTOR: We might be allocating buffers here because we've
* allowed the request history to grow out of control. We could put a
* sanity check on that here and cull some history if we need the
- * space. */
+ * space.
+ */
if (avail <= low_water)
ptlrpc_grow_req_bufs(svcpt, 1);
@@ -1992,7 +2020,8 @@
/* NB: we will call cfs_cpt_bind() for all threads, because we
* might want to run lustre server only on a subset of system CPUs,
- * in that case ->scp_cpt is CFS_CPT_ANY */
+ * in that case ->scp_cpt is CFS_CPT_ANY
+ */
rc = cfs_cpt_bind(svc->srv_cptable, svcpt->scp_cpt);
if (rc != 0) {
CWARN("%s: failed to bind %s on CPT %d\n",
@@ -2008,7 +2037,7 @@
set_current_groups(ginfo);
put_group_info(ginfo);
- if (svc->srv_ops.so_thr_init != NULL) {
+ if (svc->srv_ops.so_thr_init) {
rc = svc->srv_ops.so_thr_init(thread);
if (rc)
goto out;
@@ -2057,7 +2086,8 @@
/* SVC_STOPPING may already be set here if someone else is trying
* to stop the service while this new thread has been dynamically
* forked. We still set SVC_RUNNING to let our creator know that
- * we are now running, however we will exit as soon as possible */
+ * we are now running; however, we will exit as soon as possible
+ */
thread_add_flags(thread, SVC_RUNNING);
svcpt->scp_nthrs_running++;
spin_unlock(&svcpt->scp_lock);
@@ -2116,7 +2146,8 @@
ptlrpc_server_post_idle_rqbds(svcpt) < 0) {
/* I just failed to repost request buffers.
* Wait for a timeout (unless something else
- * happens) before I try again */
+ * happens) before I try again
+ */
svcpt->scp_rqbd_timeout = cfs_time_seconds(1) / 10;
CDEBUG(D_RPCTRACE, "Posted buffers: %d\n",
svcpt->scp_nrqbds_posted);
@@ -2132,10 +2163,10 @@
/*
* deconstruct service specific state created by ptlrpc_start_thread()
*/
- if (svc->srv_ops.so_thr_done != NULL)
+ if (svc->srv_ops.so_thr_done)
svc->srv_ops.so_thr_done(thread);
- if (env != NULL) {
+ if (env) {
lu_context_fini(&env->le_ctx);
kfree(env);
}
@@ -2229,14 +2260,14 @@
ptlrpc_hr.hr_stopping = 1;
cfs_percpt_for_each(hrp, i, ptlrpc_hr.hr_partitions) {
- if (hrp->hrp_thrs == NULL)
+ if (!hrp->hrp_thrs)
continue; /* uninitialized */
for (j = 0; j < hrp->hrp_nthrs; j++)
wake_up_all(&hrp->hrp_thrs[j].hrt_waitq);
}
cfs_percpt_for_each(hrp, i, ptlrpc_hr.hr_partitions) {
- if (hrp->hrp_thrs == NULL)
+ if (!hrp->hrp_thrs)
continue; /* uninitialized */
wait_event(ptlrpc_hr.hr_waitq,
atomic_read(&hrp->hrp_nstopped) ==
@@ -2255,24 +2286,27 @@
for (j = 0; j < hrp->hrp_nthrs; j++) {
struct ptlrpc_hr_thread *hrt = &hrp->hrp_thrs[j];
+ struct task_struct *task;
- rc = PTR_ERR(kthread_run(ptlrpc_hr_main,
+ task = kthread_run(ptlrpc_hr_main,
&hrp->hrp_thrs[j],
"ptlrpc_hr%02d_%03d",
hrp->hrp_cpt,
- hrt->hrt_id));
- if (IS_ERR_VALUE(rc))
+ hrt->hrt_id);
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
break;
+ }
}
wait_event(ptlrpc_hr.hr_waitq,
atomic_read(&hrp->hrp_nstarted) == j);
- if (!IS_ERR_VALUE(rc))
- continue;
- CERROR("Reply handling thread %d:%d Failed on starting: rc = %d\n",
- i, j, rc);
- ptlrpc_stop_hr_threads();
- return rc;
+ if (rc < 0) {
+ CERROR("cannot start reply handler thread %d:%d: rc = %d\n",
+ i, j, rc);
+ ptlrpc_stop_hr_threads();
+ return rc;
+ }
}
return 0;
}
@@ -2333,7 +2367,7 @@
int i;
ptlrpc_service_for_each_part(svcpt, i, svc) {
- if (svcpt->scp_service != NULL)
+ if (svcpt->scp_service)
ptlrpc_svcpt_stop_threads(svcpt);
}
}
@@ -2374,10 +2408,9 @@
struct l_wait_info lwi = { 0 };
struct ptlrpc_thread *thread;
struct ptlrpc_service *svc;
+ struct task_struct *task;
int rc;
- LASSERT(svcpt != NULL);
-
svc = svcpt->scp_service;
CDEBUG(D_RPCTRACE, "%s[%d] started %d min %d max %d\n",
@@ -2396,7 +2429,7 @@
thread = kzalloc_node(sizeof(*thread), GFP_NOFS,
cfs_cpt_spread_node(svc->srv_cptable,
svcpt->scp_cpt));
- if (thread == NULL)
+ if (!thread)
return -ENOMEM;
init_waitqueue_head(&thread->t_ctl_waitq);
@@ -2409,7 +2442,8 @@
if (svcpt->scp_nthrs_starting != 0) {
/* serialize starting because some modules (obdfilter)
- * might require unique and contiguous t_id */
+ * might require unique and contiguous t_id
+ */
LASSERT(svcpt->scp_nthrs_starting == 1);
spin_unlock(&svcpt->scp_lock);
kfree(thread);
@@ -2442,9 +2476,10 @@
}
CDEBUG(D_RPCTRACE, "starting thread '%s'\n", thread->t_name);
- rc = PTR_ERR(kthread_run(ptlrpc_main, thread, "%s", thread->t_name));
- if (IS_ERR_VALUE(rc)) {
- CERROR("cannot start thread '%s': rc %d\n",
+ task = kthread_run(ptlrpc_main, thread, "%s", thread->t_name);
+ if (IS_ERR(task)) {
+ rc = PTR_ERR(task);
+ CERROR("cannot start thread '%s': rc = %d\n",
thread->t_name, rc);
spin_lock(&svcpt->scp_lock);
--svcpt->scp_nthrs_starting;
@@ -2488,7 +2523,7 @@
ptlrpc_hr.hr_partitions = cfs_percpt_alloc(ptlrpc_hr.hr_cpt_table,
sizeof(*hrp));
- if (ptlrpc_hr.hr_partitions == NULL)
+ if (!ptlrpc_hr.hr_partitions)
return -ENOMEM;
init_waitqueue_head(&ptlrpc_hr.hr_waitq);
@@ -2509,7 +2544,7 @@
kzalloc_node(hrp->hrp_nthrs * sizeof(*hrt), GFP_NOFS,
cfs_cpt_spread_node(ptlrpc_hr.hr_cpt_table,
i));
- if (hrp->hrp_thrs == NULL) {
+ if (!hrp->hrp_thrs) {
rc = -ENOMEM;
goto out;
}
@@ -2537,7 +2572,7 @@
struct ptlrpc_hr_partition *hrp;
int i;
- if (ptlrpc_hr.hr_partitions == NULL)
+ if (!ptlrpc_hr.hr_partitions)
return;
ptlrpc_stop_hr_threads();
@@ -2577,7 +2612,7 @@
/* early disarm AT timer... */
ptlrpc_service_for_each_part(svcpt, i, svc) {
- if (svcpt->scp_service != NULL)
+ if (svcpt->scp_service)
del_timer(&svcpt->scp_at_timer);
}
}
@@ -2592,18 +2627,20 @@
int i;
/* All history will be culled when the next request buffer is
- * freed in ptlrpc_service_purge_all() */
+ * freed in ptlrpc_service_purge_all()
+ */
svc->srv_hist_nrqbds_cpt_max = 0;
rc = LNetClearLazyPortal(svc->srv_req_portal);
LASSERT(rc == 0);
ptlrpc_service_for_each_part(svcpt, i, svc) {
- if (svcpt->scp_service == NULL)
+ if (!svcpt->scp_service)
break;
/* Unlink all the request buffers. This forces a 'final'
- * event with its 'unlink' flag set for each posted rqbd */
+ * event with its 'unlink' flag set for each posted rqbd
+ */
list_for_each_entry(rqbd, &svcpt->scp_rqbd_posted,
rqbd_list) {
rc = LNetMDUnlink(rqbd->rqbd_md_h);
@@ -2612,17 +2649,19 @@
}
ptlrpc_service_for_each_part(svcpt, i, svc) {
- if (svcpt->scp_service == NULL)
+ if (!svcpt->scp_service)
break;
/* Wait for the network to release any buffers
- * it's currently filling */
+ * it's currently filling
+ */
spin_lock(&svcpt->scp_lock);
while (svcpt->scp_nrqbds_posted != 0) {
spin_unlock(&svcpt->scp_lock);
/* Network access will complete in finite time but
* the HUGE timeout lets us CWARN for visibility
- * of sluggish NALs */
+ * of sluggish LNDs
+ */
lwi = LWI_TIMEOUT_INTERVAL(
cfs_time_seconds(LONG_UNLINK),
cfs_time_seconds(1), NULL, NULL);
@@ -2648,7 +2687,7 @@
int i;
ptlrpc_service_for_each_part(svcpt, i, svc) {
- if (svcpt->scp_service == NULL)
+ if (!svcpt->scp_service)
break;
spin_lock(&svcpt->scp_rep_lock);
@@ -2663,7 +2702,8 @@
/* purge the request queue. NB No new replies (rqbds
* all unlinked) and no service threads, so I'm the only
- * thread noodling the request queue now */
+ * thread noodling the request queue now
+ */
while (!list_empty(&svcpt->scp_req_incoming)) {
req = list_entry(svcpt->scp_req_incoming.next,
struct ptlrpc_request, rq_list);
@@ -2682,11 +2722,13 @@
LASSERT(svcpt->scp_nreqs_incoming == 0);
LASSERT(svcpt->scp_nreqs_active == 0);
/* history should have been culled by
- * ptlrpc_server_finish_request */
+ * ptlrpc_server_finish_request
+ */
LASSERT(svcpt->scp_hist_nrqbds == 0);
/* Now free all the request buffers since nothing
- * references them any more... */
+ * references them any more...
+ */
while (!list_empty(&svcpt->scp_rqbd_idle)) {
rqbd = list_entry(svcpt->scp_rqbd_idle.next,
@@ -2714,7 +2756,7 @@
int i;
ptlrpc_service_for_each_part(svcpt, i, svc) {
- if (svcpt->scp_service == NULL)
+ if (!svcpt->scp_service)
break;
/* In case somebody rearmed this in the meantime */
@@ -2730,7 +2772,7 @@
ptlrpc_service_for_each_part(svcpt, i, svc)
kfree(svcpt);
- if (svc->srv_cpts != NULL)
+ if (svc->srv_cpts)
cfs_expr_list_values_free(svc->srv_cpts, svc->srv_ncpts);
kfree(svc);
diff --git a/drivers/staging/lustre/lustre/ptlrpc/wiretest.c b/drivers/staging/lustre/lustre/ptlrpc/wiretest.c
index 61d9ca9..3ffd2d9 100644
--- a/drivers/staging/lustre/lustre/ptlrpc/wiretest.c
+++ b/drivers/staging/lustre/lustre/ptlrpc/wiretest.c
@@ -333,17 +333,9 @@
CLASSERT(LDLM_MAX_TYPE == 14);
CLASSERT(LUSTRE_RES_ID_SEQ_OFF == 0);
CLASSERT(LUSTRE_RES_ID_VER_OID_OFF == 1);
- LASSERTF(UPDATE_OBJ == 1000, "found %lld\n",
- (long long)UPDATE_OBJ);
- LASSERTF(UPDATE_LAST_OPC == 1001, "found %lld\n",
- (long long)UPDATE_LAST_OPC);
CLASSERT(LUSTRE_RES_ID_QUOTA_SEQ_OFF == 2);
CLASSERT(LUSTRE_RES_ID_QUOTA_VER_OID_OFF == 3);
CLASSERT(LUSTRE_RES_ID_HSH_OFF == 3);
- CLASSERT(LQUOTA_TYPE_USR == 0);
- CLASSERT(LQUOTA_TYPE_GRP == 1);
- CLASSERT(LQUOTA_RES_MD == 1);
- CLASSERT(LQUOTA_RES_DT == 2);
LASSERTF(OBD_PING == 400, "found %lld\n",
(long long)OBD_PING);
LASSERTF(OBD_LOG_CANCEL == 401, "found %lld\n",
@@ -437,30 +429,6 @@
(unsigned)LMAC_NOT_IN_OI);
LASSERTF(LMAC_FID_ON_OST == 0x00000008UL, "found 0x%.8xUL\n",
(unsigned)LMAC_FID_ON_OST);
- LASSERTF(OBJ_CREATE == 1, "found %lld\n",
- (long long)OBJ_CREATE);
- LASSERTF(OBJ_DESTROY == 2, "found %lld\n",
- (long long)OBJ_DESTROY);
- LASSERTF(OBJ_REF_ADD == 3, "found %lld\n",
- (long long)OBJ_REF_ADD);
- LASSERTF(OBJ_REF_DEL == 4, "found %lld\n",
- (long long)OBJ_REF_DEL);
- LASSERTF(OBJ_ATTR_SET == 5, "found %lld\n",
- (long long)OBJ_ATTR_SET);
- LASSERTF(OBJ_ATTR_GET == 6, "found %lld\n",
- (long long)OBJ_ATTR_GET);
- LASSERTF(OBJ_XATTR_SET == 7, "found %lld\n",
- (long long)OBJ_XATTR_SET);
- LASSERTF(OBJ_XATTR_GET == 8, "found %lld\n",
- (long long)OBJ_XATTR_GET);
- LASSERTF(OBJ_INDEX_LOOKUP == 9, "found %lld\n",
- (long long)OBJ_INDEX_LOOKUP);
- LASSERTF(OBJ_INDEX_LOOKUP == 9, "found %lld\n",
- (long long)OBJ_INDEX_LOOKUP);
- LASSERTF(OBJ_INDEX_INSERT == 10, "found %lld\n",
- (long long)OBJ_INDEX_INSERT);
- LASSERTF(OBJ_INDEX_DELETE == 11, "found %lld\n",
- (long long)OBJ_INDEX_DELETE);
/* Checks for struct ost_id */
LASSERTF((int)sizeof(struct ost_id) == 16, "found %lld\n",
@@ -587,9 +555,6 @@
(long long)LDF_COLLIDE);
LASSERTF(LU_PAGE_SIZE == 4096, "found %lld\n",
(long long)LU_PAGE_SIZE);
- /* Checks for union lu_page */
- LASSERTF((int)sizeof(union lu_page) == 4096, "found %lld\n",
- (long long)(int)sizeof(union lu_page));
/* Checks for struct lustre_handle */
LASSERTF((int)sizeof(struct lustre_handle) == 8, "found %lld\n",
@@ -1535,11 +1500,6 @@
LASSERTF((int)sizeof(union lquota_id) == 16, "found %lld\n",
(long long)(int)sizeof(union lquota_id));
- LASSERTF(QUOTABLOCK_BITS == 10, "found %lld\n",
- (long long)QUOTABLOCK_BITS);
- LASSERTF(QUOTABLOCK_SIZE == 1024, "found %lld\n",
- (long long)QUOTABLOCK_SIZE);
-
/* Checks for struct obd_quotactl */
LASSERTF((int)sizeof(struct obd_quotactl) == 112, "found %lld\n",
(long long)(int)sizeof(struct obd_quotactl));
@@ -1642,138 +1602,6 @@
LASSERTF(Q_FINVALIDATE == 0x800104, "found 0x%.8x\n",
Q_FINVALIDATE);
- /* Checks for struct lquota_acct_rec */
- LASSERTF((int)sizeof(struct lquota_acct_rec) == 16, "found %lld\n",
- (long long)(int)sizeof(struct lquota_acct_rec));
- LASSERTF((int)offsetof(struct lquota_acct_rec, bspace) == 0, "found %lld\n",
- (long long)(int)offsetof(struct lquota_acct_rec, bspace));
- LASSERTF((int)sizeof(((struct lquota_acct_rec *)0)->bspace) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct lquota_acct_rec *)0)->bspace));
- LASSERTF((int)offsetof(struct lquota_acct_rec, ispace) == 8, "found %lld\n",
- (long long)(int)offsetof(struct lquota_acct_rec, ispace));
- LASSERTF((int)sizeof(((struct lquota_acct_rec *)0)->ispace) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct lquota_acct_rec *)0)->ispace));
-
- /* Checks for struct lquota_glb_rec */
- LASSERTF((int)sizeof(struct lquota_glb_rec) == 32, "found %lld\n",
- (long long)(int)sizeof(struct lquota_glb_rec));
- LASSERTF((int)offsetof(struct lquota_glb_rec, qbr_hardlimit) == 0, "found %lld\n",
- (long long)(int)offsetof(struct lquota_glb_rec, qbr_hardlimit));
- LASSERTF((int)sizeof(((struct lquota_glb_rec *)0)->qbr_hardlimit) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct lquota_glb_rec *)0)->qbr_hardlimit));
- LASSERTF((int)offsetof(struct lquota_glb_rec, qbr_softlimit) == 8, "found %lld\n",
- (long long)(int)offsetof(struct lquota_glb_rec, qbr_softlimit));
- LASSERTF((int)sizeof(((struct lquota_glb_rec *)0)->qbr_softlimit) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct lquota_glb_rec *)0)->qbr_softlimit));
- LASSERTF((int)offsetof(struct lquota_glb_rec, qbr_time) == 16, "found %lld\n",
- (long long)(int)offsetof(struct lquota_glb_rec, qbr_time));
- LASSERTF((int)sizeof(((struct lquota_glb_rec *)0)->qbr_time) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct lquota_glb_rec *)0)->qbr_time));
- LASSERTF((int)offsetof(struct lquota_glb_rec, qbr_granted) == 24, "found %lld\n",
- (long long)(int)offsetof(struct lquota_glb_rec, qbr_granted));
- LASSERTF((int)sizeof(((struct lquota_glb_rec *)0)->qbr_granted) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct lquota_glb_rec *)0)->qbr_granted));
-
- /* Checks for struct lquota_slv_rec */
- LASSERTF((int)sizeof(struct lquota_slv_rec) == 8, "found %lld\n",
- (long long)(int)sizeof(struct lquota_slv_rec));
- LASSERTF((int)offsetof(struct lquota_slv_rec, qsr_granted) == 0, "found %lld\n",
- (long long)(int)offsetof(struct lquota_slv_rec, qsr_granted));
- LASSERTF((int)sizeof(((struct lquota_slv_rec *)0)->qsr_granted) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct lquota_slv_rec *)0)->qsr_granted));
-
- /* Checks for struct idx_info */
- LASSERTF((int)sizeof(struct idx_info) == 80, "found %lld\n",
- (long long)(int)sizeof(struct idx_info));
- LASSERTF((int)offsetof(struct idx_info, ii_magic) == 0, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_magic));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_magic) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_magic));
- LASSERTF((int)offsetof(struct idx_info, ii_flags) == 4, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_flags));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_flags) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_flags));
- LASSERTF((int)offsetof(struct idx_info, ii_count) == 8, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_count));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_count) == 2, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_count));
- LASSERTF((int)offsetof(struct idx_info, ii_pad0) == 10, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_pad0));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_pad0) == 2, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_pad0));
- LASSERTF((int)offsetof(struct idx_info, ii_attrs) == 12, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_attrs));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_attrs) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_attrs));
- LASSERTF((int)offsetof(struct idx_info, ii_fid) == 16, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_fid));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_fid) == 16, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_fid));
- LASSERTF((int)offsetof(struct idx_info, ii_version) == 32, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_version));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_version) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_version));
- LASSERTF((int)offsetof(struct idx_info, ii_hash_start) == 40, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_hash_start));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_hash_start) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_hash_start));
- LASSERTF((int)offsetof(struct idx_info, ii_hash_end) == 48, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_hash_end));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_hash_end) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_hash_end));
- LASSERTF((int)offsetof(struct idx_info, ii_keysize) == 56, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_keysize));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_keysize) == 2, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_keysize));
- LASSERTF((int)offsetof(struct idx_info, ii_recsize) == 58, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_recsize));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_recsize) == 2, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_recsize));
- LASSERTF((int)offsetof(struct idx_info, ii_pad1) == 60, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_pad1));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_pad1) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_pad1));
- LASSERTF((int)offsetof(struct idx_info, ii_pad2) == 64, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_pad2));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_pad2) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_pad2));
- LASSERTF((int)offsetof(struct idx_info, ii_pad3) == 72, "found %lld\n",
- (long long)(int)offsetof(struct idx_info, ii_pad3));
- LASSERTF((int)sizeof(((struct idx_info *)0)->ii_pad3) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct idx_info *)0)->ii_pad3));
- CLASSERT(IDX_INFO_MAGIC == 0x3D37CC37);
-
- /* Checks for struct lu_idxpage */
- LASSERTF((int)sizeof(struct lu_idxpage) == 16, "found %lld\n",
- (long long)(int)sizeof(struct lu_idxpage));
- LASSERTF((int)offsetof(struct lu_idxpage, lip_magic) == 0, "found %lld\n",
- (long long)(int)offsetof(struct lu_idxpage, lip_magic));
- LASSERTF((int)sizeof(((struct lu_idxpage *)0)->lip_magic) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct lu_idxpage *)0)->lip_magic));
- LASSERTF((int)offsetof(struct lu_idxpage, lip_flags) == 4, "found %lld\n",
- (long long)(int)offsetof(struct lu_idxpage, lip_flags));
- LASSERTF((int)sizeof(((struct lu_idxpage *)0)->lip_flags) == 2, "found %lld\n",
- (long long)(int)sizeof(((struct lu_idxpage *)0)->lip_flags));
- LASSERTF((int)offsetof(struct lu_idxpage, lip_nr) == 6, "found %lld\n",
- (long long)(int)offsetof(struct lu_idxpage, lip_nr));
- LASSERTF((int)sizeof(((struct lu_idxpage *)0)->lip_nr) == 2, "found %lld\n",
- (long long)(int)sizeof(((struct lu_idxpage *)0)->lip_nr));
- LASSERTF((int)offsetof(struct lu_idxpage, lip_pad0) == 8, "found %lld\n",
- (long long)(int)offsetof(struct lu_idxpage, lip_pad0));
- LASSERTF((int)sizeof(((struct lu_idxpage *)0)->lip_pad0) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct lu_idxpage *)0)->lip_pad0));
- CLASSERT(LIP_MAGIC == 0x8A6D6B6C);
- LASSERTF(LIP_HDR_SIZE == 16, "found %lld\n",
- (long long)LIP_HDR_SIZE);
- LASSERTF(II_FL_NOHASH == 1, "found %lld\n",
- (long long)II_FL_NOHASH);
- LASSERTF(II_FL_VARKEY == 2, "found %lld\n",
- (long long)II_FL_VARKEY);
- LASSERTF(II_FL_VARREC == 4, "found %lld\n",
- (long long)II_FL_VARREC);
- LASSERTF(II_FL_NONUNQ == 8, "found %lld\n",
- (long long)II_FL_NONUNQ);
-
/* Checks for struct niobuf_remote */
LASSERTF((int)sizeof(struct niobuf_remote) == 16, "found %lld\n",
(long long)(int)sizeof(struct niobuf_remote));
@@ -3753,50 +3581,6 @@
LASSERTF((int)sizeof(((struct ll_fiemap_info_key *)0)->fiemap) == 32, "found %lld\n",
(long long)(int)sizeof(((struct ll_fiemap_info_key *)0)->fiemap));
- /* Checks for struct quota_body */
- LASSERTF((int)sizeof(struct quota_body) == 112, "found %lld\n",
- (long long)(int)sizeof(struct quota_body));
- LASSERTF((int)offsetof(struct quota_body, qb_fid) == 0, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_fid));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_fid) == 16, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_fid));
- LASSERTF((int)offsetof(struct quota_body, qb_id) == 16, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_id));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_id) == 16, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_id));
- LASSERTF((int)offsetof(struct quota_body, qb_flags) == 32, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_flags));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_flags) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_flags));
- LASSERTF((int)offsetof(struct quota_body, qb_padding) == 36, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_padding));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_padding) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_padding));
- LASSERTF((int)offsetof(struct quota_body, qb_count) == 40, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_count));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_count) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_count));
- LASSERTF((int)offsetof(struct quota_body, qb_usage) == 48, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_usage));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_usage) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_usage));
- LASSERTF((int)offsetof(struct quota_body, qb_slv_ver) == 56, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_slv_ver));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_slv_ver) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_slv_ver));
- LASSERTF((int)offsetof(struct quota_body, qb_lockh) == 64, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_lockh));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_lockh) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_lockh));
- LASSERTF((int)offsetof(struct quota_body, qb_glb_lockh) == 72, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_glb_lockh));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_glb_lockh) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_glb_lockh));
- LASSERTF((int)offsetof(struct quota_body, qb_padding1[4]) == 112, "found %lld\n",
- (long long)(int)offsetof(struct quota_body, qb_padding1[4]));
- LASSERTF((int)sizeof(((struct quota_body *)0)->qb_padding1[4]) == 8, "found %lld\n",
- (long long)(int)sizeof(((struct quota_body *)0)->qb_padding1[4]));
-
/* Checks for struct mgs_target_info */
LASSERTF((int)sizeof(struct mgs_target_info) == 4544, "found %lld\n",
(long long)(int)sizeof(struct mgs_target_info));
@@ -4431,60 +4215,4 @@
LASSERTF(sizeof(((struct hsm_user_import *)0)->hui_archive_id) == 4,
"found %lld\n",
(long long)sizeof(((struct hsm_user_import *)0)->hui_archive_id));
-
- /* Checks for struct update_buf */
- LASSERTF((int)sizeof(struct update_buf) == 8, "found %lld\n",
- (long long)(int)sizeof(struct update_buf));
- LASSERTF((int)offsetof(struct update_buf, ub_magic) == 0, "found %lld\n",
- (long long)(int)offsetof(struct update_buf, ub_magic));
- LASSERTF((int)sizeof(((struct update_buf *)0)->ub_magic) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct update_buf *)0)->ub_magic));
- LASSERTF((int)offsetof(struct update_buf, ub_count) == 4, "found %lld\n",
- (long long)(int)offsetof(struct update_buf, ub_count));
- LASSERTF((int)sizeof(((struct update_buf *)0)->ub_count) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct update_buf *)0)->ub_count));
- LASSERTF((int)offsetof(struct update_buf, ub_bufs) == 8, "found %lld\n",
- (long long)(int)offsetof(struct update_buf, ub_bufs));
- LASSERTF((int)sizeof(((struct update_buf *)0)->ub_bufs) == 0, "found %lld\n",
- (long long)(int)sizeof(((struct update_buf *)0)->ub_bufs));
-
- /* Checks for struct update_reply */
- LASSERTF((int)sizeof(struct update_reply) == 8, "found %lld\n",
- (long long)(int)sizeof(struct update_reply));
- LASSERTF((int)offsetof(struct update_reply, ur_version) == 0, "found %lld\n",
- (long long)(int)offsetof(struct update_reply, ur_version));
- LASSERTF((int)sizeof(((struct update_reply *)0)->ur_version) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct update_reply *)0)->ur_version));
- LASSERTF((int)offsetof(struct update_reply, ur_count) == 4, "found %lld\n",
- (long long)(int)offsetof(struct update_reply, ur_count));
- LASSERTF((int)sizeof(((struct update_reply *)0)->ur_count) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct update_reply *)0)->ur_count));
- LASSERTF((int)offsetof(struct update_reply, ur_lens) == 8, "found %lld\n",
- (long long)(int)offsetof(struct update_reply, ur_lens));
- LASSERTF((int)sizeof(((struct update_reply *)0)->ur_lens) == 0, "found %lld\n",
- (long long)(int)sizeof(((struct update_reply *)0)->ur_lens));
-
- /* Checks for struct update */
- LASSERTF((int)sizeof(struct update) == 56, "found %lld\n",
- (long long)(int)sizeof(struct update));
- LASSERTF((int)offsetof(struct update, u_type) == 0, "found %lld\n",
- (long long)(int)offsetof(struct update, u_type));
- LASSERTF((int)sizeof(((struct update *)0)->u_type) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct update *)0)->u_type));
- LASSERTF((int)offsetof(struct update, u_batchid) == 4, "found %lld\n",
- (long long)(int)offsetof(struct update, u_batchid));
- LASSERTF((int)sizeof(((struct update *)0)->u_batchid) == 4, "found %lld\n",
- (long long)(int)sizeof(((struct update *)0)->u_batchid));
- LASSERTF((int)offsetof(struct update, u_fid) == 8, "found %lld\n",
- (long long)(int)offsetof(struct update, u_fid));
- LASSERTF((int)sizeof(((struct update *)0)->u_fid) == 16, "found %lld\n",
- (long long)(int)sizeof(((struct update *)0)->u_fid));
- LASSERTF((int)offsetof(struct update, u_lens) == 24, "found %lld\n",
- (long long)(int)offsetof(struct update, u_lens));
- LASSERTF((int)sizeof(((struct update *)0)->u_lens) == 32, "found %lld\n",
- (long long)(int)sizeof(((struct update *)0)->u_lens));
- LASSERTF((int)offsetof(struct update, u_bufs) == 56, "found %lld\n",
- (long long)(int)offsetof(struct update, u_bufs));
- LASSERTF((int)sizeof(((struct update *)0)->u_bufs) == 0, "found %lld\n",
- (long long)(int)sizeof(((struct update *)0)->u_bufs));
}
diff --git a/drivers/staging/media/davinci_vpfe/davinci_vpfe_user.h b/drivers/staging/media/davinci_vpfe/davinci_vpfe_user.h
index 7b7e7b2..f4f35c9 100644
--- a/drivers/staging/media/davinci_vpfe/davinci_vpfe_user.h
+++ b/drivers/staging/media/davinci_vpfe/davinci_vpfe_user.h
@@ -538,8 +538,8 @@
};
/**********************************************************************
- IPIPE API Structures
-**********************************************************************/
+ * IPIPE API Structures
+ **********************************************************************/
/* IPIPE module configurations */
diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipe.c b/drivers/staging/media/davinci_vpfe/dm365_ipipe.c
index ac78ed2..ff47a8f3 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_ipipe.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_ipipe.c
@@ -1350,21 +1350,16 @@
*/
static long ipipe_ioctl(struct v4l2_subdev *sd, unsigned int cmd, void *arg)
{
- int ret = 0;
-
switch (cmd) {
case VIDIOC_VPFE_IPIPE_S_CONFIG:
- ret = ipipe_s_config(sd, arg);
- break;
+ return ipipe_s_config(sd, arg);
case VIDIOC_VPFE_IPIPE_G_CONFIG:
- ret = ipipe_g_config(sd, arg);
- break;
+ return ipipe_g_config(sd, arg);
default:
- ret = -ENOIOCTLCMD;
+ return -ENOIOCTLCMD;
}
- return ret;
}
void vpfe_ipipe_enable(struct vpfe_device *vpfe_dev, int en)
diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.c b/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.c
index b1d5e23..0792fbd 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.c
@@ -682,8 +682,10 @@
ipipe_clock_enable(base_addr);
if (id == IPIPE_RGB2RGB_2) {
- /* For second RGB module, gain integer is 3 bits instead
- of 4, offset has 11 bits insread of 13 */
+ /*
+ * For the second RGB module, the gain integer is 3 bits instead
+ * of 4 and the offset has 11 bits instead of 13
+ */
offset = RGB2_MUL_BASE;
integ_mask = 0x7;
offset_mask = RGB2RGB_2_OFST_MASK;
@@ -792,8 +794,10 @@
/* valid table */
tbl = lut_3d->table;
for (i = 0; i < VPFE_IPIPE_MAX_SIZE_3D_LUT; i++) {
- /* Each entry has 0-9 (B), 10-19 (G) and
- 20-29 R values */
+ /*
+ * Each entry has bits 0-9 (B), 10-19 (G) and
+ * 20-29 (R)
+ */
val = tbl[i].b & D3_LUT_ENTRY_MASK;
val |= (tbl[i].g & D3_LUT_ENTRY_MASK) <<
D3_LUT_ENTRY_G_SHIFT;
diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.h b/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.h
index 2bf2f7a..7ee1572 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.h
+++ b/drivers/staging/media/davinci_vpfe/dm365_ipipe_hw.h
@@ -278,9 +278,10 @@
/* Resizer Rescale Parameters */
#define RSZ_EN_A 0x58
#define RSZ_EN_B 0xe8
-/* offset of the registers to be added with base register of
- either RSZ0 or RSZ1
-*/
+/*
+ * offsets of the registers to be added to the base register of
+ * either RSZ0 or RSZ1
+ */
#define RSZ_MODE 0x4
#define RSZ_420 0x8
#define RSZ_I_VPS 0xc
diff --git a/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c b/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c
index 633d645..6e4c87f 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_ipipeif.c
@@ -641,8 +641,9 @@
}
static int
-ipipeif_enum_frame_size(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_frame_size_enum *fse)
+ipipeif_enum_frame_size(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse)
{
struct vpfe_ipipeif_device *ipipeif = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt format;
diff --git a/drivers/staging/media/davinci_vpfe/dm365_isif.c b/drivers/staging/media/davinci_vpfe/dm365_isif.c
index cfad426..ae9202d 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_isif.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_isif.c
@@ -282,7 +282,8 @@
* @fmt: pointer to v4l2 subdev format structure
*/
static void
-isif_try_format(struct vpfe_isif_device *isif, struct v4l2_subdev_pad_config *cfg,
+isif_try_format(struct vpfe_isif_device *isif,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
unsigned int width = fmt->format.width;
@@ -625,21 +626,16 @@
*/
static long isif_ioctl(struct v4l2_subdev *sd, unsigned int cmd, void *arg)
{
- int ret;
-
switch (cmd) {
case VIDIOC_VPFE_ISIF_S_RAW_PARAMS:
- ret = isif_set_params(sd, arg);
- break;
+ return isif_set_params(sd, arg);
case VIDIOC_VPFE_ISIF_G_RAW_PARAMS:
- ret = isif_get_params(sd, arg);
- break;
+ return isif_get_params(sd, arg);
default:
- ret = -ENOIOCTLCMD;
+ return -ENOIOCTLCMD;
}
- return ret;
}
static void isif_config_gain_offset(struct vpfe_isif_device *isif)
@@ -1399,8 +1395,9 @@
* @which: wanted subdev format.
*/
static struct v4l2_mbus_framefmt *
-__isif_get_format(struct vpfe_isif_device *isif, struct v4l2_subdev_pad_config *cfg,
- unsigned int pad, enum v4l2_subdev_format_whence which)
+__isif_get_format(struct vpfe_isif_device *isif,
+ struct v4l2_subdev_pad_config *cfg, unsigned int pad,
+ enum v4l2_subdev_format_whence which)
{
if (which == V4L2_SUBDEV_FORMAT_TRY) {
struct v4l2_subdev_format fmt;
diff --git a/drivers/staging/media/davinci_vpfe/dm365_resizer.c b/drivers/staging/media/davinci_vpfe/dm365_resizer.c
index 42de95e..3cd56cc 100644
--- a/drivers/staging/media/davinci_vpfe/dm365_resizer.c
+++ b/drivers/staging/media/davinci_vpfe/dm365_resizer.c
@@ -1387,8 +1387,9 @@
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int resizer_set_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
+static int resizer_set_format(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
{
struct vpfe_resizer_device *resizer = v4l2_get_subdevdata(sd);
struct v4l2_mbus_framefmt *format;
@@ -1447,8 +1448,9 @@
* @fmt: pointer to v4l2 subdev format structure
* return -EINVAL or zero on success
*/
-static int resizer_get_format(struct v4l2_subdev *sd, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
+static int resizer_get_format(struct v4l2_subdev *sd,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
{
struct v4l2_mbus_framefmt *format;
@@ -1670,7 +1672,7 @@
resizer->crop_resizer.input =
RESIZER_CROP_INPUT_IPIPEIF;
else if (ipipe_source == IPIPE_OUTPUT_RESIZER)
- resizer->crop_resizer.input =
+ resizer->crop_resizer.input =
RESIZER_CROP_INPUT_IPIPE;
else
return -EINVAL;
diff --git a/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c b/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c
index ec46f36..bf077f8 100644
--- a/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c
+++ b/drivers/staging/media/davinci_vpfe/vpfe_mc_capture.c
@@ -442,8 +442,10 @@
/* create links now, starting with external(i2c) entities */
for (i = 0; i < vpfe_dev->num_ext_subdevs; i++)
- /* if entity has no pads (ex: amplifier),
- cant establish link */
+ /*
+ * if entity has no pads (e.g. amplifier),
+ * can't establish link
+ */
if (vpfe_dev->sd[i]->entity.num_pads) {
ret = media_create_pad_link(&vpfe_dev->sd[i]->entity,
0, &vpfe_dev->vpfe_isif.subdev.entity,
diff --git a/drivers/staging/media/davinci_vpfe/vpfe_video.c b/drivers/staging/media/davinci_vpfe/vpfe_video.c
index 3ec7e65..0a65405 100644
--- a/drivers/staging/media/davinci_vpfe/vpfe_video.c
+++ b/drivers/staging/media/davinci_vpfe/vpfe_video.c
@@ -172,21 +172,19 @@
static int vpfe_update_pipe_state(struct vpfe_video_device *video)
{
struct vpfe_pipeline *pipe = &video->pipe;
- int ret;
- ret = vpfe_prepare_pipeline(video);
- if (ret)
- return ret;
+ int ret = vpfe_prepare_pipeline(video);
+
+ if (ret)
+ return ret;
- /* Find out if there is any input video
- if yes, it is single shot.
- */
+ /*
+ * Find out if there is any input video
+ * if yes, it is single shot.
+ */
if (pipe->input_num == 0) {
pipe->state = VPFE_PIPELINE_STREAM_CONTINUOUS;
ret = vpfe_update_current_ext_subdev(video);
if (ret) {
pr_err("Invalid external subdev\n");
return ret;
}
} else {
pipe->state = VPFE_PIPELINE_STREAM_SINGLESHOT;
@@ -460,7 +458,7 @@
video->next_frm = list_entry(video->dma_queue.next,
struct vpfe_cap_buffer, list);
- if (VPFE_PIPELINE_STREAM_SINGLESHOT == video->pipe.state)
+ if (video->pipe.state == VPFE_PIPELINE_STREAM_SINGLESHOT)
video->cur_frm = video->next_frm;
list_del(&video->next_frm->list);
@@ -529,10 +527,11 @@
if (fh->io_allowed) {
if (video->started) {
vpfe_stop_capture(video);
- /* mark pipe state as stopped in vpfe_release(),
- as app might call streamon() after streamoff()
- in which case driver has to start streaming.
- */
+ /*
+ * mark pipe state as stopped in vpfe_release(),
+ * as app might call streamon() after streamoff()
+ * in which case driver has to start streaming.
+ */
video->pipe.state = VPFE_PIPELINE_STREAM_STOPPED;
vb2_streamoff(&video->buffer_queue,
video->buffer_queue.type);
@@ -668,12 +667,13 @@
struct v4l2_subdev *subdev;
struct v4l2_format format;
struct media_pad *remote;
int ret;
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_enum_fmt\n");
- /* since already subdev pad format is set,
- only one pixel format is available */
+ /*
+ * since the subdev pad format is already set,
+ * only one pixel format is available
+ */
if (fmt->index > 0) {
v4l2_err(&vpfe_dev->v4l2_dev, "Invalid index\n");
return -EINVAL;
@@ -695,11 +695,10 @@
sd_fmt.pad = remote->index;
sd_fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
/* get output format of remote subdev */
ret = v4l2_subdev_call(subdev, pad, get_fmt, NULL, &sd_fmt);
if (ret) {
v4l2_err(&vpfe_dev->v4l2_dev,
"invalid remote subdev for video node\n");
return ret;
}
/* convert to pix format */
mbus.code = sd_fmt.format.code;
@@ -726,7 +725,6 @@
struct vpfe_video_device *video = video_drvdata(file);
struct vpfe_device *vpfe_dev = video->vpfe_dev;
struct v4l2_format format;
int ret;
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_fmt\n");
/* If streaming is started, return error */
@@ -735,9 +733,8 @@
return -EBUSY;
}
/* get adjacent subdev's output pad format */
ret = __vpfe_video_get_format(video, &format);
if (ret)
return ret;
*fmt = format;
video->fmt = *fmt;
return 0;
@@ -760,13 +757,11 @@
struct vpfe_video_device *video = video_drvdata(file);
struct vpfe_device *vpfe_dev = video->vpfe_dev;
struct v4l2_format format;
int ret;
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_try_fmt\n");
/* get adjacent subdev's output pad format */
ret = __vpfe_video_get_format(video, &format);
if (ret)
return ret;
*fmt = format;
return 0;
@@ -843,9 +838,8 @@
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_input\n");
- ret = mutex_lock_interruptible(&video->lock);
- if (ret)
- return ret;
+ /* -ERESTARTSYS lets the core restart the call once the signal is handled */
+ if (mutex_lock_interruptible(&video->lock))
+ return -ERESTARTSYS;
/*
* If streaming is started return device busy
* error
@@ -946,9 +940,8 @@
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_s_std\n");
/* Call decoder driver function to set the standard */
- ret = mutex_lock_interruptible(&video->lock);
- if (ret)
- return ret;
+ if (mutex_lock_interruptible(&video->lock))
+ return -ERESTARTSYS;
sdinfo = video->current_ext_subdev;
/* If streaming is started, return device busy error */
if (video->started) {
@@ -1328,15 +1321,14 @@
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_reqbufs\n");
- if (V4L2_BUF_TYPE_VIDEO_CAPTURE != req_buf->type &&
- V4L2_BUF_TYPE_VIDEO_OUTPUT != req_buf->type) {
+ if (req_buf->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ req_buf->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
v4l2_err(&vpfe_dev->v4l2_dev, "Invalid buffer type\n");
return -EINVAL;
}
- ret = mutex_lock_interruptible(&video->lock);
- if (ret)
- return ret;
+ if (mutex_lock_interruptible(&video->lock))
+ return -ERESTARTSYS;
if (video->io_usrs != 0) {
v4l2_err(&vpfe_dev->v4l2_dev, "Only one IO user allowed\n");
@@ -1362,11 +1354,10 @@
q->buf_struct_size = sizeof(struct vpfe_cap_buffer);
q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
ret = vb2_queue_init(q);
if (ret) {
v4l2_err(&vpfe_dev->v4l2_dev, "vb2_queue_init() failed\n");
vb2_dma_contig_cleanup_ctx(vpfe_dev->pdev);
return ret;
}
fh->io_allowed = 1;
@@ -1390,8 +1381,8 @@
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_querybuf\n");
- if (V4L2_BUF_TYPE_VIDEO_CAPTURE != buf->type &&
- V4L2_BUF_TYPE_VIDEO_OUTPUT != buf->type) {
+ if (buf->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ buf->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
v4l2_err(&vpfe_dev->v4l2_dev, "Invalid buf type\n");
return -EINVAL;
}
@@ -1417,8 +1408,8 @@
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_qbuf\n");
- if (V4L2_BUF_TYPE_VIDEO_CAPTURE != p->type &&
- V4L2_BUF_TYPE_VIDEO_OUTPUT != p->type) {
+ if (p->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ p->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
v4l2_err(&vpfe_dev->v4l2_dev, "Invalid buf type\n");
return -EINVAL;
}
@@ -1445,8 +1436,8 @@
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_dqbuf\n");
- if (V4L2_BUF_TYPE_VIDEO_CAPTURE != buf->type &&
- V4L2_BUF_TYPE_VIDEO_OUTPUT != buf->type) {
+ if (buf->type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ buf->type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
v4l2_err(&vpfe_dev->v4l2_dev, "Invalid buf type\n");
return -EINVAL;
}
@@ -1478,8 +1469,8 @@
v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, "vpfe_streamon\n");
- if (V4L2_BUF_TYPE_VIDEO_CAPTURE != buf_type &&
- V4L2_BUF_TYPE_VIDEO_OUTPUT != buf_type) {
+ if (buf_type != V4L2_BUF_TYPE_VIDEO_CAPTURE &&
+ buf_type != V4L2_BUF_TYPE_VIDEO_OUTPUT) {
v4l2_err(&vpfe_dev->v4l2_dev, "Invalid buf type\n");
return ret;
}
@@ -1495,7 +1486,7 @@
return -EIO;
}
/* Validate the pipeline */
- if (V4L2_BUF_TYPE_VIDEO_CAPTURE == buf_type) {
+ if (buf_type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
ret = vpfe_video_validate_pipeline(pipe);
if (ret < 0)
return ret;
@@ -1542,9 +1533,8 @@
return -EINVAL;
}
- ret = mutex_lock_interruptible(&video->lock);
- if (ret)
- return ret;
+ if (mutex_lock_interruptible(&video->lock))
+ return -ERESTARTSYS;
vpfe_stop_capture(video);
ret = vb2_streamoff(&video->buffer_queue, buf_type);
diff --git a/drivers/staging/most/hdm-dim2/dim2_hdm.c b/drivers/staging/most/hdm-dim2/dim2_hdm.c
index 6296aa5..c736310 100644
--- a/drivers/staging/most/hdm-dim2/dim2_hdm.c
+++ b/drivers/staging/most/hdm-dim2/dim2_hdm.c
@@ -736,7 +736,7 @@
int ret, i;
struct kobject *kobj;
- dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL);
if (!dev)
return -ENOMEM;
@@ -747,47 +747,31 @@
test_dev = dev;
#else
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!res) {
- pr_err("no memory region defined\n");
- ret = -ENOENT;
- goto err_free_dev;
- }
-
- if (!request_mem_region(res->start, resource_size(res), pdev->name)) {
- pr_err("failed to request mem region\n");
- ret = -EBUSY;
- goto err_free_dev;
- }
-
- dev->io_base = ioremap(res->start, resource_size(res));
- if (!dev->io_base) {
- pr_err("failed to ioremap\n");
- ret = -ENOMEM;
- goto err_release_mem;
- }
+ dev->io_base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(dev->io_base))
+ return PTR_ERR(dev->io_base);
ret = platform_get_irq(pdev, 0);
if (ret < 0) {
- pr_err("failed to get irq\n");
- goto err_unmap_io;
+ dev_err(&pdev->dev, "failed to get irq\n");
+ return ret;
}
dev->irq_ahb0 = ret;
- ret = request_irq(dev->irq_ahb0, dim2_ahb_isr, 0, "mlb_ahb0", dev);
+ ret = devm_request_irq(&pdev->dev, dev->irq_ahb0, dim2_ahb_isr, 0,
+ "mlb_ahb0", dev);
if (ret) {
- pr_err("failed to request IRQ: %d, err: %d\n",
- dev->irq_ahb0, ret);
- goto err_unmap_io;
+ dev_err(&pdev->dev, "failed to request IRQ: %d, err: %d\n",
+ dev->irq_ahb0, ret);
+ return ret;
}
#endif
init_waitqueue_head(&dev->netinfo_waitq);
dev->deliver_netinfo = 0;
dev->netinfo_task = kthread_run(&deliver_netinfo_thread, (void *)dev,
"dim2_netinfo");
- if (IS_ERR(dev->netinfo_task)) {
- ret = PTR_ERR(dev->netinfo_task);
- goto err_free_irq;
- }
+ /* all probe resources are devm-managed now, so return directly */
+ if (IS_ERR(dev->netinfo_task))
+ return PTR_ERR(dev->netinfo_task);
for (i = 0; i < DMA_CHANNELS; i++) {
struct most_channel_capability *cap = dev->capabilities + i;
@@ -855,16 +839,6 @@
most_deregister_interface(&dev->most_iface);
err_stop_thread:
kthread_stop(dev->netinfo_task);
-err_free_irq:
-#if !defined(ENABLE_HDM_TEST)
- free_irq(dev->irq_ahb0, dev);
-err_unmap_io:
- iounmap(dev->io_base);
-err_release_mem:
- release_mem_region(res->start, resource_size(res));
-err_free_dev:
-#endif
- kfree(dev);
return ret;
}
@@ -878,7 +852,6 @@
static int dim2_remove(struct platform_device *pdev)
{
struct dim2_hdm *dev = platform_get_drvdata(pdev);
- struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
struct dim2_platform_data *pdata = pdev->dev.platform_data;
unsigned long flags;
@@ -892,13 +865,6 @@
dim2_sysfs_destroy(&dev->bus);
most_deregister_interface(&dev->most_iface);
kthread_stop(dev->netinfo_task);
-#if !defined(ENABLE_HDM_TEST)
- free_irq(dev->irq_ahb0, dev);
- iounmap(dev->io_base);
- release_mem_region(res->start, resource_size(res));
-#endif
- kfree(dev);
- platform_set_drvdata(pdev, NULL);
/*
* break link to local platform_device_id struct
diff --git a/drivers/staging/most/hdm-dim2/dim2_sysfs.c b/drivers/staging/most/hdm-dim2/dim2_sysfs.c
index c5b10c7..2b28e4a 100644
--- a/drivers/staging/most/hdm-dim2/dim2_sysfs.c
+++ b/drivers/staging/most/hdm-dim2/dim2_sysfs.c
@@ -63,7 +63,6 @@
static ssize_t bus_kobj_attr_store(struct kobject *kobj, struct attribute *attr,
const char *buf, size_t count)
{
- ssize_t ret;
struct medialb_bus *bus =
container_of(kobj, struct medialb_bus, kobj_group);
struct bus_attr *xattr = container_of(attr, struct bus_attr, attr);
@@ -71,8 +70,7 @@
if (!xattr->store)
return -EIO;
- ret = xattr->store(bus, buf, count);
- return ret;
+ return xattr->store(bus, buf, count);
}
static struct sysfs_ops const bus_kobj_sysfs_ops = {
diff --git a/drivers/staging/most/hdm-usb/hdm_usb.c b/drivers/staging/most/hdm-usb/hdm_usb.c
index 41690f8..9d5555d 100644
--- a/drivers/staging/most/hdm-usb/hdm_usb.c
+++ b/drivers/staging/most/hdm-usb/hdm_usb.c
@@ -40,7 +40,6 @@
#define MAX_SUFFIX_LEN 10
#define MAX_STRING_LEN 80
#define MAX_BUF_SIZE 0xFFFF
-#define CEILING(x, y) (((x) + (y) - 1) / (y))
#define USB_VENDOR_ID_SMSC 0x0424 /* VID: SMSC */
#define USB_DEV_ID_BRDG 0xC001 /* PID: USB Bridge */
@@ -137,7 +136,6 @@
#define to_mdev(d) container_of(d, struct most_dev, iface)
#define to_mdev_from_work(w) container_of(w, struct most_dev, poll_work_obj)
-static struct workqueue_struct *schedule_usb_work;
static void wq_clear_halt(struct work_struct *wq_obj);
static void wq_netinfo(struct work_struct *wq_obj);
@@ -226,6 +224,8 @@
kfree(anchor);
}
spin_unlock_irqrestore(&mdev->anchor_list_lock[channel], flags);
+
+ cancel_work_sync(&anchor->clear_work_obj);
}
/**
@@ -411,7 +411,7 @@
mbo->status = MBO_E_INVAL;
usb_unlink_urb(urb);
INIT_WORK(&anchor->clear_work_obj, wq_clear_halt);
- queue_work(schedule_usb_work, &anchor->clear_work_obj);
+ schedule_work(&anchor->clear_work_obj);
return;
case -ENODEV:
case -EPROTO:
@@ -575,7 +575,7 @@
mbo->status = MBO_E_INVAL;
usb_unlink_urb(urb);
INIT_WORK(&anchor->clear_work_obj, wq_clear_halt);
- queue_work(schedule_usb_work, &anchor->clear_work_obj);
+ schedule_work(&anchor->clear_work_obj);
return;
case -ENODEV:
case -EPROTO:
@@ -785,7 +785,7 @@
temp_size += tail_space;
/* calculate extra length to comply w/ HW padding */
- conf->extra_len = (CEILING(temp_size, USB_MTU) * USB_MTU)
+ conf->extra_len = (DIV_ROUND_UP(temp_size, USB_MTU) * USB_MTU)
- conf->buffer_size;
exit:
mdev->conf[channel] = *conf;
@@ -872,7 +872,7 @@
{
struct most_dev *mdev = (struct most_dev *)data;
- queue_work(schedule_usb_work, &mdev->poll_work_obj);
+ schedule_work(&mdev->poll_work_obj);
mdev->link_stat_timer.expires = jiffies + (2 * HZ);
add_timer(&mdev->link_stat_timer);
}
@@ -1415,19 +1415,13 @@
pr_err("could not register hdm_usb driver\n");
return -EIO;
}
- schedule_usb_work = create_workqueue("hdmu_work");
- if (!schedule_usb_work) {
- pr_err("could not create workqueue\n");
- usb_deregister(&hdm_usb);
- return -ENOMEM;
- }
+
return 0;
}
static void __exit hdm_usb_exit(void)
{
pr_info("hdm_usb_exit()\n");
- destroy_workqueue(schedule_usb_work);
usb_deregister(&hdm_usb);
}
diff --git a/drivers/staging/most/mostcore/core.c b/drivers/staging/most/mostcore/core.c
index 322ee01..7c619fe 100644
--- a/drivers/staging/most/mostcore/core.c
+++ b/drivers/staging/most/mostcore/core.c
@@ -1259,7 +1259,6 @@
for (i = 0; i < c->cfg.num_buffers; i++) {
mbo = kzalloc(sizeof(*mbo), GFP_KERNEL);
if (!mbo) {
- pr_info("WARN: Allocation of MBO failed.\n");
retval = i;
goto _exit;
}
diff --git a/drivers/staging/mt29f_spinand/mt29f_spinand.c b/drivers/staging/mt29f_spinand/mt29f_spinand.c
index 3d9a426..75fe61c 100644
--- a/drivers/staging/mt29f_spinand/mt29f_spinand.c
+++ b/drivers/staging/mt29f_spinand/mt29f_spinand.c
@@ -175,7 +175,7 @@
retval = spinand_read_status(spi_nand, &stat);
if (retval < 0)
return -1;
- else if (!(stat & 0x1))
+ if (!(stat & 0x1))
break;
cond_resched();
diff --git a/drivers/staging/netlogic/platform_net.c b/drivers/staging/netlogic/platform_net.c
index 9b52154..abf4c71 100644
--- a/drivers/staging/netlogic/platform_net.c
+++ b/drivers/staging/netlogic/platform_net.c
@@ -170,7 +170,7 @@
xlr_net_dev0.num_resources = 2;
xlr_resource_init(&xlr_net0_res[0], xlr_gmac_offsets[0],
- xlr_gmac_irqs[0]);
+ xlr_gmac_irqs[0]);
platform_device_register(&xlr_net_dev0);
/* second block is XAUI, not supported yet */
@@ -183,7 +183,7 @@
ndata0.phy_addr[mac] = mac + 0x10;
xlr_resource_init(&xlr_net0_res[mac * 2],
- xlr_gmac_offsets[mac],
+ xlr_gmac_offsets[mac],
xlr_gmac_irqs[mac]);
}
xlr_net_dev0.num_resources = 8;
@@ -223,7 +223,7 @@
ndata0.tx_stnid[mac] = FMN_STNID_GMAC0_TX0 + mac;
ndata0.phy_addr[mac] = mac;
xlr_resource_init(&xlr_net0_res[mac * 2], xlr_gmac_offsets[mac],
- xlr_gmac_irqs[mac]);
+ xlr_gmac_irqs[mac]);
}
xlr_net_dev0.num_resources = 8;
xlr_net_dev0.resource = xlr_net0_res;
diff --git a/drivers/staging/netlogic/xlr_net.c b/drivers/staging/netlogic/xlr_net.c
index 0b4e819..0015847 100644
--- a/drivers/staging/netlogic/xlr_net.c
+++ b/drivers/staging/netlogic/xlr_net.c
@@ -69,8 +69,7 @@
return __raw_readl(base + reg);
}
-static inline void xlr_reg_update(u32 *base_addr,
- u32 off, u32 val, u32 mask)
+static inline void xlr_reg_update(u32 *base_addr, u32 off, u32 val, u32 mask)
{
u32 tmp;
@@ -122,8 +121,8 @@
return skb->data;
}
-static void xlr_net_fmn_handler(int bkt, int src_stnid, int size,
- int code, struct nlm_fmn_msg *msg, void *arg)
+static void xlr_net_fmn_handler(int bkt, int src_stnid, int size, int code,
+ struct nlm_fmn_msg *msg, void *arg)
{
struct sk_buff *skb;
void *skb_data = NULL;
@@ -131,13 +130,13 @@
struct xlr_net_priv *priv;
u32 port, length;
unsigned char *addr;
- struct xlr_adapter *adapter = (struct xlr_adapter *) arg;
+ struct xlr_adapter *adapter = (struct xlr_adapter *)arg;
length = (msg->msg0 >> 40) & 0x3fff;
if (length == 0) {
addr = bus_to_virt(msg->msg0 & 0xffffffffffULL);
addr = addr - MAC_SKB_BACK_PTR_SIZE;
- skb = (struct sk_buff *) *(unsigned long *)addr;
+ skb = (struct sk_buff *)(*(unsigned long *)addr);
dev_kfree_skb_any((struct sk_buff *)addr);
} else {
addr = (unsigned char *)
@@ -145,9 +144,9 @@
length = length - BYTE_OFFSET - MAC_CRC_LEN;
port = ((int)msg->msg0) & 0x0f;
addr = addr - MAC_SKB_BACK_PTR_SIZE;
- skb = (struct sk_buff *) *(unsigned long *)addr;
+ skb = (struct sk_buff *)(*(unsigned long *)addr);
skb->dev = adapter->netdev[port];
- if (skb->dev == NULL)
+ if (!skb->dev)
return;
ndev = skb->dev;
priv = netdev_priv(ndev);
@@ -207,7 +206,7 @@
struct xlr_net_priv *priv = netdev_priv(ndev);
int i;
- for (i = 0; i < MAX_FRIN_SPILL/4; i++) {
+ for (i = 0; i < MAX_FRIN_SPILL / 4; i++) {
skb_data = xlr_alloc_skb();
if (!skb_data) {
pr_err("SKB allocation failed\n");
@@ -252,7 +251,7 @@
}
static void xlr_make_tx_desc(struct nlm_fmn_msg *msg, unsigned long addr,
- struct sk_buff *skb)
+ struct sk_buff *skb)
{
unsigned long physkb = virt_to_phys(skb);
int cpu_core = nlm_core_id();
@@ -266,12 +265,13 @@
((u64)fr_stn_id << 54) | /* Free back id */
(u64)0 << 40 | /* Set len to 0 */
((u64)physkb & 0xffffffff)); /* 32bit address */
- msg->msg2 = msg->msg3 = 0;
+ msg->msg2 = 0;
+ msg->msg3 = 0;
}
static void __maybe_unused xlr_wakeup_queue(unsigned long dev)
{
- struct net_device *ndev = (struct net_device *) dev;
+ struct net_device *ndev = (struct net_device *)dev;
struct xlr_net_priv *priv = netdev_priv(ndev);
struct phy_device *phydev = xlr_get_phydev(priv);
@@ -280,7 +280,7 @@
}
static netdev_tx_t xlr_net_start_xmit(struct sk_buff *skb,
- struct net_device *ndev)
+ struct net_device *ndev)
{
struct nlm_fmn_msg msg;
struct xlr_net_priv *priv = netdev_priv(ndev);
@@ -309,10 +309,10 @@
/* set mac station address */
xlr_nae_wreg(priv->base_addr, R_MAC_ADDR0,
- ((ndev->dev_addr[5] << 24) | (ndev->dev_addr[4] << 16) |
- (ndev->dev_addr[3] << 8) | (ndev->dev_addr[2])));
+ ((ndev->dev_addr[5] << 24) | (ndev->dev_addr[4] << 16) |
+ (ndev->dev_addr[3] << 8) | (ndev->dev_addr[2])));
xlr_nae_wreg(priv->base_addr, R_MAC_ADDR0 + 1,
- ((ndev->dev_addr[1] << 24) | (ndev->dev_addr[0] << 16)));
+ ((ndev->dev_addr[1] << 24) | (ndev->dev_addr[0] << 16)));
xlr_nae_wreg(priv->base_addr, R_MAC_ADDR_MASK2, 0xffffffff);
xlr_nae_wreg(priv->base_addr, R_MAC_ADDR_MASK2 + 1, 0xffffffff);
@@ -320,12 +320,12 @@
xlr_nae_wreg(priv->base_addr, R_MAC_ADDR_MASK3 + 1, 0xffffffff);
xlr_nae_wreg(priv->base_addr, R_MAC_FILTER_CONFIG,
- (1 << O_MAC_FILTER_CONFIG__BROADCAST_EN) |
- (1 << O_MAC_FILTER_CONFIG__ALL_MCAST_EN) |
- (1 << O_MAC_FILTER_CONFIG__MAC_ADDR0_VALID));
+ (1 << O_MAC_FILTER_CONFIG__BROADCAST_EN) |
+ (1 << O_MAC_FILTER_CONFIG__ALL_MCAST_EN) |
+ (1 << O_MAC_FILTER_CONFIG__MAC_ADDR0_VALID));
if (priv->nd->phy_interface == PHY_INTERFACE_MODE_RGMII ||
- priv->nd->phy_interface == PHY_INTERFACE_MODE_SGMII)
+ priv->nd->phy_interface == PHY_INTERFACE_MODE_SGMII)
xlr_reg_update(priv->base_addr, R_IPG_IFG, MAC_B2B_IPG, 0x7f);
}
@@ -406,7 +406,8 @@
}
static struct rtnl_link_stats64 *xlr_get_stats64(struct net_device *ndev,
- struct rtnl_link_stats64 *stats)
+ struct rtnl_link_stats64 *stats)
{
xlr_stats(ndev, stats);
return stats;
@@ -426,7 +427,7 @@
* Gmac init
*/
static void *xlr_config_spill(struct xlr_net_priv *priv, int reg_start_0,
- int reg_start_1, int reg_size, int size)
+ int reg_start_1, int reg_size, int size)
{
void *spill;
u32 *base;
@@ -436,13 +437,15 @@
base = priv->base_addr;
spill_size = size;
spill = kmalloc(spill_size + SMP_CACHE_BYTES, GFP_ATOMIC);
- if (!spill)
+ if (!spill) {
pr_err("Unable to allocate memory for spill area!\n");
+ return ZERO_SIZE_PTR;
+ }
spill = PTR_ALIGN(spill, SMP_CACHE_BYTES);
phys_addr = virt_to_phys(spill);
dev_dbg(&priv->ndev->dev, "Allocated spill %d bytes at %lx\n",
- size, phys_addr);
+ size, phys_addr);
xlr_nae_wreg(base, reg_start_0, (phys_addr >> 5) & 0xffffffff);
xlr_nae_wreg(base, reg_start_1, ((u64)phys_addr >> 37) & 0x07);
xlr_nae_wreg(base, reg_size, spill_size);
@@ -511,19 +514,19 @@
xlr_nae_wreg(priv->base_addr, R_PDE_CLASS_0, (bkt_map & 0xffffffff));
xlr_nae_wreg(priv->base_addr, R_PDE_CLASS_0 + 1,
- ((bkt_map >> 32) & 0xffffffff));
+ ((bkt_map >> 32) & 0xffffffff));
xlr_nae_wreg(priv->base_addr, R_PDE_CLASS_1, (bkt_map & 0xffffffff));
xlr_nae_wreg(priv->base_addr, R_PDE_CLASS_1 + 1,
- ((bkt_map >> 32) & 0xffffffff));
+ ((bkt_map >> 32) & 0xffffffff));
xlr_nae_wreg(priv->base_addr, R_PDE_CLASS_2, (bkt_map & 0xffffffff));
xlr_nae_wreg(priv->base_addr, R_PDE_CLASS_2 + 1,
- ((bkt_map >> 32) & 0xffffffff));
+ ((bkt_map >> 32) & 0xffffffff));
xlr_nae_wreg(priv->base_addr, R_PDE_CLASS_3, (bkt_map & 0xffffffff));
xlr_nae_wreg(priv->base_addr, R_PDE_CLASS_3 + 1,
- ((bkt_map >> 32) & 0xffffffff));
+ ((bkt_map >> 32) & 0xffffffff));
}
/*
@@ -541,8 +544,8 @@
/* Setting non-core MsgBktSize(0x321 - 0x325) */
for (i = start_stn_id; i <= end_stn_id; i++) {
xlr_nae_wreg(priv->base_addr,
- R_GMAC_RFR0_BUCKET_SIZE + i - start_stn_id,
- bucket_size[i]);
+ R_GMAC_RFR0_BUCKET_SIZE + i - start_stn_id,
+ bucket_size[i]);
}
/*
@@ -552,8 +555,8 @@
for (i = 0; i < 8; i++) {
for (j = 0; j < 8; j++)
xlr_nae_wreg(priv->base_addr,
- (R_CC_CPU0_0 + (i * 8)) + j,
- gmac->credit_config[(i * 8) + j]);
+ (R_CC_CPU0_0 + (i * 8)) + j,
+ gmac->credit_config[(i * 8) + j]);
}
xlr_nae_wreg(priv->base_addr, R_MSG_TX_THRESHOLD, 3);
@@ -567,7 +570,7 @@
if (err)
return err;
nlm_register_fmn_handler(start_stn_id, end_stn_id, xlr_net_fmn_handler,
- priv->adapter);
+ priv->adapter);
return 0;
}
@@ -583,7 +586,7 @@
cpu_mask = priv->nd->cpu_mask;
pr_info("Using %s-based distribution\n",
- (use_bkt) ? "bucket" : "class");
+ (use_bkt) ? "bucket" : "class");
j = 0;
for (i = 0; i < 32; i++) {
if ((1 << i) & cpu_mask) {
@@ -614,7 +617,7 @@
val = ((c1 << 23) | (b1 << 17) | (use_bkt << 16) |
(c2 << 7) | (b2 << 1) | (use_bkt << 0));
dev_dbg(&priv->ndev->dev, "Table[%d] b1=%d b2=%d c1=%d c2=%d\n",
- i, b1, b2, c1, c2);
+ i, b1, b2, c1, c2);
xlr_nae_wreg(priv->base_addr, R_TRANSLATETABLE + i, val);
c1 = c2;
}
@@ -629,16 +632,16 @@
/* Use 7bit CRChash for flow classification with 127 as CRC polynomial*/
xlr_nae_wreg(priv->base_addr, R_PARSERCONFIGREG,
- ((0x7f << 8) | (1 << 1)));
+ ((0x7f << 8) | (1 << 1)));
/* configure the parser : L2 Type is configured in the bootloader */
/* extract IP: src, dest protocol */
xlr_nae_wreg(priv->base_addr, R_L3CTABLE,
- (9 << 20) | (1 << 19) | (1 << 18) | (0x01 << 16) |
- (0x0800 << 0));
+ (9 << 20) | (1 << 19) | (1 << 18) | (0x01 << 16) |
+ (0x0800 << 0));
xlr_nae_wreg(priv->base_addr, R_L3CTABLE + 1,
- (9 << 25) | (1 << 21) | (12 << 14) | (4 << 10) |
- (16 << 4) | 4);
+ (9 << 25) | (1 << 21) | (12 << 14) | (4 << 10) |
+ (16 << 4) | 4);
/* Configure to extract SRC port and Dest port for TCP and UDP pkts */
xlr_nae_wreg(priv->base_addr, R_L4CTABLE, 6);
@@ -663,7 +666,7 @@
xlr_nae_wreg(base_addr, R_MII_MGMT_ADDRESS, (phy_addr << 8) | regnum);
/* Write the data which starts the write cycle */
- xlr_nae_wreg(base_addr, R_MII_MGMT_WRITE_DATA, (u32) val);
+ xlr_nae_wreg(base_addr, R_MII_MGMT_WRITE_DATA, (u32)val);
/* poll for the read cycle to complete */
while (!timedout) {
@@ -692,11 +695,11 @@
/* setup the phy reg to be used */
xlr_nae_wreg(base_addr, R_MII_MGMT_ADDRESS,
- (phy_addr << 8) | (regnum << 0));
+ (phy_addr << 8) | (regnum << 0));
/* Issue the read command */
xlr_nae_wreg(base_addr, R_MII_MGMT_COMMAND,
- (1 << O_MII_MGMT_COMMAND__rstat));
+ (1 << O_MII_MGMT_COMMAND__rstat));
/* poll for the read cycle to complete */
while (!timedout) {
@@ -724,7 +727,7 @@
ret = xlr_phy_write(priv->mii_addr, phy_addr, regnum, val);
dev_dbg(&priv->ndev->dev, "mii_write phy %d : %d <- %x [%x]\n",
- phy_addr, regnum, val, ret);
+ phy_addr, regnum, val, ret);
return ret;
}
@@ -735,7 +738,7 @@
ret = xlr_phy_read(priv->mii_addr, phy_addr, regnum);
dev_dbg(&priv->ndev->dev, "mii_read phy %d : %d [%x]\n",
- phy_addr, regnum, ret);
+ phy_addr, regnum, ret);
return ret;
}
@@ -797,13 +800,16 @@
if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {
if (speed == SPEED_10)
xlr_nae_wreg(priv->base_addr,
- R_INTERFACE_CONTROL, SGMII_SPEED_10);
+ R_INTERFACE_CONTROL,
+ SGMII_SPEED_10);
if (speed == SPEED_100)
xlr_nae_wreg(priv->base_addr,
- R_INTERFACE_CONTROL, SGMII_SPEED_100);
+ R_INTERFACE_CONTROL,
+ SGMII_SPEED_100);
if (speed == SPEED_1000)
xlr_nae_wreg(priv->base_addr,
- R_INTERFACE_CONTROL, SGMII_SPEED_1000);
+ R_INTERFACE_CONTROL,
+ SGMII_SPEED_1000);
}
if (speed == SPEED_10)
xlr_nae_wreg(priv->base_addr, R_CORECONTROL, 0x2);
@@ -864,7 +870,7 @@
}
static int xlr_setup_mdio(struct xlr_net_priv *priv,
- struct platform_device *pdev)
+ struct platform_device *pdev)
{
int err;
@@ -877,7 +883,7 @@
priv->mii_bus->priv = priv;
priv->mii_bus->name = "xlr-mdio";
snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%d",
- priv->mii_bus->name, priv->port_id);
+ priv->mii_bus->name, priv->port_id);
priv->mii_bus->read = xlr_mii_read;
priv->mii_bus->write = xlr_mii_write;
priv->mii_bus->parent = &pdev->dev;
@@ -910,25 +916,31 @@
/* Setup MAC_CONFIG reg if (xls & rgmii) */
if ((prid == 0x8000 || prid == 0x4000 || prid == 0xc000) &&
- priv->nd->phy_interface == PHY_INTERFACE_MODE_RGMII)
+ priv->nd->phy_interface == PHY_INTERFACE_MODE_RGMII)
xlr_reg_update(priv->base_addr, R_RX_CONTROL,
- (1 << O_RX_CONTROL__RGMII), (1 << O_RX_CONTROL__RGMII));
+ (1 << O_RX_CONTROL__RGMII),
+ (1 << O_RX_CONTROL__RGMII));
/* Rx Tx enable */
xlr_reg_update(priv->base_addr, R_MAC_CONFIG_1,
- ((1 << O_MAC_CONFIG_1__rxen) | (1 << O_MAC_CONFIG_1__txen) |
- (1 << O_MAC_CONFIG_1__rxfc) | (1 << O_MAC_CONFIG_1__txfc)),
- ((1 << O_MAC_CONFIG_1__rxen) | (1 << O_MAC_CONFIG_1__txen) |
- (1 << O_MAC_CONFIG_1__rxfc) | (1 << O_MAC_CONFIG_1__txfc)));
+ ((1 << O_MAC_CONFIG_1__rxen) |
+ (1 << O_MAC_CONFIG_1__txen) |
+ (1 << O_MAC_CONFIG_1__rxfc) |
+ (1 << O_MAC_CONFIG_1__txfc)),
+ ((1 << O_MAC_CONFIG_1__rxen) |
+ (1 << O_MAC_CONFIG_1__txen) |
+ (1 << O_MAC_CONFIG_1__rxfc) |
+ (1 << O_MAC_CONFIG_1__txfc)));
/* Setup tx control reg */
xlr_reg_update(priv->base_addr, R_TX_CONTROL,
- ((1 << O_TX_CONTROL__TxEnable) |
- (512 << O_TX_CONTROL__TxThreshold)), 0x3fff);
+ ((1 << O_TX_CONTROL__TXENABLE) |
+ (512 << O_TX_CONTROL__TXTHRESHOLD)), 0x3fff);
/* Setup rx control reg */
xlr_reg_update(priv->base_addr, R_RX_CONTROL,
- 1 << O_RX_CONTROL__RxEnable, 1 << O_RX_CONTROL__RxEnable);
+ 1 << O_RX_CONTROL__RXENABLE,
+ 1 << O_RX_CONTROL__RXENABLE);
}
static void xlr_port_disable(struct xlr_net_priv *priv)
@@ -936,25 +948,26 @@
/* Setup MAC_CONFIG reg */
/* Rx Tx disable*/
xlr_reg_update(priv->base_addr, R_MAC_CONFIG_1,
- ((1 << O_MAC_CONFIG_1__rxen) | (1 << O_MAC_CONFIG_1__txen) |
- (1 << O_MAC_CONFIG_1__rxfc) | (1 << O_MAC_CONFIG_1__txfc)),
- 0x0);
+ ((1 << O_MAC_CONFIG_1__rxen) |
+ (1 << O_MAC_CONFIG_1__txen) |
+ (1 << O_MAC_CONFIG_1__rxfc) |
+ (1 << O_MAC_CONFIG_1__txfc)), 0x0);
/* Setup tx control reg */
xlr_reg_update(priv->base_addr, R_TX_CONTROL,
- ((1 << O_TX_CONTROL__TxEnable) |
- (512 << O_TX_CONTROL__TxThreshold)), 0);
+ ((1 << O_TX_CONTROL__TXENABLE) |
+ (512 << O_TX_CONTROL__TXTHRESHOLD)), 0);
/* Setup rx control reg */
xlr_reg_update(priv->base_addr, R_RX_CONTROL,
- 1 << O_RX_CONTROL__RxEnable, 0);
+ 1 << O_RX_CONTROL__RXENABLE, 0);
}
/*
* Initialization of gmac
*/
static int xlr_gmac_init(struct xlr_net_priv *priv,
- struct platform_device *pdev)
+ struct platform_device *pdev)
{
int ret;
@@ -963,9 +976,9 @@
xlr_port_disable(priv);
xlr_nae_wreg(priv->base_addr, R_DESC_PACK_CTRL,
- (1 << O_DESC_PACK_CTRL__MaxEntry)
- | (BYTE_OFFSET << O_DESC_PACK_CTRL__ByteOffset)
- | (1600 << O_DESC_PACK_CTRL__RegularSize));
+ (1 << O_DESC_PACK_CTRL__MAXENTRY) |
+ (BYTE_OFFSET << O_DESC_PACK_CTRL__BYTEOFFSET) |
+ (1600 << O_DESC_PACK_CTRL__REGULARSIZE));
ret = xlr_setup_mdio(priv, pdev);
if (ret)
@@ -977,21 +990,14 @@
/* speed 2.5Mhz */
xlr_nae_wreg(priv->base_addr, R_CORECONTROL, 0x02);
/* Setup Interrupt mask reg */
- xlr_nae_wreg(priv->base_addr, R_INTMASK,
- (1 << O_INTMASK__TxIllegal) |
- (1 << O_INTMASK__MDInt) |
- (1 << O_INTMASK__TxFetchError) |
- (1 << O_INTMASK__P2PSpillEcc) |
- (1 << O_INTMASK__TagFull) |
- (1 << O_INTMASK__Underrun) |
- (1 << O_INTMASK__Abort)
- );
+ xlr_nae_wreg(priv->base_addr, R_INTMASK, (1 << O_INTMASK__TXILLEGAL) |
+ (1 << O_INTMASK__MDINT) | (1 << O_INTMASK__TXFETCHERROR) |
+ (1 << O_INTMASK__P2PSPILLECC) | (1 << O_INTMASK__TAGFULL) |
+ (1 << O_INTMASK__UNDERRUN) | (1 << O_INTMASK__ABORT));
/* Clear all stats */
- xlr_reg_update(priv->base_addr, R_STATCTRL,
- 0, 1 << O_STATCTRL__ClrCnt);
- xlr_reg_update(priv->base_addr, R_STATCTRL, 1 << 2,
- 1 << 2);
+ xlr_reg_update(priv->base_addr, R_STATCTRL, 0, 1 << O_STATCTRL__CLRCNT);
+ xlr_reg_update(priv->base_addr, R_STATCTRL, 1 << 2, 1 << 2);
return 0;
}
@@ -1019,7 +1025,7 @@
* Each controller has 4 gmac ports; map each controller
* under one parent device, with its 4 gmac ports under that device.
*/
- for (port = 0; port < pdev->num_resources/2; port++) {
+ for (port = 0; port < pdev->num_resources / 2; port++) {
ndev = alloc_etherdev_mq(sizeof(struct xlr_net_priv), 32);
if (!ndev) {
pr_err("Allocation of Ethernet device failed\n");
@@ -1035,7 +1041,7 @@
if (res == NULL) {
pr_err("No memory resource for MAC %d\n",
- priv->port_id);
+ priv->port_id);
err = -ENODEV;
goto err_gmac;
}
@@ -1048,7 +1054,7 @@
adapter->netdev[port] = ndev;
res = platform_get_resource(pdev, IORESOURCE_IRQ, port);
- if (res == NULL) {
+ if (!res) {
pr_err("No irq resource for MAC %d\n", priv->port_id);
err = -ENODEV;
goto err_gmac;
@@ -1098,7 +1104,7 @@
err = register_netdev(ndev);
if (err) {
pr_err("Registering netdev failed for gmac%d\n",
- priv->port_id);
+ priv->port_id);
goto err_netdev;
}
platform_set_drvdata(pdev, priv);
diff --git a/drivers/staging/netlogic/xlr_net.h b/drivers/staging/netlogic/xlr_net.h
index 7ae8874..f76e16c 100644
--- a/drivers/staging/netlogic/xlr_net.h
+++ b/drivers/staging/netlogic/xlr_net.h
@@ -277,332 +277,332 @@
#define O_MAC_FILTER_CONFIG__MAC_ADDR0_VALID 0
#define R_HASH_TABLE_VECTOR 0x30
#define R_TX_CONTROL 0x0A0
-#define O_TX_CONTROL__Tx15Halt 31
-#define O_TX_CONTROL__Tx14Halt 30
-#define O_TX_CONTROL__Tx13Halt 29
-#define O_TX_CONTROL__Tx12Halt 28
-#define O_TX_CONTROL__Tx11Halt 27
-#define O_TX_CONTROL__Tx10Halt 26
-#define O_TX_CONTROL__Tx9Halt 25
-#define O_TX_CONTROL__Tx8Halt 24
-#define O_TX_CONTROL__Tx7Halt 23
-#define O_TX_CONTROL__Tx6Halt 22
-#define O_TX_CONTROL__Tx5Halt 21
-#define O_TX_CONTROL__Tx4Halt 20
-#define O_TX_CONTROL__Tx3Halt 19
-#define O_TX_CONTROL__Tx2Halt 18
-#define O_TX_CONTROL__Tx1Halt 17
-#define O_TX_CONTROL__Tx0Halt 16
-#define O_TX_CONTROL__TxIdle 15
-#define O_TX_CONTROL__TxEnable 14
-#define O_TX_CONTROL__TxThreshold 0
-#define W_TX_CONTROL__TxThreshold 14
+#define O_TX_CONTROL__TX15HALT 31
+#define O_TX_CONTROL__TX14HALT 30
+#define O_TX_CONTROL__TX13HALT 29
+#define O_TX_CONTROL__TX12HALT 28
+#define O_TX_CONTROL__TX11HALT 27
+#define O_TX_CONTROL__TX10HALT 26
+#define O_TX_CONTROL__TX9HALT 25
+#define O_TX_CONTROL__TX8HALT 24
+#define O_TX_CONTROL__TX7HALT 23
+#define O_TX_CONTROL__TX6HALT 22
+#define O_TX_CONTROL__TX5HALT 21
+#define O_TX_CONTROL__TX4HALT 20
+#define O_TX_CONTROL__TX3HALT 19
+#define O_TX_CONTROL__TX2HALT 18
+#define O_TX_CONTROL__TX1HALT 17
+#define O_TX_CONTROL__TX0HALT 16
+#define O_TX_CONTROL__TXIDLE 15
+#define O_TX_CONTROL__TXENABLE 14
+#define O_TX_CONTROL__TXTHRESHOLD 0
+#define W_TX_CONTROL__TXTHRESHOLD 14
#define R_RX_CONTROL 0x0A1
#define O_RX_CONTROL__RGMII 10
-#define O_RX_CONTROL__SoftReset 2
-#define O_RX_CONTROL__RxHalt 1
-#define O_RX_CONTROL__RxEnable 0
+#define O_RX_CONTROL__SOFTRESET 2
+#define O_RX_CONTROL__RXHALT 1
+#define O_RX_CONTROL__RXENABLE 0
#define R_DESC_PACK_CTRL 0x0A2
-#define O_DESC_PACK_CTRL__ByteOffset 17
-#define W_DESC_PACK_CTRL__ByteOffset 3
-#define O_DESC_PACK_CTRL__PrePadEnable 16
-#define O_DESC_PACK_CTRL__MaxEntry 14
-#define W_DESC_PACK_CTRL__MaxEntry 2
-#define O_DESC_PACK_CTRL__RegularSize 0
-#define W_DESC_PACK_CTRL__RegularSize 14
+#define O_DESC_PACK_CTRL__BYTEOFFSET 17
+#define W_DESC_PACK_CTRL__BYTEOFFSET 3
+#define O_DESC_PACK_CTRL__PREPADENABLE 16
+#define O_DESC_PACK_CTRL__MAXENTRY 14
+#define W_DESC_PACK_CTRL__MAXENTRY 2
+#define O_DESC_PACK_CTRL__REGULARSIZE 0
+#define W_DESC_PACK_CTRL__REGULARSIZE 14
#define R_STATCTRL 0x0A3
-#define O_STATCTRL__OverFlowEn 4
+#define O_STATCTRL__OVERFLOWEN 4
#define O_STATCTRL__GIG 3
-#define O_STATCTRL__Sten 2
-#define O_STATCTRL__ClrCnt 1
-#define O_STATCTRL__AutoZ 0
+#define O_STATCTRL__STEN 2
+#define O_STATCTRL__CLRCNT 1
+#define O_STATCTRL__AUTOZ 0
#define R_L2ALLOCCTRL 0x0A4
-#define O_L2ALLOCCTRL__TxL2Allocate 9
-#define W_L2ALLOCCTRL__TxL2Allocate 9
-#define O_L2ALLOCCTRL__RxL2Allocate 0
-#define W_L2ALLOCCTRL__RxL2Allocate 9
+#define O_L2ALLOCCTRL__TXL2ALLOCATE 9
+#define W_L2ALLOCCTRL__TXL2ALLOCATE 9
+#define O_L2ALLOCCTRL__RXL2ALLOCATE 0
+#define W_L2ALLOCCTRL__RXL2ALLOCATE 9
#define R_INTMASK 0x0A5
-#define O_INTMASK__Spi4TxError 28
-#define O_INTMASK__Spi4RxError 27
-#define O_INTMASK__RGMIIHalfDupCollision 27
-#define O_INTMASK__Abort 26
-#define O_INTMASK__Underrun 25
-#define O_INTMASK__DiscardPacket 24
-#define O_INTMASK__AsyncFifoFull 23
-#define O_INTMASK__TagFull 22
-#define O_INTMASK__Class3Full 21
-#define O_INTMASK__C3EarlyFull 20
-#define O_INTMASK__Class2Full 19
-#define O_INTMASK__C2EarlyFull 18
-#define O_INTMASK__Class1Full 17
-#define O_INTMASK__C1EarlyFull 16
-#define O_INTMASK__Class0Full 15
-#define O_INTMASK__C0EarlyFull 14
-#define O_INTMASK__RxDataFull 13
-#define O_INTMASK__RxEarlyFull 12
-#define O_INTMASK__RFreeEmpty 9
-#define O_INTMASK__RFEarlyEmpty 8
-#define O_INTMASK__P2PSpillEcc 7
-#define O_INTMASK__FreeDescFull 5
-#define O_INTMASK__FreeEarlyFull 4
-#define O_INTMASK__TxFetchError 3
-#define O_INTMASK__StatCarry 2
-#define O_INTMASK__MDInt 1
-#define O_INTMASK__TxIllegal 0
+#define O_INTMASK__SPI4TXERROR 28
+#define O_INTMASK__SPI4RXERROR 27
+#define O_INTMASK__RGMIIHALFDUPCOLLISION 27
+#define O_INTMASK__ABORT 26
+#define O_INTMASK__UNDERRUN 25
+#define O_INTMASK__DISCARDPACKET 24
+#define O_INTMASK__ASYNCFIFOFULL 23
+#define O_INTMASK__TAGFULL 22
+#define O_INTMASK__CLASS3FULL 21
+#define O_INTMASK__C3EARLYFULL 20
+#define O_INTMASK__CLASS2FULL 19
+#define O_INTMASK__C2EARLYFULL 18
+#define O_INTMASK__CLASS1FULL 17
+#define O_INTMASK__C1EARLYFULL 16
+#define O_INTMASK__CLASS0FULL 15
+#define O_INTMASK__C0EARLYFULL 14
+#define O_INTMASK__RXDATAFULL 13
+#define O_INTMASK__RXEARLYFULL 12
+#define O_INTMASK__RFREEEMPTY 9
+#define O_INTMASK__RFEARLYEMPTY 8
+#define O_INTMASK__P2PSPILLECC 7
+#define O_INTMASK__FREEDESCFULL 5
+#define O_INTMASK__FREEEARLYFULL 4
+#define O_INTMASK__TXFETCHERROR 3
+#define O_INTMASK__STATCARRY 2
+#define O_INTMASK__MDINT 1
+#define O_INTMASK__TXILLEGAL 0
#define R_INTREG 0x0A6
-#define O_INTREG__Spi4TxError 28
-#define O_INTREG__Spi4RxError 27
-#define O_INTREG__RGMIIHalfDupCollision 27
-#define O_INTREG__Abort 26
-#define O_INTREG__Underrun 25
-#define O_INTREG__DiscardPacket 24
-#define O_INTREG__AsyncFifoFull 23
-#define O_INTREG__TagFull 22
-#define O_INTREG__Class3Full 21
-#define O_INTREG__C3EarlyFull 20
-#define O_INTREG__Class2Full 19
-#define O_INTREG__C2EarlyFull 18
-#define O_INTREG__Class1Full 17
-#define O_INTREG__C1EarlyFull 16
-#define O_INTREG__Class0Full 15
-#define O_INTREG__C0EarlyFull 14
-#define O_INTREG__RxDataFull 13
-#define O_INTREG__RxEarlyFull 12
-#define O_INTREG__RFreeEmpty 9
-#define O_INTREG__RFEarlyEmpty 8
-#define O_INTREG__P2PSpillEcc 7
-#define O_INTREG__FreeDescFull 5
-#define O_INTREG__FreeEarlyFull 4
-#define O_INTREG__TxFetchError 3
-#define O_INTREG__StatCarry 2
-#define O_INTREG__MDInt 1
-#define O_INTREG__TxIllegal 0
+#define O_INTREG__SPI4TXERROR 28
+#define O_INTREG__SPI4RXERROR 27
+#define O_INTREG__RGMIIHALFDUPCOLLISION 27
+#define O_INTREG__ABORT 26
+#define O_INTREG__UNDERRUN 25
+#define O_INTREG__DISCARDPACKET 24
+#define O_INTREG__ASYNCFIFOFULL 23
+#define O_INTREG__TAGFULL 22
+#define O_INTREG__CLASS3FULL 21
+#define O_INTREG__C3EARLYFULL 20
+#define O_INTREG__CLASS2FULL 19
+#define O_INTREG__C2EARLYFULL 18
+#define O_INTREG__CLASS1FULL 17
+#define O_INTREG__C1EARLYFULL 16
+#define O_INTREG__CLASS0FULL 15
+#define O_INTREG__C0EARLYFULL 14
+#define O_INTREG__RXDATAFULL 13
+#define O_INTREG__RXEARLYFULL 12
+#define O_INTREG__RFREEEMPTY 9
+#define O_INTREG__RFEARLYEMPTY 8
+#define O_INTREG__P2PSPILLECC 7
+#define O_INTREG__FREEDESCFULL 5
+#define O_INTREG__FREEEARLYFULL 4
+#define O_INTREG__TXFETCHERROR 3
+#define O_INTREG__STATCARRY 2
+#define O_INTREG__MDINT 1
+#define O_INTREG__TXILLEGAL 0
#define R_TXRETRY 0x0A7
-#define O_TXRETRY__CollisionRetry 6
-#define O_TXRETRY__BusErrorRetry 5
-#define O_TXRETRY__UnderRunRetry 4
-#define O_TXRETRY__Retries 0
-#define W_TXRETRY__Retries 4
+#define O_TXRETRY__COLLISIONRETRY 6
+#define O_TXRETRY__BUSERRORRETRY 5
+#define O_TXRETRY__UNDERRUNRETRY 4
+#define O_TXRETRY__RETRIES 0
+#define W_TXRETRY__RETRIES 4
#define R_CORECONTROL 0x0A8
-#define O_CORECONTROL__ErrorThread 4
-#define W_CORECONTROL__ErrorThread 7
-#define O_CORECONTROL__Shutdown 2
-#define O_CORECONTROL__Speed 0
-#define W_CORECONTROL__Speed 2
+#define O_CORECONTROL__ERRORTHREAD 4
+#define W_CORECONTROL__ERRORTHREAD 7
+#define O_CORECONTROL__SHUTDOWN 2
+#define O_CORECONTROL__SPEED 0
+#define W_CORECONTROL__SPEED 2
#define R_BYTEOFFSET0 0x0A9
#define R_BYTEOFFSET1 0x0AA
#define R_L2TYPE_0 0x0F0
-#define O_L2TYPE__ExtraHdrProtoSize 26
-#define W_L2TYPE__ExtraHdrProtoSize 5
-#define O_L2TYPE__ExtraHdrProtoOffset 20
-#define W_L2TYPE__ExtraHdrProtoOffset 6
-#define O_L2TYPE__ExtraHeaderSize 14
-#define W_L2TYPE__ExtraHeaderSize 6
-#define O_L2TYPE__ProtoOffset 8
-#define W_L2TYPE__ProtoOffset 6
-#define O_L2TYPE__L2HdrOffset 2
-#define W_L2TYPE__L2HdrOffset 6
-#define O_L2TYPE__L2Proto 0
-#define W_L2TYPE__L2Proto 2
+#define O_L2TYPE__EXTRAHDRPROTOSIZE 26
+#define W_L2TYPE__EXTRAHDRPROTOSIZE 5
+#define O_L2TYPE__EXTRAHDRPROTOOFFSET 20
+#define W_L2TYPE__EXTRAHDRPROTOOFFSET 6
+#define O_L2TYPE__EXTRAHEADERSIZE 14
+#define W_L2TYPE__EXTRAHEADERSIZE 6
+#define O_L2TYPE__PROTOOFFSET 8
+#define W_L2TYPE__PROTOOFFSET 6
+#define O_L2TYPE__L2HDROFFSET 2
+#define W_L2TYPE__L2HDROFFSET 6
+#define O_L2TYPE__L2PROTO 0
+#define W_L2TYPE__L2PROTO 2
#define R_L2TYPE_1 0xF0
#define R_L2TYPE_2 0xF0
#define R_L2TYPE_3 0xF0
#define R_PARSERCONFIGREG 0x100
-#define O_PARSERCONFIGREG__CRCHashPoly 8
-#define W_PARSERCONFIGREG__CRCHashPoly 7
-#define O_PARSERCONFIGREG__PrePadOffset 4
-#define W_PARSERCONFIGREG__PrePadOffset 4
-#define O_PARSERCONFIGREG__UseCAM 2
-#define O_PARSERCONFIGREG__UseHASH 1
-#define O_PARSERCONFIGREG__UseProto 0
+#define O_PARSERCONFIGREG__CRCHASHPOLY 8
+#define W_PARSERCONFIGREG__CRCHASHPOLY 7
+#define O_PARSERCONFIGREG__PREPADOFFSET 4
+#define W_PARSERCONFIGREG__PREPADOFFSET 4
+#define O_PARSERCONFIGREG__USECAM 2
+#define O_PARSERCONFIGREG__USEHASH 1
+#define O_PARSERCONFIGREG__USEPROTO 0
#define R_L3CTABLE 0x140
-#define O_L3CTABLE__Offset0 25
-#define W_L3CTABLE__Offset0 7
-#define O_L3CTABLE__Len0 21
-#define W_L3CTABLE__Len0 4
-#define O_L3CTABLE__Offset1 14
-#define W_L3CTABLE__Offset1 7
-#define O_L3CTABLE__Len1 10
-#define W_L3CTABLE__Len1 4
-#define O_L3CTABLE__Offset2 4
-#define W_L3CTABLE__Offset2 6
-#define O_L3CTABLE__Len2 0
-#define W_L3CTABLE__Len2 4
-#define O_L3CTABLE__L3HdrOffset 26
-#define W_L3CTABLE__L3HdrOffset 6
-#define O_L3CTABLE__L4ProtoOffset 20
-#define W_L3CTABLE__L4ProtoOffset 6
-#define O_L3CTABLE__IPChksumCompute 19
-#define O_L3CTABLE__L4Classify 18
-#define O_L3CTABLE__L2Proto 16
-#define W_L3CTABLE__L2Proto 2
-#define O_L3CTABLE__L3ProtoKey 0
-#define W_L3CTABLE__L3ProtoKey 16
+#define O_L3CTABLE__OFFSET0 25
+#define W_L3CTABLE__OFFSET0 7
+#define O_L3CTABLE__LEN0 21
+#define W_L3CTABLE__LEN0 4
+#define O_L3CTABLE__OFFSET1 14
+#define W_L3CTABLE__OFFSET1 7
+#define O_L3CTABLE__LEN1 10
+#define W_L3CTABLE__LEN1 4
+#define O_L3CTABLE__OFFSET2 4
+#define W_L3CTABLE__OFFSET2 6
+#define O_L3CTABLE__LEN2 0
+#define W_L3CTABLE__LEN2 4
+#define O_L3CTABLE__L3HDROFFSET 26
+#define W_L3CTABLE__L3HDROFFSET 6
+#define O_L3CTABLE__L4PROTOOFFSET 20
+#define W_L3CTABLE__L4PROTOOFFSET 6
+#define O_L3CTABLE__IPCHKSUMCOMPUTE 19
+#define O_L3CTABLE__L4CLASSIFY 18
+#define O_L3CTABLE__L2PROTO 16
+#define W_L3CTABLE__L2PROTO 2
+#define O_L3CTABLE__L3PROTOKEY 0
+#define W_L3CTABLE__L3PROTOKEY 16
#define R_L4CTABLE 0x160
-#define O_L4CTABLE__Offset0 21
-#define W_L4CTABLE__Offset0 6
-#define O_L4CTABLE__Len0 17
-#define W_L4CTABLE__Len0 4
-#define O_L4CTABLE__Offset1 11
-#define W_L4CTABLE__Offset1 6
-#define O_L4CTABLE__Len1 7
-#define W_L4CTABLE__Len1 4
-#define O_L4CTABLE__TCPChksumEnable 0
+#define O_L4CTABLE__OFFSET0 21
+#define W_L4CTABLE__OFFSET0 6
+#define O_L4CTABLE__LEN0 17
+#define W_L4CTABLE__LEN0 4
+#define O_L4CTABLE__OFFSET1 11
+#define W_L4CTABLE__OFFSET1 6
+#define O_L4CTABLE__LEN1 7
+#define W_L4CTABLE__LEN1 4
+#define O_L4CTABLE__TCPCHKSUMENABLE 0
#define R_CAM4X128TABLE 0x172
-#define O_CAM4X128TABLE__ClassId 7
-#define W_CAM4X128TABLE__ClassId 2
-#define O_CAM4X128TABLE__BucketId 1
-#define W_CAM4X128TABLE__BucketId 6
-#define O_CAM4X128TABLE__UseBucket 0
+#define O_CAM4X128TABLE__CLASSID 7
+#define W_CAM4X128TABLE__CLASSID 2
+#define O_CAM4X128TABLE__BUCKETID 1
+#define W_CAM4X128TABLE__BUCKETID 6
+#define O_CAM4X128TABLE__USEBUCKET 0
#define R_CAM4X128KEY 0x180
#define R_TRANSLATETABLE 0x1A0
#define R_DMACR0 0x200
-#define O_DMACR0__Data0WrMaxCr 27
-#define W_DMACR0__Data0WrMaxCr 3
-#define O_DMACR0__Data0RdMaxCr 24
-#define W_DMACR0__Data0RdMaxCr 3
-#define O_DMACR0__Data1WrMaxCr 21
-#define W_DMACR0__Data1WrMaxCr 3
-#define O_DMACR0__Data1RdMaxCr 18
-#define W_DMACR0__Data1RdMaxCr 3
-#define O_DMACR0__Data2WrMaxCr 15
-#define W_DMACR0__Data2WrMaxCr 3
-#define O_DMACR0__Data2RdMaxCr 12
-#define W_DMACR0__Data2RdMaxCr 3
-#define O_DMACR0__Data3WrMaxCr 9
-#define W_DMACR0__Data3WrMaxCr 3
-#define O_DMACR0__Data3RdMaxCr 6
-#define W_DMACR0__Data3RdMaxCr 3
-#define O_DMACR0__Data4WrMaxCr 3
-#define W_DMACR0__Data4WrMaxCr 3
-#define O_DMACR0__Data4RdMaxCr 0
-#define W_DMACR0__Data4RdMaxCr 3
+#define O_DMACR0__DATA0WRMAXCR 27
+#define W_DMACR0__DATA0WRMAXCR 3
+#define O_DMACR0__DATA0RDMAXCR 24
+#define W_DMACR0__DATA0RDMAXCR 3
+#define O_DMACR0__DATA1WRMAXCR 21
+#define W_DMACR0__DATA1WRMAXCR 3
+#define O_DMACR0__DATA1RDMAXCR 18
+#define W_DMACR0__DATA1RDMAXCR 3
+#define O_DMACR0__DATA2WRMAXCR 15
+#define W_DMACR0__DATA2WRMAXCR 3
+#define O_DMACR0__DATA2RDMAXCR 12
+#define W_DMACR0__DATA2RDMAXCR 3
+#define O_DMACR0__DATA3WRMAXCR 9
+#define W_DMACR0__DATA3WRMAXCR 3
+#define O_DMACR0__DATA3RDMAXCR 6
+#define W_DMACR0__DATA3RDMAXCR 3
+#define O_DMACR0__DATA4WRMAXCR 3
+#define W_DMACR0__DATA4WRMAXCR 3
+#define O_DMACR0__DATA4RDMAXCR 0
+#define W_DMACR0__DATA4RDMAXCR 3
#define R_DMACR1 0x201
-#define O_DMACR1__Data5WrMaxCr 27
-#define W_DMACR1__Data5WrMaxCr 3
-#define O_DMACR1__Data5RdMaxCr 24
-#define W_DMACR1__Data5RdMaxCr 3
-#define O_DMACR1__Data6WrMaxCr 21
-#define W_DMACR1__Data6WrMaxCr 3
-#define O_DMACR1__Data6RdMaxCr 18
-#define W_DMACR1__Data6RdMaxCr 3
-#define O_DMACR1__Data7WrMaxCr 15
-#define W_DMACR1__Data7WrMaxCr 3
-#define O_DMACR1__Data7RdMaxCr 12
-#define W_DMACR1__Data7RdMaxCr 3
-#define O_DMACR1__Data8WrMaxCr 9
-#define W_DMACR1__Data8WrMaxCr 3
-#define O_DMACR1__Data8RdMaxCr 6
-#define W_DMACR1__Data8RdMaxCr 3
-#define O_DMACR1__Data9WrMaxCr 3
-#define W_DMACR1__Data9WrMaxCr 3
-#define O_DMACR1__Data9RdMaxCr 0
-#define W_DMACR1__Data9RdMaxCr 3
+#define O_DMACR1__DATA5WRMAXCR 27
+#define W_DMACR1__DATA5WRMAXCR 3
+#define O_DMACR1__DATA5RDMAXCR 24
+#define W_DMACR1__DATA5RDMAXCR 3
+#define O_DMACR1__DATA6WRMAXCR 21
+#define W_DMACR1__DATA6WRMAXCR 3
+#define O_DMACR1__DATA6RDMAXCR 18
+#define W_DMACR1__DATA6RDMAXCR 3
+#define O_DMACR1__DATA7WRMAXCR 15
+#define W_DMACR1__DATA7WRMAXCR 3
+#define O_DMACR1__DATA7RDMAXCR 12
+#define W_DMACR1__DATA7RDMAXCR 3
+#define O_DMACR1__DATA8WRMAXCR 9
+#define W_DMACR1__DATA8WRMAXCR 3
+#define O_DMACR1__DATA8RDMAXCR 6
+#define W_DMACR1__DATA8RDMAXCR 3
+#define O_DMACR1__DATA9WRMAXCR 3
+#define W_DMACR1__DATA9WRMAXCR 3
+#define O_DMACR1__DATA9RDMAXCR 0
+#define W_DMACR1__DATA9RDMAXCR 3
#define R_DMACR2 0x202
-#define O_DMACR2__Data10WrMaxCr 27
-#define W_DMACR2__Data10WrMaxCr 3
-#define O_DMACR2__Data10RdMaxCr 24
-#define W_DMACR2__Data10RdMaxCr 3
-#define O_DMACR2__Data11WrMaxCr 21
-#define W_DMACR2__Data11WrMaxCr 3
-#define O_DMACR2__Data11RdMaxCr 18
-#define W_DMACR2__Data11RdMaxCr 3
-#define O_DMACR2__Data12WrMaxCr 15
-#define W_DMACR2__Data12WrMaxCr 3
-#define O_DMACR2__Data12RdMaxCr 12
-#define W_DMACR2__Data12RdMaxCr 3
-#define O_DMACR2__Data13WrMaxCr 9
-#define W_DMACR2__Data13WrMaxCr 3
-#define O_DMACR2__Data13RdMaxCr 6
-#define W_DMACR2__Data13RdMaxCr 3
-#define O_DMACR2__Data14WrMaxCr 3
-#define W_DMACR2__Data14WrMaxCr 3
-#define O_DMACR2__Data14RdMaxCr 0
-#define W_DMACR2__Data14RdMaxCr 3
+#define O_DMACR2__DATA10WRMAXCR 27
+#define W_DMACR2__DATA10WRMAXCR 3
+#define O_DMACR2__DATA10RDMAXCR 24
+#define W_DMACR2__DATA10RDMAXCR 3
+#define O_DMACR2__DATA11WRMAXCR 21
+#define W_DMACR2__DATA11WRMAXCR 3
+#define O_DMACR2__DATA11RDMAXCR 18
+#define W_DMACR2__DATA11RDMAXCR 3
+#define O_DMACR2__DATA12WRMAXCR 15
+#define W_DMACR2__DATA12WRMAXCR 3
+#define O_DMACR2__DATA12RDMAXCR 12
+#define W_DMACR2__DATA12RDMAXCR 3
+#define O_DMACR2__DATA13WRMAXCR 9
+#define W_DMACR2__DATA13WRMAXCR 3
+#define O_DMACR2__DATA13RDMAXCR 6
+#define W_DMACR2__DATA13RDMAXCR 3
+#define O_DMACR2__DATA14WRMAXCR 3
+#define W_DMACR2__DATA14WRMAXCR 3
+#define O_DMACR2__DATA14RDMAXCR 0
+#define W_DMACR2__DATA14RDMAXCR 3
#define R_DMACR3 0x203
-#define O_DMACR3__Data15WrMaxCr 27
-#define W_DMACR3__Data15WrMaxCr 3
-#define O_DMACR3__Data15RdMaxCr 24
-#define W_DMACR3__Data15RdMaxCr 3
-#define O_DMACR3__SpClassWrMaxCr 21
-#define W_DMACR3__SpClassWrMaxCr 3
-#define O_DMACR3__SpClassRdMaxCr 18
-#define W_DMACR3__SpClassRdMaxCr 3
-#define O_DMACR3__JumFrInWrMaxCr 15
-#define W_DMACR3__JumFrInWrMaxCr 3
-#define O_DMACR3__JumFrInRdMaxCr 12
-#define W_DMACR3__JumFrInRdMaxCr 3
-#define O_DMACR3__RegFrInWrMaxCr 9
-#define W_DMACR3__RegFrInWrMaxCr 3
-#define O_DMACR3__RegFrInRdMaxCr 6
-#define W_DMACR3__RegFrInRdMaxCr 3
-#define O_DMACR3__FrOutWrMaxCr 3
-#define W_DMACR3__FrOutWrMaxCr 3
-#define O_DMACR3__FrOutRdMaxCr 0
-#define W_DMACR3__FrOutRdMaxCr 3
+#define O_DMACR3__DATA15WRMAXCR 27
+#define W_DMACR3__DATA15WRMAXCR 3
+#define O_DMACR3__DATA15RDMAXCR 24
+#define W_DMACR3__DATA15RDMAXCR 3
+#define O_DMACR3__SPCLASSWRMAXCR 21
+#define W_DMACR3__SPCLASSWRMAXCR 3
+#define O_DMACR3__SPCLASSRDMAXCR 18
+#define W_DMACR3__SPCLASSRDMAXCR 3
+#define O_DMACR3__JUMFRINWRMAXCR 15
+#define W_DMACR3__JUMFRINWRMAXCR 3
+#define O_DMACR3__JUMFRINRDMAXCR 12
+#define W_DMACR3__JUMFRINRDMAXCR 3
+#define O_DMACR3__REGFRINWRMAXCR 9
+#define W_DMACR3__REGFRINWRMAXCR 3
+#define O_DMACR3__REGFRINRDMAXCR 6
+#define W_DMACR3__REGFRINRDMAXCR 3
+#define O_DMACR3__FROUTWRMAXCR 3
+#define W_DMACR3__FROUTWRMAXCR 3
+#define O_DMACR3__FROUTRDMAXCR 0
+#define W_DMACR3__FROUTRDMAXCR 3
#define R_REG_FRIN_SPILL_MEM_START_0 0x204
-#define O_REG_FRIN_SPILL_MEM_START_0__RegFrInSpillMemStart0 0
-#define W_REG_FRIN_SPILL_MEM_START_0__RegFrInSpillMemStart0 32
+#define O_REG_FRIN_SPILL_MEM_START_0__REGFRINSPILLMEMSTART0 0
+#define W_REG_FRIN_SPILL_MEM_START_0__REGFRINSPILLMEMSTART0 32
#define R_REG_FRIN_SPILL_MEM_START_1 0x205
-#define O_REG_FRIN_SPILL_MEM_START_1__RegFrInSpillMemStart1 0
-#define W_REG_FRIN_SPILL_MEM_START_1__RegFrInSpillMemStart1 3
+#define O_REG_FRIN_SPILL_MEM_START_1__REGFRINSPILLMEMSTART1 0
+#define W_REG_FRIN_SPILL_MEM_START_1__REGFRINSPILLMEMSTART1 3
#define R_REG_FRIN_SPILL_MEM_SIZE 0x206
-#define O_REG_FRIN_SPILL_MEM_SIZE__RegFrInSpillMemSize 0
-#define W_REG_FRIN_SPILL_MEM_SIZE__RegFrInSpillMemSize 32
+#define O_REG_FRIN_SPILL_MEM_SIZE__REGFRINSPILLMEMSIZE 0
+#define W_REG_FRIN_SPILL_MEM_SIZE__REGFRINSPILLMEMSIZE 32
#define R_FROUT_SPILL_MEM_START_0 0x207
-#define O_FROUT_SPILL_MEM_START_0__FrOutSpillMemStart0 0
-#define W_FROUT_SPILL_MEM_START_0__FrOutSpillMemStart0 32
+#define O_FROUT_SPILL_MEM_START_0__FROUTSPILLMEMSTART0 0
+#define W_FROUT_SPILL_MEM_START_0__FROUTSPILLMEMSTART0 32
#define R_FROUT_SPILL_MEM_START_1 0x208
-#define O_FROUT_SPILL_MEM_START_1__FrOutSpillMemStart1 0
-#define W_FROUT_SPILL_MEM_START_1__FrOutSpillMemStart1 3
+#define O_FROUT_SPILL_MEM_START_1__FROUTSPILLMEMSTART1 0
+#define W_FROUT_SPILL_MEM_START_1__FROUTSPILLMEMSTART1 3
#define R_FROUT_SPILL_MEM_SIZE 0x209
-#define O_FROUT_SPILL_MEM_SIZE__FrOutSpillMemSize 0
-#define W_FROUT_SPILL_MEM_SIZE__FrOutSpillMemSize 32
+#define O_FROUT_SPILL_MEM_SIZE__FROUTSPILLMEMSIZE 0
+#define W_FROUT_SPILL_MEM_SIZE__FROUTSPILLMEMSIZE 32
#define R_CLASS0_SPILL_MEM_START_0 0x20A
-#define O_CLASS0_SPILL_MEM_START_0__Class0SpillMemStart0 0
-#define W_CLASS0_SPILL_MEM_START_0__Class0SpillMemStart0 32
+#define O_CLASS0_SPILL_MEM_START_0__CLASS0SPILLMEMSTART0 0
+#define W_CLASS0_SPILL_MEM_START_0__CLASS0SPILLMEMSTART0 32
#define R_CLASS0_SPILL_MEM_START_1 0x20B
-#define O_CLASS0_SPILL_MEM_START_1__Class0SpillMemStart1 0
-#define W_CLASS0_SPILL_MEM_START_1__Class0SpillMemStart1 3
+#define O_CLASS0_SPILL_MEM_START_1__CLASS0SPILLMEMSTART1 0
+#define W_CLASS0_SPILL_MEM_START_1__CLASS0SPILLMEMSTART1 3
#define R_CLASS0_SPILL_MEM_SIZE 0x20C
-#define O_CLASS0_SPILL_MEM_SIZE__Class0SpillMemSize 0
-#define W_CLASS0_SPILL_MEM_SIZE__Class0SpillMemSize 32
+#define O_CLASS0_SPILL_MEM_SIZE__CLASS0SPILLMEMSIZE 0
+#define W_CLASS0_SPILL_MEM_SIZE__CLASS0SPILLMEMSIZE 32
#define R_JUMFRIN_SPILL_MEM_START_0 0x20D
-#define O_JUMFRIN_SPILL_MEM_START_0__JumFrInSpillMemStar0 0
-#define W_JUMFRIN_SPILL_MEM_START_0__JumFrInSpillMemStar0 32
+#define O_JUMFRIN_SPILL_MEM_START_0__JUMFRINSPILLMEMSTART0 0
+#define W_JUMFRIN_SPILL_MEM_START_0__JUMFRINSPILLMEMSTART0 32
#define R_JUMFRIN_SPILL_MEM_START_1 0x20E
-#define O_JUMFRIN_SPILL_MEM_START_1__JumFrInSpillMemStart1 0
-#define W_JUMFRIN_SPILL_MEM_START_1__JumFrInSpillMemStart1 3
+#define O_JUMFRIN_SPILL_MEM_START_1__JUMFRINSPILLMEMSTART1 0
+#define W_JUMFRIN_SPILL_MEM_START_1__JUMFRINSPILLMEMSTART1 3
#define R_JUMFRIN_SPILL_MEM_SIZE 0x20F
-#define O_JUMFRIN_SPILL_MEM_SIZE__JumFrInSpillMemSize 0
-#define W_JUMFRIN_SPILL_MEM_SIZE__JumFrInSpillMemSize 32
+#define O_JUMFRIN_SPILL_MEM_SIZE__JUMFRINSPILLMEMSIZE 0
+#define W_JUMFRIN_SPILL_MEM_SIZE__JUMFRINSPILLMEMSIZE 32
#define R_CLASS1_SPILL_MEM_START_0 0x210
-#define O_CLASS1_SPILL_MEM_START_0__Class1SpillMemStart0 0
-#define W_CLASS1_SPILL_MEM_START_0__Class1SpillMemStart0 32
+#define O_CLASS1_SPILL_MEM_START_0__CLASS1SPILLMEMSTART0 0
+#define W_CLASS1_SPILL_MEM_START_0__CLASS1SPILLMEMSTART0 32
#define R_CLASS1_SPILL_MEM_START_1 0x211
-#define O_CLASS1_SPILL_MEM_START_1__Class1SpillMemStart1 0
-#define W_CLASS1_SPILL_MEM_START_1__Class1SpillMemStart1 3
+#define O_CLASS1_SPILL_MEM_START_1__CLASS1SPILLMEMSTART1 0
+#define W_CLASS1_SPILL_MEM_START_1__CLASS1SPILLMEMSTART1 3
#define R_CLASS1_SPILL_MEM_SIZE 0x212
-#define O_CLASS1_SPILL_MEM_SIZE__Class1SpillMemSize 0
-#define W_CLASS1_SPILL_MEM_SIZE__Class1SpillMemSize 32
+#define O_CLASS1_SPILL_MEM_SIZE__CLASS1SPILLMEMSIZE 0
+#define W_CLASS1_SPILL_MEM_SIZE__CLASS1SPILLMEMSIZE 32
#define R_CLASS2_SPILL_MEM_START_0 0x213
-#define O_CLASS2_SPILL_MEM_START_0__Class2SpillMemStart0 0
-#define W_CLASS2_SPILL_MEM_START_0__Class2SpillMemStart0 32
+#define O_CLASS2_SPILL_MEM_START_0__CLASS2SPILLMEMSTART0 0
+#define W_CLASS2_SPILL_MEM_START_0__CLASS2SPILLMEMSTART0 32
#define R_CLASS2_SPILL_MEM_START_1 0x214
-#define O_CLASS2_SPILL_MEM_START_1__Class2SpillMemStart1 0
-#define W_CLASS2_SPILL_MEM_START_1__Class2SpillMemStart1 3
+#define O_CLASS2_SPILL_MEM_START_1__CLASS2SPILLMEMSTART1 0
+#define W_CLASS2_SPILL_MEM_START_1__CLASS2SPILLMEMSTART1 3
#define R_CLASS2_SPILL_MEM_SIZE 0x215
-#define O_CLASS2_SPILL_MEM_SIZE__Class2SpillMemSize 0
-#define W_CLASS2_SPILL_MEM_SIZE__Class2SpillMemSize 32
+#define O_CLASS2_SPILL_MEM_SIZE__CLASS2SPILLMEMSIZE 0
+#define W_CLASS2_SPILL_MEM_SIZE__CLASS2SPILLMEMSIZE 32
#define R_CLASS3_SPILL_MEM_START_0 0x216
-#define O_CLASS3_SPILL_MEM_START_0__Class3SpillMemStart0 0
-#define W_CLASS3_SPILL_MEM_START_0__Class3SpillMemStart0 32
+#define O_CLASS3_SPILL_MEM_START_0__CLASS3SPILLMEMSTART0 0
+#define W_CLASS3_SPILL_MEM_START_0__CLASS3SPILLMEMSTART0 32
#define R_CLASS3_SPILL_MEM_START_1 0x217
-#define O_CLASS3_SPILL_MEM_START_1__Class3SpillMemStart1 0
-#define W_CLASS3_SPILL_MEM_START_1__Class3SpillMemStart1 3
+#define O_CLASS3_SPILL_MEM_START_1__CLASS3SPILLMEMSTART1 0
+#define W_CLASS3_SPILL_MEM_START_1__CLASS3SPILLMEMSTART1 3
#define R_CLASS3_SPILL_MEM_SIZE 0x218
-#define O_CLASS3_SPILL_MEM_SIZE__Class3SpillMemSize 0
-#define W_CLASS3_SPILL_MEM_SIZE__Class3SpillMemSize 32
+#define O_CLASS3_SPILL_MEM_SIZE__CLASS3SPILLMEMSIZE 0
+#define W_CLASS3_SPILL_MEM_SIZE__CLASS3SPILLMEMSIZE 32
#define R_REG_FRIN1_SPILL_MEM_START_0 0x219
#define R_REG_FRIN1_SPILL_MEM_START_1 0x21a
#define R_REG_FRIN1_SPILL_MEM_SIZE 0x21b
@@ -679,244 +679,244 @@
#define O_SPISTRV3__EG_STRV_THRESH_15 0
#define W_SPISTRV3__EG_STRV_THRESH_15 7
#define R_TXDATAFIFO0 0x221
-#define O_TXDATAFIFO0__Tx0DataFifoStart 24
-#define W_TXDATAFIFO0__Tx0DataFifoStart 7
-#define O_TXDATAFIFO0__Tx0DataFifoSize 16
-#define W_TXDATAFIFO0__Tx0DataFifoSize 7
-#define O_TXDATAFIFO0__Tx1DataFifoStart 8
-#define W_TXDATAFIFO0__Tx1DataFifoStart 7
-#define O_TXDATAFIFO0__Tx1DataFifoSize 0
-#define W_TXDATAFIFO0__Tx1DataFifoSize 7
+#define O_TXDATAFIFO0__TX0DATAFIFOSTART 24
+#define W_TXDATAFIFO0__TX0DATAFIFOSTART 7
+#define O_TXDATAFIFO0__TX0DATAFIFOSIZE 16
+#define W_TXDATAFIFO0__TX0DATAFIFOSIZE 7
+#define O_TXDATAFIFO0__TX1DATAFIFOSTART 8
+#define W_TXDATAFIFO0__TX1DATAFIFOSTART 7
+#define O_TXDATAFIFO0__TX1DATAFIFOSIZE 0
+#define W_TXDATAFIFO0__TX1DATAFIFOSIZE 7
#define R_TXDATAFIFO1 0x222
-#define O_TXDATAFIFO1__Tx2DataFifoStart 24
-#define W_TXDATAFIFO1__Tx2DataFifoStart 7
-#define O_TXDATAFIFO1__Tx2DataFifoSize 16
-#define W_TXDATAFIFO1__Tx2DataFifoSize 7
-#define O_TXDATAFIFO1__Tx3DataFifoStart 8
-#define W_TXDATAFIFO1__Tx3DataFifoStart 7
-#define O_TXDATAFIFO1__Tx3DataFifoSize 0
-#define W_TXDATAFIFO1__Tx3DataFifoSize 7
+#define O_TXDATAFIFO1__TX2DATAFIFOSTART 24
+#define W_TXDATAFIFO1__TX2DATAFIFOSTART 7
+#define O_TXDATAFIFO1__TX2DATAFIFOSIZE 16
+#define W_TXDATAFIFO1__TX2DATAFIFOSIZE 7
+#define O_TXDATAFIFO1__TX3DATAFIFOSTART 8
+#define W_TXDATAFIFO1__TX3DATAFIFOSTART 7
+#define O_TXDATAFIFO1__TX3DATAFIFOSIZE 0
+#define W_TXDATAFIFO1__TX3DATAFIFOSIZE 7
#define R_TXDATAFIFO2 0x223
-#define O_TXDATAFIFO2__Tx4DataFifoStart 24
-#define W_TXDATAFIFO2__Tx4DataFifoStart 7
-#define O_TXDATAFIFO2__Tx4DataFifoSize 16
-#define W_TXDATAFIFO2__Tx4DataFifoSize 7
-#define O_TXDATAFIFO2__Tx5DataFifoStart 8
-#define W_TXDATAFIFO2__Tx5DataFifoStart 7
-#define O_TXDATAFIFO2__Tx5DataFifoSize 0
-#define W_TXDATAFIFO2__Tx5DataFifoSize 7
+#define O_TXDATAFIFO2__TX4DATAFIFOSTART 24
+#define W_TXDATAFIFO2__TX4DATAFIFOSTART 7
+#define O_TXDATAFIFO2__TX4DATAFIFOSIZE 16
+#define W_TXDATAFIFO2__TX4DATAFIFOSIZE 7
+#define O_TXDATAFIFO2__TX5DATAFIFOSTART 8
+#define W_TXDATAFIFO2__TX5DATAFIFOSTART 7
+#define O_TXDATAFIFO2__TX5DATAFIFOSIZE 0
+#define W_TXDATAFIFO2__TX5DATAFIFOSIZE 7
#define R_TXDATAFIFO3 0x224
-#define O_TXDATAFIFO3__Tx6DataFifoStart 24
-#define W_TXDATAFIFO3__Tx6DataFifoStart 7
-#define O_TXDATAFIFO3__Tx6DataFifoSize 16
-#define W_TXDATAFIFO3__Tx6DataFifoSize 7
-#define O_TXDATAFIFO3__Tx7DataFifoStart 8
-#define W_TXDATAFIFO3__Tx7DataFifoStart 7
-#define O_TXDATAFIFO3__Tx7DataFifoSize 0
-#define W_TXDATAFIFO3__Tx7DataFifoSize 7
+#define O_TXDATAFIFO3__TX6DATAFIFOSTART 24
+#define W_TXDATAFIFO3__TX6DATAFIFOSTART 7
+#define O_TXDATAFIFO3__TX6DATAFIFOSIZE 16
+#define W_TXDATAFIFO3__TX6DATAFIFOSIZE 7
+#define O_TXDATAFIFO3__TX7DATAFIFOSTART 8
+#define W_TXDATAFIFO3__TX7DATAFIFOSTART 7
+#define O_TXDATAFIFO3__TX7DATAFIFOSIZE 0
+#define W_TXDATAFIFO3__TX7DATAFIFOSIZE 7
#define R_TXDATAFIFO4 0x225
-#define O_TXDATAFIFO4__Tx8DataFifoStart 24
-#define W_TXDATAFIFO4__Tx8DataFifoStart 7
-#define O_TXDATAFIFO4__Tx8DataFifoSize 16
-#define W_TXDATAFIFO4__Tx8DataFifoSize 7
-#define O_TXDATAFIFO4__Tx9DataFifoStart 8
-#define W_TXDATAFIFO4__Tx9DataFifoStart 7
-#define O_TXDATAFIFO4__Tx9DataFifoSize 0
-#define W_TXDATAFIFO4__Tx9DataFifoSize 7
+#define O_TXDATAFIFO4__TX8DATAFIFOSTART 24
+#define W_TXDATAFIFO4__TX8DATAFIFOSTART 7
+#define O_TXDATAFIFO4__TX8DATAFIFOSIZE 16
+#define W_TXDATAFIFO4__TX8DATAFIFOSIZE 7
+#define O_TXDATAFIFO4__TX9DATAFIFOSTART 8
+#define W_TXDATAFIFO4__TX9DATAFIFOSTART 7
+#define O_TXDATAFIFO4__TX9DATAFIFOSIZE 0
+#define W_TXDATAFIFO4__TX9DATAFIFOSIZE 7
#define R_TXDATAFIFO5 0x226
-#define O_TXDATAFIFO5__Tx10DataFifoStart 24
-#define W_TXDATAFIFO5__Tx10DataFifoStart 7
-#define O_TXDATAFIFO5__Tx10DataFifoSize 16
-#define W_TXDATAFIFO5__Tx10DataFifoSize 7
-#define O_TXDATAFIFO5__Tx11DataFifoStart 8
-#define W_TXDATAFIFO5__Tx11DataFifoStart 7
-#define O_TXDATAFIFO5__Tx11DataFifoSize 0
-#define W_TXDATAFIFO5__Tx11DataFifoSize 7
+#define O_TXDATAFIFO5__TX10DATAFIFOSTART 24
+#define W_TXDATAFIFO5__TX10DATAFIFOSTART 7
+#define O_TXDATAFIFO5__TX10DATAFIFOSIZE 16
+#define W_TXDATAFIFO5__TX10DATAFIFOSIZE 7
+#define O_TXDATAFIFO5__TX11DATAFIFOSTART 8
+#define W_TXDATAFIFO5__TX11DATAFIFOSTART 7
+#define O_TXDATAFIFO5__TX11DATAFIFOSIZE 0
+#define W_TXDATAFIFO5__TX11DATAFIFOSIZE 7
#define R_TXDATAFIFO6 0x227
-#define O_TXDATAFIFO6__Tx12DataFifoStart 24
-#define W_TXDATAFIFO6__Tx12DataFifoStart 7
-#define O_TXDATAFIFO6__Tx12DataFifoSize 16
-#define W_TXDATAFIFO6__Tx12DataFifoSize 7
-#define O_TXDATAFIFO6__Tx13DataFifoStart 8
-#define W_TXDATAFIFO6__Tx13DataFifoStart 7
-#define O_TXDATAFIFO6__Tx13DataFifoSize 0
-#define W_TXDATAFIFO6__Tx13DataFifoSize 7
+#define O_TXDATAFIFO6__TX12DATAFIFOSTART 24
+#define W_TXDATAFIFO6__TX12DATAFIFOSTART 7
+#define O_TXDATAFIFO6__TX12DATAFIFOSIZE 16
+#define W_TXDATAFIFO6__TX12DATAFIFOSIZE 7
+#define O_TXDATAFIFO6__TX13DATAFIFOSTART 8
+#define W_TXDATAFIFO6__TX13DATAFIFOSTART 7
+#define O_TXDATAFIFO6__TX13DATAFIFOSIZE 0
+#define W_TXDATAFIFO6__TX13DATAFIFOSIZE 7
#define R_TXDATAFIFO7 0x228
-#define O_TXDATAFIFO7__Tx14DataFifoStart 24
-#define W_TXDATAFIFO7__Tx14DataFifoStart 7
-#define O_TXDATAFIFO7__Tx14DataFifoSize 16
-#define W_TXDATAFIFO7__Tx14DataFifoSize 7
-#define O_TXDATAFIFO7__Tx15DataFifoStart 8
-#define W_TXDATAFIFO7__Tx15DataFifoStart 7
-#define O_TXDATAFIFO7__Tx15DataFifoSize 0
-#define W_TXDATAFIFO7__Tx15DataFifoSize 7
+#define O_TXDATAFIFO7__TX14DATAFIFOSTART 24
+#define W_TXDATAFIFO7__TX14DATAFIFOSTART 7
+#define O_TXDATAFIFO7__TX14DATAFIFOSIZE 16
+#define W_TXDATAFIFO7__TX14DATAFIFOSIZE 7
+#define O_TXDATAFIFO7__TX15DATAFIFOSTART 8
+#define W_TXDATAFIFO7__TX15DATAFIFOSTART 7
+#define O_TXDATAFIFO7__TX15DATAFIFOSIZE 0
+#define W_TXDATAFIFO7__TX15DATAFIFOSIZE 7
#define R_RXDATAFIFO0 0x229
-#define O_RXDATAFIFO0__Rx0DataFifoStart 24
-#define W_RXDATAFIFO0__Rx0DataFifoStart 7
-#define O_RXDATAFIFO0__Rx0DataFifoSize 16
-#define W_RXDATAFIFO0__Rx0DataFifoSize 7
-#define O_RXDATAFIFO0__Rx1DataFifoStart 8
-#define W_RXDATAFIFO0__Rx1DataFifoStart 7
-#define O_RXDATAFIFO0__Rx1DataFifoSize 0
-#define W_RXDATAFIFO0__Rx1DataFifoSize 7
+#define O_RXDATAFIFO0__RX0DATAFIFOSTART 24
+#define W_RXDATAFIFO0__RX0DATAFIFOSTART 7
+#define O_RXDATAFIFO0__RX0DATAFIFOSIZE 16
+#define W_RXDATAFIFO0__RX0DATAFIFOSIZE 7
+#define O_RXDATAFIFO0__RX1DATAFIFOSTART 8
+#define W_RXDATAFIFO0__RX1DATAFIFOSTART 7
+#define O_RXDATAFIFO0__RX1DATAFIFOSIZE 0
+#define W_RXDATAFIFO0__RX1DATAFIFOSIZE 7
#define R_RXDATAFIFO1 0x22A
-#define O_RXDATAFIFO1__Rx2DataFifoStart 24
-#define W_RXDATAFIFO1__Rx2DataFifoStart 7
-#define O_RXDATAFIFO1__Rx2DataFifoSize 16
-#define W_RXDATAFIFO1__Rx2DataFifoSize 7
-#define O_RXDATAFIFO1__Rx3DataFifoStart 8
-#define W_RXDATAFIFO1__Rx3DataFifoStart 7
-#define O_RXDATAFIFO1__Rx3DataFifoSize 0
-#define W_RXDATAFIFO1__Rx3DataFifoSize 7
+#define O_RXDATAFIFO1__RX2DATAFIFOSTART 24
+#define W_RXDATAFIFO1__RX2DATAFIFOSTART 7
+#define O_RXDATAFIFO1__RX2DATAFIFOSIZE 16
+#define W_RXDATAFIFO1__RX2DATAFIFOSIZE 7
+#define O_RXDATAFIFO1__RX3DATAFIFOSTART 8
+#define W_RXDATAFIFO1__RX3DATAFIFOSTART 7
+#define O_RXDATAFIFO1__RX3DATAFIFOSIZE 0
+#define W_RXDATAFIFO1__RX3DATAFIFOSIZE 7
#define R_RXDATAFIFO2 0x22B
-#define O_RXDATAFIFO2__Rx4DataFifoStart 24
-#define W_RXDATAFIFO2__Rx4DataFifoStart 7
-#define O_RXDATAFIFO2__Rx4DataFifoSize 16
-#define W_RXDATAFIFO2__Rx4DataFifoSize 7
-#define O_RXDATAFIFO2__Rx5DataFifoStart 8
-#define W_RXDATAFIFO2__Rx5DataFifoStart 7
-#define O_RXDATAFIFO2__Rx5DataFifoSize 0
-#define W_RXDATAFIFO2__Rx5DataFifoSize 7
+#define O_RXDATAFIFO2__RX4DATAFIFOSTART 24
+#define W_RXDATAFIFO2__RX4DATAFIFOSTART 7
+#define O_RXDATAFIFO2__RX4DATAFIFOSIZE 16
+#define W_RXDATAFIFO2__RX4DATAFIFOSIZE 7
+#define O_RXDATAFIFO2__RX5DATAFIFOSTART 8
+#define W_RXDATAFIFO2__RX5DATAFIFOSTART 7
+#define O_RXDATAFIFO2__RX5DATAFIFOSIZE 0
+#define W_RXDATAFIFO2__RX5DATAFIFOSIZE 7
#define R_RXDATAFIFO3 0x22C
-#define O_RXDATAFIFO3__Rx6DataFifoStart 24
-#define W_RXDATAFIFO3__Rx6DataFifoStart 7
-#define O_RXDATAFIFO3__Rx6DataFifoSize 16
-#define W_RXDATAFIFO3__Rx6DataFifoSize 7
-#define O_RXDATAFIFO3__Rx7DataFifoStart 8
-#define W_RXDATAFIFO3__Rx7DataFifoStart 7
-#define O_RXDATAFIFO3__Rx7DataFifoSize 0
-#define W_RXDATAFIFO3__Rx7DataFifoSize 7
+#define O_RXDATAFIFO3__RX6DATAFIFOSTART 24
+#define W_RXDATAFIFO3__RX6DATAFIFOSTART 7
+#define O_RXDATAFIFO3__RX6DATAFIFOSIZE 16
+#define W_RXDATAFIFO3__RX6DATAFIFOSIZE 7
+#define O_RXDATAFIFO3__RX7DATAFIFOSTART 8
+#define W_RXDATAFIFO3__RX7DATAFIFOSTART 7
+#define O_RXDATAFIFO3__RX7DATAFIFOSIZE 0
+#define W_RXDATAFIFO3__RX7DATAFIFOSIZE 7
#define R_RXDATAFIFO4 0x22D
-#define O_RXDATAFIFO4__Rx8DataFifoStart 24
-#define W_RXDATAFIFO4__Rx8DataFifoStart 7
-#define O_RXDATAFIFO4__Rx8DataFifoSize 16
-#define W_RXDATAFIFO4__Rx8DataFifoSize 7
-#define O_RXDATAFIFO4__Rx9DataFifoStart 8
-#define W_RXDATAFIFO4__Rx9DataFifoStart 7
-#define O_RXDATAFIFO4__Rx9DataFifoSize 0
-#define W_RXDATAFIFO4__Rx9DataFifoSize 7
+#define O_RXDATAFIFO4__RX8DATAFIFOSTART 24
+#define W_RXDATAFIFO4__RX8DATAFIFOSTART 7
+#define O_RXDATAFIFO4__RX8DATAFIFOSIZE 16
+#define W_RXDATAFIFO4__RX8DATAFIFOSIZE 7
+#define O_RXDATAFIFO4__RX9DATAFIFOSTART 8
+#define W_RXDATAFIFO4__RX9DATAFIFOSTART 7
+#define O_RXDATAFIFO4__RX9DATAFIFOSIZE 0
+#define W_RXDATAFIFO4__RX9DATAFIFOSIZE 7
#define R_RXDATAFIFO5 0x22E
-#define O_RXDATAFIFO5__Rx10DataFifoStart 24
-#define W_RXDATAFIFO5__Rx10DataFifoStart 7
-#define O_RXDATAFIFO5__Rx10DataFifoSize 16
-#define W_RXDATAFIFO5__Rx10DataFifoSize 7
-#define O_RXDATAFIFO5__Rx11DataFifoStart 8
-#define W_RXDATAFIFO5__Rx11DataFifoStart 7
-#define O_RXDATAFIFO5__Rx11DataFifoSize 0
-#define W_RXDATAFIFO5__Rx11DataFifoSize 7
+#define O_RXDATAFIFO5__RX10DATAFIFOSTART 24
+#define W_RXDATAFIFO5__RX10DATAFIFOSTART 7
+#define O_RXDATAFIFO5__RX10DATAFIFOSIZE 16
+#define W_RXDATAFIFO5__RX10DATAFIFOSIZE 7
+#define O_RXDATAFIFO5__RX11DATAFIFOSTART 8
+#define W_RXDATAFIFO5__RX11DATAFIFOSTART 7
+#define O_RXDATAFIFO5__RX11DATAFIFOSIZE 0
+#define W_RXDATAFIFO5__RX11DATAFIFOSIZE 7
#define R_RXDATAFIFO6 0x22F
-#define O_RXDATAFIFO6__Rx12DataFifoStart 24
-#define W_RXDATAFIFO6__Rx12DataFifoStart 7
-#define O_RXDATAFIFO6__Rx12DataFifoSize 16
-#define W_RXDATAFIFO6__Rx12DataFifoSize 7
-#define O_RXDATAFIFO6__Rx13DataFifoStart 8
-#define W_RXDATAFIFO6__Rx13DataFifoStart 7
-#define O_RXDATAFIFO6__Rx13DataFifoSize 0
-#define W_RXDATAFIFO6__Rx13DataFifoSize 7
+#define O_RXDATAFIFO6__RX12DATAFIFOSTART 24
+#define W_RXDATAFIFO6__RX12DATAFIFOSTART 7
+#define O_RXDATAFIFO6__RX12DATAFIFOSIZE 16
+#define W_RXDATAFIFO6__RX12DATAFIFOSIZE 7
+#define O_RXDATAFIFO6__RX13DATAFIFOSTART 8
+#define W_RXDATAFIFO6__RX13DATAFIFOSTART 7
+#define O_RXDATAFIFO6__RX13DATAFIFOSIZE 0
+#define W_RXDATAFIFO6__RX13DATAFIFOSIZE 7
#define R_RXDATAFIFO7 0x230
-#define O_RXDATAFIFO7__Rx14DataFifoStart 24
-#define W_RXDATAFIFO7__Rx14DataFifoStart 7
-#define O_RXDATAFIFO7__Rx14DataFifoSize 16
-#define W_RXDATAFIFO7__Rx14DataFifoSize 7
-#define O_RXDATAFIFO7__Rx15DataFifoStart 8
-#define W_RXDATAFIFO7__Rx15DataFifoStart 7
-#define O_RXDATAFIFO7__Rx15DataFifoSize 0
-#define W_RXDATAFIFO7__Rx15DataFifoSize 7
+#define O_RXDATAFIFO7__RX14DATAFIFOSTART 24
+#define W_RXDATAFIFO7__RX14DATAFIFOSTART 7
+#define O_RXDATAFIFO7__RX14DATAFIFOSIZE 16
+#define W_RXDATAFIFO7__RX14DATAFIFOSIZE 7
+#define O_RXDATAFIFO7__RX15DATAFIFOSTART 8
+#define W_RXDATAFIFO7__RX15DATAFIFOSTART 7
+#define O_RXDATAFIFO7__RX15DATAFIFOSIZE 0
+#define W_RXDATAFIFO7__RX15DATAFIFOSIZE 7
#define R_XGMACPADCALIBRATION 0x231
#define R_FREEQCARVE 0x233
#define R_SPI4STATICDELAY0 0x240
-#define O_SPI4STATICDELAY0__DataLine7 28
-#define W_SPI4STATICDELAY0__DataLine7 4
-#define O_SPI4STATICDELAY0__DataLine6 24
-#define W_SPI4STATICDELAY0__DataLine6 4
-#define O_SPI4STATICDELAY0__DataLine5 20
-#define W_SPI4STATICDELAY0__DataLine5 4
-#define O_SPI4STATICDELAY0__DataLine4 16
-#define W_SPI4STATICDELAY0__DataLine4 4
-#define O_SPI4STATICDELAY0__DataLine3 12
-#define W_SPI4STATICDELAY0__DataLine3 4
-#define O_SPI4STATICDELAY0__DataLine2 8
-#define W_SPI4STATICDELAY0__DataLine2 4
-#define O_SPI4STATICDELAY0__DataLine1 4
-#define W_SPI4STATICDELAY0__DataLine1 4
-#define O_SPI4STATICDELAY0__DataLine0 0
-#define W_SPI4STATICDELAY0__DataLine0 4
+#define O_SPI4STATICDELAY0__DATALINE7 28
+#define W_SPI4STATICDELAY0__DATALINE7 4
+#define O_SPI4STATICDELAY0__DATALINE6 24
+#define W_SPI4STATICDELAY0__DATALINE6 4
+#define O_SPI4STATICDELAY0__DATALINE5 20
+#define W_SPI4STATICDELAY0__DATALINE5 4
+#define O_SPI4STATICDELAY0__DATALINE4 16
+#define W_SPI4STATICDELAY0__DATALINE4 4
+#define O_SPI4STATICDELAY0__DATALINE3 12
+#define W_SPI4STATICDELAY0__DATALINE3 4
+#define O_SPI4STATICDELAY0__DATALINE2 8
+#define W_SPI4STATICDELAY0__DATALINE2 4
+#define O_SPI4STATICDELAY0__DATALINE1 4
+#define W_SPI4STATICDELAY0__DATALINE1 4
+#define O_SPI4STATICDELAY0__DATALINE0 0
+#define W_SPI4STATICDELAY0__DATALINE0 4
#define R_SPI4STATICDELAY1 0x241
-#define O_SPI4STATICDELAY1__DataLine15 28
-#define W_SPI4STATICDELAY1__DataLine15 4
-#define O_SPI4STATICDELAY1__DataLine14 24
-#define W_SPI4STATICDELAY1__DataLine14 4
-#define O_SPI4STATICDELAY1__DataLine13 20
-#define W_SPI4STATICDELAY1__DataLine13 4
-#define O_SPI4STATICDELAY1__DataLine12 16
-#define W_SPI4STATICDELAY1__DataLine12 4
-#define O_SPI4STATICDELAY1__DataLine11 12
-#define W_SPI4STATICDELAY1__DataLine11 4
-#define O_SPI4STATICDELAY1__DataLine10 8
-#define W_SPI4STATICDELAY1__DataLine10 4
-#define O_SPI4STATICDELAY1__DataLine9 4
-#define W_SPI4STATICDELAY1__DataLine9 4
-#define O_SPI4STATICDELAY1__DataLine8 0
-#define W_SPI4STATICDELAY1__DataLine8 4
+#define O_SPI4STATICDELAY1__DATALINE15 28
+#define W_SPI4STATICDELAY1__DATALINE15 4
+#define O_SPI4STATICDELAY1__DATALINE14 24
+#define W_SPI4STATICDELAY1__DATALINE14 4
+#define O_SPI4STATICDELAY1__DATALINE13 20
+#define W_SPI4STATICDELAY1__DATALINE13 4
+#define O_SPI4STATICDELAY1__DATALINE12 16
+#define W_SPI4STATICDELAY1__DATALINE12 4
+#define O_SPI4STATICDELAY1__DATALINE11 12
+#define W_SPI4STATICDELAY1__DATALINE11 4
+#define O_SPI4STATICDELAY1__DATALINE10 8
+#define W_SPI4STATICDELAY1__DATALINE10 4
+#define O_SPI4STATICDELAY1__DATALINE9 4
+#define W_SPI4STATICDELAY1__DATALINE9 4
+#define O_SPI4STATICDELAY1__DATALINE8 0
+#define W_SPI4STATICDELAY1__DATALINE8 4
#define R_SPI4STATICDELAY2 0x242
-#define O_SPI4STATICDELAY0__TxStat1 8
-#define W_SPI4STATICDELAY0__TxStat1 4
-#define O_SPI4STATICDELAY0__TxStat0 4
-#define W_SPI4STATICDELAY0__TxStat0 4
-#define O_SPI4STATICDELAY0__RxControl 0
-#define W_SPI4STATICDELAY0__RxControl 4
+#define O_SPI4STATICDELAY0__TXSTAT1 8
+#define W_SPI4STATICDELAY0__TXSTAT1 4
+#define O_SPI4STATICDELAY0__TXSTAT0 4
+#define W_SPI4STATICDELAY0__TXSTAT0 4
+#define O_SPI4STATICDELAY0__RXCONTROL 0
+#define W_SPI4STATICDELAY0__RXCONTROL 4
#define R_SPI4CONTROL 0x243
-#define O_SPI4CONTROL__StaticDelay 2
+#define O_SPI4CONTROL__STATICDELAY 2
#define O_SPI4CONTROL__LVDS_LVTTL 1
-#define O_SPI4CONTROL__SPI4Enable 0
+#define O_SPI4CONTROL__SPI4ENABLE 0
#define R_CLASSWATERMARKS 0x244
-#define O_CLASSWATERMARKS__Class0Watermark 24
-#define W_CLASSWATERMARKS__Class0Watermark 5
-#define O_CLASSWATERMARKS__Class1Watermark 16
-#define W_CLASSWATERMARKS__Class1Watermark 5
-#define O_CLASSWATERMARKS__Class3Watermark 0
-#define W_CLASSWATERMARKS__Class3Watermark 5
+#define O_CLASSWATERMARKS__CLASS0WATERMARK 24
+#define W_CLASSWATERMARKS__CLASS0WATERMARK 5
+#define O_CLASSWATERMARKS__CLASS1WATERMARK 16
+#define W_CLASSWATERMARKS__CLASS1WATERMARK 5
+#define O_CLASSWATERMARKS__CLASS3WATERMARK 0
+#define W_CLASSWATERMARKS__CLASS3WATERMARK 5
#define R_RXWATERMARKS1 0x245
-#define O_RXWATERMARKS__Rx0DataWatermark 24
-#define W_RXWATERMARKS__Rx0DataWatermark 7
-#define O_RXWATERMARKS__Rx1DataWatermark 16
-#define W_RXWATERMARKS__Rx1DataWatermark 7
-#define O_RXWATERMARKS__Rx3DataWatermark 0
-#define W_RXWATERMARKS__Rx3DataWatermark 7
+#define O_RXWATERMARKS__RX0DATAWATERMARK 24
+#define W_RXWATERMARKS__RX0DATAWATERMARK 7
+#define O_RXWATERMARKS__RX1DATAWATERMARK 16
+#define W_RXWATERMARKS__RX1DATAWATERMARK 7
+#define O_RXWATERMARKS__RX3DATAWATERMARK 0
+#define W_RXWATERMARKS__RX3DATAWATERMARK 7
#define R_RXWATERMARKS2 0x246
-#define O_RXWATERMARKS__Rx4DataWatermark 24
-#define W_RXWATERMARKS__Rx4DataWatermark 7
-#define O_RXWATERMARKS__Rx5DataWatermark 16
-#define W_RXWATERMARKS__Rx5DataWatermark 7
-#define O_RXWATERMARKS__Rx6DataWatermark 8
-#define W_RXWATERMARKS__Rx6DataWatermark 7
-#define O_RXWATERMARKS__Rx7DataWatermark 0
-#define W_RXWATERMARKS__Rx7DataWatermark 7
+#define O_RXWATERMARKS__RX4DATAWATERMARK 24
+#define W_RXWATERMARKS__RX4DATAWATERMARK 7
+#define O_RXWATERMARKS__RX5DATAWATERMARK 16
+#define W_RXWATERMARKS__RX5DATAWATERMARK 7
+#define O_RXWATERMARKS__RX6DATAWATERMARK 8
+#define W_RXWATERMARKS__RX6DATAWATERMARK 7
+#define O_RXWATERMARKS__RX7DATAWATERMARK 0
+#define W_RXWATERMARKS__RX7DATAWATERMARK 7
#define R_RXWATERMARKS3 0x247
-#define O_RXWATERMARKS__Rx8DataWatermark 24
-#define W_RXWATERMARKS__Rx8DataWatermark 7
-#define O_RXWATERMARKS__Rx9DataWatermark 16
-#define W_RXWATERMARKS__Rx9DataWatermark 7
-#define O_RXWATERMARKS__Rx10DataWatermark 8
-#define W_RXWATERMARKS__Rx10DataWatermark 7
-#define O_RXWATERMARKS__Rx11DataWatermark 0
-#define W_RXWATERMARKS__Rx11DataWatermark 7
+#define O_RXWATERMARKS__RX8DATAWATERMARK 24
+#define W_RXWATERMARKS__RX8DATAWATERMARK 7
+#define O_RXWATERMARKS__RX9DATAWATERMARK 16
+#define W_RXWATERMARKS__RX9DATAWATERMARK 7
+#define O_RXWATERMARKS__RX10DATAWATERMARK 8
+#define W_RXWATERMARKS__RX10DATAWATERMARK 7
+#define O_RXWATERMARKS__RX11DATAWATERMARK 0
+#define W_RXWATERMARKS__RX11DATAWATERMARK 7
#define R_RXWATERMARKS4 0x248
-#define O_RXWATERMARKS__Rx12DataWatermark 24
-#define W_RXWATERMARKS__Rx12DataWatermark 7
-#define O_RXWATERMARKS__Rx13DataWatermark 16
-#define W_RXWATERMARKS__Rx13DataWatermark 7
-#define O_RXWATERMARKS__Rx14DataWatermark 8
-#define W_RXWATERMARKS__Rx14DataWatermark 7
-#define O_RXWATERMARKS__Rx15DataWatermark 0
-#define W_RXWATERMARKS__Rx15DataWatermark 7
+#define O_RXWATERMARKS__RX12DATAWATERMARK 24
+#define W_RXWATERMARKS__RX12DATAWATERMARK 7
+#define O_RXWATERMARKS__RX13DATAWATERMARK 16
+#define W_RXWATERMARKS__RX13DATAWATERMARK 7
+#define O_RXWATERMARKS__RX14DATAWATERMARK 8
+#define W_RXWATERMARKS__RX14DATAWATERMARK 7
+#define O_RXWATERMARKS__RX15DATAWATERMARK 0
+#define W_RXWATERMARKS__RX15DATAWATERMARK 7
#define R_FREEWATERMARKS 0x249
-#define O_FREEWATERMARKS__FreeOutWatermark 16
-#define W_FREEWATERMARKS__FreeOutWatermark 16
-#define O_FREEWATERMARKS__JumFrWatermark 8
-#define W_FREEWATERMARKS__JumFrWatermark 7
-#define O_FREEWATERMARKS__RegFrWatermark 0
-#define W_FREEWATERMARKS__RegFrWatermark 7
+#define O_FREEWATERMARKS__FREEOUTWATERMARK 16
+#define W_FREEWATERMARKS__FREEOUTWATERMARK 16
+#define O_FREEWATERMARKS__JUMFRWATERMARK 8
+#define W_FREEWATERMARKS__JUMFRWATERMARK 7
+#define O_FREEWATERMARKS__REGFRWATERMARK 0
+#define W_FREEWATERMARKS__REGFRWATERMARK 7
#define R_EGRESSFIFOCARVINGSLOTS 0x24a
#define CTRL_RES0 0
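The rename keeps the header's O_*/W_* convention: O_<REG>__<FIELD> is a field's bit offset, W_<REG>__<FIELD> its width. Hypothetical helpers (not part of the driver) show how the pair composes; note that XLR_FIELD_MASK(TX_CONTROL, TXTHRESHOLD) comes out as 0x3fff, the literal mask xlr_port_enable() passes to xlr_reg_update():

    /* Hypothetical helpers built from the O_* (offset) / W_* (width)
     * pairs; only valid for fields narrower than 32 bits.
     */
    #define XLR_FIELD_MASK(reg, fld) \
            (((1U << W_##reg##__##fld) - 1) << O_##reg##__##fld)
    #define XLR_FIELD_GET(val, reg, fld) \
            (((val) & XLR_FIELD_MASK(reg, fld)) >> O_##reg##__##fld)

    /* e.g. XLR_FIELD_GET(tx_ctrl, TX_CONTROL, TXTHRESHOLD) recovers
     * the 14-bit threshold that xlr_port_enable() programs to 512.
     */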
diff --git a/drivers/staging/nvec/nvec.c b/drivers/staging/nvec/nvec.c
index 1503ad7..c335ae2 100644
--- a/drivers/staging/nvec/nvec.c
+++ b/drivers/staging/nvec/nvec.c
@@ -38,18 +38,18 @@
#include "nvec.h"
#define I2C_CNFG 0x00
-#define I2C_CNFG_PACKET_MODE_EN (1 << 10)
-#define I2C_CNFG_NEW_MASTER_SFM (1 << 11)
+#define I2C_CNFG_PACKET_MODE_EN BIT(10)
+#define I2C_CNFG_NEW_MASTER_SFM BIT(11)
#define I2C_CNFG_DEBOUNCE_CNT_SHIFT 12
#define I2C_SL_CNFG 0x20
-#define I2C_SL_NEWSL (1 << 2)
-#define I2C_SL_NACK (1 << 1)
-#define I2C_SL_RESP (1 << 0)
-#define I2C_SL_IRQ (1 << 3)
-#define END_TRANS (1 << 4)
-#define RCVD (1 << 2)
-#define RNW (1 << 1)
+#define I2C_SL_NEWSL BIT(2)
+#define I2C_SL_NACK BIT(1)
+#define I2C_SL_RESP BIT(0)
+#define I2C_SL_IRQ BIT(3)
+#define END_TRANS BIT(4)
+#define RCVD BIT(2)
+#define RNW BIT(1)
#define I2C_SL_RCVD 0x24
#define I2C_SL_STATUS 0x28
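The conversion above swaps open-coded shifts for BIT() from <linux/bitops.h>, the form checkpatch suggests for single-bit masks. BIT(nr) expands to (1UL << (nr)), so the constant is also promoted to unsigned long:

    #include <linux/bitops.h>

    /* Same bit either way, but BIT() yields an unsigned long and
     * stays well-defined even for bit 31 of a 32-bit int:
     */
    #define EXAMPLE_OLD     (1 << 3)        /* plain int 0x08 */
    #define EXAMPLE_NEW     BIT(3)          /* 0x08UL */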
@@ -148,7 +148,7 @@
dev_warn(nvec->dev, "unhandled msg type %ld\n", event_type);
print_hex_dump(KERN_WARNING, "payload: ", DUMP_PREFIX_NONE, 16, 1,
- msg, msg[1] + 2, true);
+ msg, msg[1] + 2, true);
return NOTIFY_OK;
}
@@ -257,7 +257,7 @@
* occurred, the nvec driver may print an error.
*/
int nvec_write_async(struct nvec_chip *nvec, const unsigned char *data,
- short size)
+ short size)
{
struct nvec_msg *msg;
unsigned long flags;
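For context, nvec_write_async() only queues the message and kicks the transfer; completion happens later from the interrupt path. A minimal caller looks like this (the two payload bytes are made up, not a documented EC command):

    static const unsigned char req[] = { 0x01, 0x00 };  /* hypothetical */

    if (nvec_write_async(nvec, req, sizeof(req)) < 0)
            dev_warn(nvec->dev, "async write failed\n");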
@@ -313,10 +313,11 @@
}
dev_dbg(nvec->dev, "nvec_sync_write: 0x%04x\n",
- nvec->sync_write_pending);
+ nvec->sync_write_pending);
if (!(wait_for_completion_timeout(&nvec->sync_write,
- msecs_to_jiffies(2000)))) {
- dev_warn(nvec->dev, "timeout waiting for sync write to complete\n");
+ msecs_to_jiffies(2000)))) {
+ dev_warn(nvec->dev,
+ "timeout waiting for sync write to complete\n");
mutex_unlock(&nvec->sync_write_mutex);
return -ETIMEDOUT;
}
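The negated test above works because wait_for_completion_timeout() returns 0 on timeout and the number of jiffies left otherwise; reduced to its essentials:

    /* 0 => the EC never signalled completion within 2 seconds */
    if (!wait_for_completion_timeout(&nvec->sync_write,
                                     msecs_to_jiffies(2000)))
            err = -ETIMEDOUT;       /* bail out after dropping the mutex */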
@@ -422,8 +423,8 @@
if ((msg->data[0] >> 7) == 1 && (msg->data[0] & 0x0f) == 5)
print_hex_dump(KERN_WARNING, "ec system event ",
- DUMP_PREFIX_NONE, 16, 1, msg->data,
- msg->data[1] + 2, true);
+ DUMP_PREFIX_NONE, 16, 1, msg->data,
+ msg->data[1] + 2, true);
atomic_notifier_call_chain(&nvec->notifier_list, msg->data[0] & 0x8f,
msg->data);
@@ -493,8 +494,8 @@
{
if (nvec->rx->pos != nvec_msg_size(nvec->rx)) {
dev_err(nvec->dev, "RX incomplete: Expected %u bytes, got %u\n",
- (uint) nvec_msg_size(nvec->rx),
- (uint) nvec->rx->pos);
+ (uint)nvec_msg_size(nvec->rx),
+ (uint)nvec->rx->pos);
nvec_msg_free(nvec, nvec->rx);
nvec->state = 0;
@@ -688,8 +689,8 @@
if ((status & (RCVD | RNW)) == RCVD) {
if (received != nvec->i2c_addr)
dev_err(nvec->dev,
- "received address 0x%02x, expected 0x%02x\n",
- received, nvec->i2c_addr);
+ "received address 0x%02x, expected 0x%02x\n",
+ received, nvec->i2c_addr);
nvec->state = 1;
}
@@ -778,7 +779,7 @@
}
if (of_property_read_u32(nvec->dev->of_node, "slave-addr",
- &nvec->i2c_addr)) {
+ &nvec->i2c_addr)) {
dev_err(nvec->dev, "no i2c address specified");
return -ENODEV;
}
@@ -854,14 +855,14 @@
INIT_WORK(&nvec->tx_work, nvec_request_master);
err = devm_gpio_request_one(&pdev->dev, nvec->gpio, GPIOF_OUT_INIT_HIGH,
- "nvec gpio");
+ "nvec gpio");
if (err < 0) {
dev_err(nvec->dev, "couldn't request gpio\n");
return -ENODEV;
}
err = devm_request_irq(&pdev->dev, nvec->irq, nvec_interrupt, 0,
- "nvec", nvec);
+ "nvec", nvec);
if (err) {
dev_err(nvec->dev, "couldn't request irq\n");
return -ENODEV;
@@ -883,8 +884,10 @@
err = nvec_write_sync(nvec, get_firmware_version, 2, &msg);
if (!err) {
- dev_warn(nvec->dev, "ec firmware version %02x.%02x.%02x / %02x\n",
- msg->data[4], msg->data[5], msg->data[6], msg->data[7]);
+ dev_warn(nvec->dev,
+ "ec firmware version %02x.%02x.%02x / %02x\n",
+ msg->data[4], msg->data[5],
+ msg->data[6], msg->data[7]);
nvec_msg_free(nvec, msg);
}
diff --git a/drivers/staging/octeon-usb/TODO b/drivers/staging/octeon-usb/TODO
index cc58a7e..2b29acc 100644
--- a/drivers/staging/octeon-usb/TODO
+++ b/drivers/staging/octeon-usb/TODO
@@ -1,11 +1,8 @@
-This driver is functional and has been tested on EdgeRouter Lite with
-USB mass storage.
+This driver is functional and has been tested on the EdgeRouter Lite, the
+D-Link DSR-1000N, and the EBH5600 evaluation board with USB mass storage.
TODO:
- kernel coding style
- checkpatch warnings
- - dead code elimination
- - device tree bindings
- - possibly eliminate the extra "hardware abstraction layer"
Contact: Aaro Koskinen <aaro.koskinen@iki.fi>
diff --git a/drivers/staging/octeon-usb/octeon-hcd.c b/drivers/staging/octeon-usb/octeon-hcd.c
index eaf6ded..17442b3 100644
--- a/drivers/staging/octeon-usb/octeon-hcd.c
+++ b/drivers/staging/octeon-usb/octeon-hcd.c
@@ -43,29 +43,15 @@
* CORRESPONDENCE TO DESCRIPTION. THE ENTIRE RISK ARISING OUT OF USE OR
* PERFORMANCE OF THE SOFTWARE LIES WITH YOU.
*/
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/pci.h>
-#include <linux/prefetch.h>
-#include <linux/interrupt.h>
-#include <linux/platform_device.h>
+
#include <linux/usb.h>
-
-#include <linux/time.h>
-#include <linux/delay.h>
-
-#include <asm/octeon/cvmx.h>
-#include <asm/octeon/cvmx-iob-defs.h>
-
+#include <linux/slab.h>
+#include <linux/module.h>
#include <linux/usb/hcd.h>
-
-#include <linux/err.h>
+#include <linux/prefetch.h>
+#include <linux/platform_device.h>
#include <asm/octeon/octeon.h>
-#include <asm/octeon/cvmx-helper.h>
-#include <asm/octeon/cvmx-sysinfo.h>
-#include <asm/octeon/cvmx-helper-board.h>
#include "octeon-hcd.h"
@@ -113,35 +99,35 @@
};
/**
- * enum cvmx_usb_complete - possible callback function status codes
+ * enum cvmx_usb_status - possible callback function status codes
*
- * @CVMX_USB_COMPLETE_SUCCESS: The transaction / operation finished without
+ * @CVMX_USB_STATUS_OK: The transaction / operation finished without
* any errors
- * @CVMX_USB_COMPLETE_SHORT: FIXME: This is currently not implemented
- * @CVMX_USB_COMPLETE_CANCEL: The transaction was canceled while in flight
+ * @CVMX_USB_STATUS_SHORT: FIXME: This is currently not implemented
+ * @CVMX_USB_STATUS_CANCEL: The transaction was canceled while in flight
* by a user call to cvmx_usb_cancel
- * @CVMX_USB_COMPLETE_ERROR: The transaction aborted with an unexpected
+ * @CVMX_USB_STATUS_ERROR: The transaction aborted with an unexpected
* error status
- * @CVMX_USB_COMPLETE_STALL: The transaction received a USB STALL response
+ * @CVMX_USB_STATUS_STALL: The transaction received a USB STALL response
* from the device
- * @CVMX_USB_COMPLETE_XACTERR: The transaction failed with an error from the
+ * @CVMX_USB_STATUS_XACTERR: The transaction failed with an error from the
* device even after a number of retries
- * @CVMX_USB_COMPLETE_DATATGLERR: The transaction failed with a data toggle
+ * @CVMX_USB_STATUS_DATATGLERR: The transaction failed with a data toggle
* error even after a number of retries
- * @CVMX_USB_COMPLETE_BABBLEERR: The transaction failed with a babble error
- * @CVMX_USB_COMPLETE_FRAMEERR: The transaction failed with a frame error
+ * @CVMX_USB_STATUS_BABBLEERR: The transaction failed with a babble error
+ * @CVMX_USB_STATUS_FRAMEERR: The transaction failed with a frame error
* even after a number of retries
*/
-enum cvmx_usb_complete {
- CVMX_USB_COMPLETE_SUCCESS,
- CVMX_USB_COMPLETE_SHORT,
- CVMX_USB_COMPLETE_CANCEL,
- CVMX_USB_COMPLETE_ERROR,
- CVMX_USB_COMPLETE_STALL,
- CVMX_USB_COMPLETE_XACTERR,
- CVMX_USB_COMPLETE_DATATGLERR,
- CVMX_USB_COMPLETE_BABBLEERR,
- CVMX_USB_COMPLETE_FRAMEERR,
+enum cvmx_usb_status {
+ CVMX_USB_STATUS_OK,
+ CVMX_USB_STATUS_SHORT,
+ CVMX_USB_STATUS_CANCEL,
+ CVMX_USB_STATUS_ERROR,
+ CVMX_USB_STATUS_STALL,
+ CVMX_USB_STATUS_XACTERR,
+ CVMX_USB_STATUS_DATATGLERR,
+ CVMX_USB_STATUS_BABBLEERR,
+ CVMX_USB_STATUS_FRAMEERR,
};
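How these statuses reach USB core is outside this hunk; a host controller driver typically folds them into urb->status errno values along these lines (hypothetical helper, not the driver's actual code):

    static int cvmx_usb_status_to_errno(enum cvmx_usb_status s)
    {
            switch (s) {
            case CVMX_USB_STATUS_OK:        return 0;
            case CVMX_USB_STATUS_CANCEL:    return -ENOENT;    /* unlinked */
            case CVMX_USB_STATUS_STALL:     return -EPIPE;     /* endpoint stalled */
            case CVMX_USB_STATUS_BABBLEERR: return -EOVERFLOW;
            default:                        return -EPROTO;    /* low-level error */
            }
    }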
/**
@@ -160,13 +146,13 @@
* status call.
*/
struct cvmx_usb_port_status {
- uint32_t reserved : 25;
- uint32_t port_enabled : 1;
- uint32_t port_over_current : 1;
- uint32_t port_powered : 1;
+ u32 reserved : 25;
+ u32 port_enabled : 1;
+ u32 port_over_current : 1;
+ u32 port_powered : 1;
enum cvmx_usb_speed port_speed : 2;
- uint32_t connected : 1;
- uint32_t connect_change : 1;
+ u32 connected : 1;
+ u32 connect_change : 1;
};
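A sketch of how the packed status is consumed after a root-port poll (cvmx_usb_get_status() is reworked later in this patch):

    struct cvmx_usb_port_status st = cvmx_usb_get_status(usb);

    if (st.connect_change)          /* plug or unplug since last poll */
            pr_info("port %sconnected, speed %d\n",
                    st.connected ? "" : "dis", st.port_speed);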
/**
@@ -180,7 +166,7 @@
struct cvmx_usb_iso_packet {
int offset;
int length;
- enum cvmx_usb_complete status;
+ enum cvmx_usb_status status;
};
/**
@@ -234,13 +220,13 @@
* The low level hardware can transfer a maximum of this number of bytes in each
* transfer. The field is 19 bits wide
*/
-#define MAX_TRANSFER_BYTES ((1<<19)-1)
+#define MAX_TRANSFER_BYTES ((1 << 19) - 1)
/*
* The low level hardware can transfer a maximum of this number of packets in
* each transfer. The field is 10 bits wide
*/
-#define MAX_TRANSFER_PACKETS ((1<<10)-1)
+#define MAX_TRANSFER_PACKETS ((1 << 10) - 1)
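Spelled out: (1 << 19) - 1 = 524287 bytes and (1 << 10) - 1 = 1023 packets per hardware transfer, so a larger request presumably gets clamped along these lines (max_packet standing for the endpoint's wMaxPacketSize):

    int bytes = min(remaining, MAX_TRANSFER_BYTES);
    int packets = DIV_ROUND_UP(bytes, max_packet);

    if (packets > MAX_TRANSFER_PACKETS)     /* packet field saturates first */
            bytes = MAX_TRANSFER_PACKETS * max_packet;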
/**
* Logical transactions may take numerous low level
@@ -284,9 +270,9 @@
struct cvmx_usb_transaction {
struct list_head node;
enum cvmx_usb_transfer type;
- uint64_t buffer;
+ u64 buffer;
int buffer_length;
- uint64_t control_header;
+ u64 control_header;
int iso_start_frame;
int iso_number_packets;
struct cvmx_usb_iso_packet *iso_packets;
@@ -328,36 +314,37 @@
struct cvmx_usb_pipe {
struct list_head node;
struct list_head transactions;
- uint64_t interval;
- uint64_t next_tx_frame;
+ u64 interval;
+ u64 next_tx_frame;
enum cvmx_usb_pipe_flags flags;
enum cvmx_usb_speed device_speed;
enum cvmx_usb_transfer transfer_type;
enum cvmx_usb_direction transfer_dir;
int multi_count;
- uint16_t max_packet;
- uint8_t device_addr;
- uint8_t endpoint_num;
- uint8_t hub_device_addr;
- uint8_t hub_port;
- uint8_t pid_toggle;
- uint8_t channel;
- int8_t split_sc_frame;
+ u16 max_packet;
+ u8 device_addr;
+ u8 endpoint_num;
+ u8 hub_device_addr;
+ u8 hub_port;
+ u8 pid_toggle;
+ u8 channel;
+ s8 split_sc_frame;
};
struct cvmx_usb_tx_fifo {
struct {
int channel;
int size;
- uint64_t address;
- } entry[MAX_CHANNELS+1];
+ u64 address;
+ } entry[MAX_CHANNELS + 1];
int head;
int tail;
};
/**
- * struct cvmx_usb_state - the state of the USB block
+ * struct octeon_hcd - the state of the USB block
*
+ * lock: Serialization lock.
* init_flags: Flags passed to initialize.
* index: Which USB block this is for.
* idle_hardware_channels: Bit set for every idle hardware channel.
@@ -372,7 +359,8 @@
* frame_number: Increments every SOF interrupt for time keeping.
* active_split: Points to the current active split, or NULL.
*/
-struct cvmx_usb_state {
+struct octeon_hcd {
+ spinlock_t lock; /* serialization lock */
int init_flags;
int index;
int idle_hardware_channels;
@@ -382,23 +370,18 @@
struct cvmx_usb_port_status port_status;
struct list_head idle_pipes;
struct list_head active_pipes[4];
- uint64_t frame_number;
+ u64 frame_number;
struct cvmx_usb_transaction *active_split;
struct cvmx_usb_tx_fifo periodic;
struct cvmx_usb_tx_fifo nonperiodic;
};
-struct octeon_hcd {
- spinlock_t lock;
- struct cvmx_usb_state usb;
-};
-
/* This macro spins on a register waiting for it to reach a condition. */
#define CVMX_WAIT_FOR_FIELD32(address, _union, cond, timeout_usec) \
({int result; \
do { \
- uint64_t done = cvmx_get_cycle() + (uint64_t)timeout_usec * \
- octeon_get_clock_rate() / 1000000; \
+ u64 done = cvmx_get_cycle() + (u64)timeout_usec * \
+ octeon_get_clock_rate() / 1000000; \
union _union c; \
\
while (1) { \
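(The macro body continues past this hunk.) Call sites name a CSR, the union type it decodes to, a condition on the union variable 'c' that the macro declares, and a timeout in microseconds; a nonzero result is taken to mean the condition never held (illustrative - the CSR and field here are examples):

    /* Wait up to 100us for the AHB master to go idle. */
    if (CVMX_WAIT_FOR_FIELD32(CVMX_USBCX_GRSTCTL(usb->index),
                              cvmx_usbcx_grstctl,
                              c.s.ahbidle == 1, 100))
            return -ETIMEDOUT;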
@@ -431,7 +414,7 @@
/* Returns the IO address to push/pop data from the FIFOs */
#define USB_FIFO_ADDRESS(channel, usb_index) \
- (CVMX_USBCX_GOTGCTL(usb_index) + ((channel)+1)*0x1000)
+ (CVMX_USBCX_GOTGCTL(usb_index) + ((channel) + 1) * 0x1000)
/**
* struct octeon_temp_buffer - a bounce buffer for USB transfers
@@ -447,11 +430,6 @@
u8 data[0];
};
-static inline struct octeon_hcd *cvmx_usb_to_octeon(struct cvmx_usb_state *p)
-{
- return container_of(p, struct octeon_hcd, usb);
-}
-
static inline struct usb_hcd *octeon_to_hcd(struct octeon_hcd *p)
{
return container_of((void *)p, struct usb_hcd, hcd_priv);
@@ -562,14 +540,12 @@
*
* Returns: Result of the read
*/
-static inline uint32_t cvmx_usb_read_csr32(struct cvmx_usb_state *usb,
- uint64_t address)
+static inline u32 cvmx_usb_read_csr32(struct octeon_hcd *usb, u64 address)
{
- uint32_t result = cvmx_read64_uint32(address ^ 4);
+ u32 result = cvmx_read64_uint32(address ^ 4);
return result;
}
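The address ^ 4 swizzle flips between the two 32-bit halves of each 64-bit CSR window, which is where the big-endian Octeon bus maps 32-bit CSR accesses; the 64-bit read in the write path just below also serves to flush the posted write. A quick standalone check of the mapping:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Offsets 0 and 4 within each 64-bit window swap places. */
        for (uint64_t addr = 0x1000; addr < 0x1010; addr += 4)
                printf("0x%llx -> 0x%llx\n",
                       (unsigned long long)addr,
                       (unsigned long long)(addr ^ 4));
        /* 0x1000 -> 0x1004, 0x1004 -> 0x1000, 0x1008 -> 0x100c, ... */
        return 0;
}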
-
/**
* Write a USB 32bit CSR. It performs the necessary address
* swizzle for 32bit CSRs and logs the value in a readable format
@@ -579,8 +555,8 @@
* @address: 64bit address to write
* @value: Value to write
*/
-static inline void cvmx_usb_write_csr32(struct cvmx_usb_state *usb,
- uint64_t address, uint32_t value)
+static inline void cvmx_usb_write_csr32(struct octeon_hcd *usb,
+ u64 address, u32 value)
{
cvmx_write64_uint32(address ^ 4, value);
cvmx_read64_uint64(CVMX_USBNX_DMA0_INB_CHN0(usb->index));
@@ -595,14 +571,13 @@
*
* Returns: Non zero if we need to do split transactions
*/
-static inline int cvmx_usb_pipe_needs_split(struct cvmx_usb_state *usb,
+static inline int cvmx_usb_pipe_needs_split(struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe)
{
return pipe->device_speed != CVMX_USB_SPEED_HIGH &&
usb->usbcx_hprt.s.prtspd == CVMX_USB_SPEED_HIGH;
}
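Put differently, a split is needed exactly when a full- or low-speed device hangs off the high-speed root port (i.e. behind a high-speed hub's transaction translator). A hedged truth-table sketch of the predicate:

#include <stdio.h>

enum speed { LOW, FULL, HIGH };

static int needs_split(enum speed device, enum speed port)
{
        return device != HIGH && port == HIGH;
}

int main(void)
{
        printf("FS device, HS port: %d\n", needs_split(FULL, HIGH)); /* 1 */
        printf("HS device, HS port: %d\n", needs_split(HIGH, HIGH)); /* 0 */
        printf("FS device, FS port: %d\n", needs_split(FULL, FULL)); /* 0 */
        return 0;
}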
-
/**
* Trivial utility function to return the correct PID for a pipe
*
@@ -617,7 +592,7 @@
return 0; /* Data0 */
}
-static void cvmx_fifo_setup(struct cvmx_usb_state *usb)
+static void cvmx_fifo_setup(struct octeon_hcd *usb)
{
union cvmx_usbcx_ghwcfg3 usbcx_ghwcfg3;
union cvmx_usbcx_gnptxfsiz npsiz;
@@ -675,7 +650,7 @@
*
* Returns: 0 or a negative error code.
*/
-static int cvmx_usb_shutdown(struct cvmx_usb_state *usb)
+static int cvmx_usb_shutdown(struct octeon_hcd *usb)
{
union cvmx_usbnx_clk_ctl usbn_clk_ctl;
@@ -704,12 +679,12 @@
* off in the disabled state.
*
* @dev: Pointer to struct device for logging purposes.
- * @usb: Pointer to struct cvmx_usb_state.
+ * @usb: Pointer to struct octeon_hcd.
*
* Returns: 0 or a negative error code.
*/
static int cvmx_usb_initialize(struct device *dev,
- struct cvmx_usb_state *usb)
+ struct octeon_hcd *usb)
{
int channel;
int divisor;
@@ -975,7 +950,7 @@
*
* @usb: USB device state populated by cvmx_usb_initialize().
*/
-static void cvmx_usb_reset_port(struct cvmx_usb_state *usb)
+static void cvmx_usb_reset_port(struct octeon_hcd *usb)
{
usb->usbcx_hprt.u32 = cvmx_usb_read_csr32(usb,
CVMX_USBCX_HPRT(usb->index));
@@ -1002,7 +977,6 @@
CVMX_USBCX_HPRT(usb->index));
}
-
/**
* Disable a USB port. After this call the USB port will not
* generate data transfers and will not generate events.
@@ -1013,7 +987,7 @@
*
* Returns: 0 or a negative error code.
*/
-static int cvmx_usb_disable(struct cvmx_usb_state *usb)
+static int cvmx_usb_disable(struct octeon_hcd *usb)
{
/* Disable the port */
USB_SET_FIELD32(CVMX_USBCX_HPRT(usb->index), cvmx_usbcx_hprt,
@@ -1021,7 +995,6 @@
return 0;
}
-
/**
* Get the current state of the USB port. Use this call to
* determine if the usb port has anything connected, is enabled,
@@ -1033,8 +1006,7 @@
*
* Returns: Port status information
*/
-static struct cvmx_usb_port_status cvmx_usb_get_status(
- struct cvmx_usb_state *usb)
+static struct cvmx_usb_port_status cvmx_usb_get_status(struct octeon_hcd *usb)
{
union cvmx_usbcx_hprt usbc_hprt;
struct cvmx_usb_port_status result;
@@ -1105,7 +1077,7 @@
*
* Returns: A non-NULL value is a pipe. NULL means an error.
*/
-static struct cvmx_usb_pipe *cvmx_usb_open_pipe(struct cvmx_usb_state *usb,
+static struct cvmx_usb_pipe *cvmx_usb_open_pipe(struct octeon_hcd *usb,
int device_addr,
int endpoint_num,
enum cvmx_usb_speed
@@ -1125,8 +1097,8 @@
if (!pipe)
return NULL;
if ((device_speed == CVMX_USB_SPEED_HIGH) &&
- (transfer_dir == CVMX_USB_DIRECTION_OUT) &&
- (transfer_type == CVMX_USB_TRANSFER_BULK))
+ (transfer_dir == CVMX_USB_DIRECTION_OUT) &&
+ (transfer_type == CVMX_USB_TRANSFER_BULK))
pipe->flags |= CVMX_USB_PIPE_FLAGS_NEED_PING;
pipe->device_addr = device_addr;
pipe->endpoint_num = endpoint_num;
@@ -1143,9 +1115,9 @@
if (!interval)
interval = 1;
if (cvmx_usb_pipe_needs_split(usb, pipe)) {
- pipe->interval = interval*8;
+ pipe->interval = interval * 8;
/* Force start splits to be scheduled on uFrame 0 */
- pipe->next_tx_frame = ((usb->frame_number+7)&~7) +
+ pipe->next_tx_frame = ((usb->frame_number + 7) & ~7) +
pipe->interval;
} else {
pipe->interval = interval;
@@ -1166,7 +1138,6 @@
return pipe;
}
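The (frame_number + 7) & ~7 expression above rounds the frame counter up to the next multiple of 8 — the next uFrame 0 — before the interval is added. A quick worked check:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        for (uint64_t frame = 0; frame < 18; frame += 5)
                printf("%llu -> %llu\n",
                       (unsigned long long)frame,
                       (unsigned long long)((frame + 7) & ~7ULL));
        /* 0 -> 0, 5 -> 8, 10 -> 16, 15 -> 16 */
        return 0;
}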
-
/**
* Poll the RX FIFOs and remove data as needed. This function is only used
* in non DMA mode. It is very important that this function be called quickly
@@ -1174,13 +1145,13 @@
*
* @usb: USB device state populated by cvmx_usb_initialize().
*/
-static void cvmx_usb_poll_rx_fifo(struct cvmx_usb_state *usb)
+static void cvmx_usb_poll_rx_fifo(struct octeon_hcd *usb)
{
union cvmx_usbcx_grxstsph rx_status;
int channel;
int bytes;
- uint64_t address;
- uint32_t *ptr;
+ u64 address;
+ u32 *ptr;
rx_status.u32 = cvmx_usb_read_csr32(usb,
CVMX_USBCX_GRXSTSPH(usb->index));
@@ -1213,7 +1184,6 @@
CVMX_SYNCW;
}
-
/**
* Fill the TX hardware fifo with data out of the software
* fifos
@@ -1225,7 +1195,7 @@
* Returns: Non zero if the hardware fifo was too small and needs
* to be serviced again.
*/
-static int cvmx_usb_fill_tx_hw(struct cvmx_usb_state *usb,
+static int cvmx_usb_fill_tx_hw(struct octeon_hcd *usb,
struct cvmx_usb_tx_fifo *fifo, int available)
{
/*
@@ -1234,9 +1204,9 @@
*/
while (available && (fifo->head != fifo->tail)) {
int i = fifo->tail;
- const uint32_t *ptr = cvmx_phys_to_ptr(fifo->entry[i].address);
- uint64_t csr_address = USB_FIFO_ADDRESS(fifo->entry[i].channel,
- usb->index) ^ 4;
+ const u32 *ptr = cvmx_phys_to_ptr(fifo->entry[i].address);
+ u64 csr_address = USB_FIFO_ADDRESS(fifo->entry[i].channel,
+ usb->index) ^ 4;
int words = available;
/* Limit the amount of data to what the SW fifo has */
@@ -1275,13 +1245,12 @@
return fifo->head != fifo->tail;
}
-
/**
* Check the hardware FIFOs and fill them as needed
*
* @usb: USB device state populated by cvmx_usb_initialize().
*/
-static void cvmx_usb_poll_tx_fifo(struct cvmx_usb_state *usb)
+static void cvmx_usb_poll_tx_fifo(struct octeon_hcd *usb)
{
if (usb->periodic.head != usb->periodic.tail) {
union cvmx_usbcx_hptxsts tx_status;
@@ -1312,14 +1281,13 @@
}
}
-
/**
* Fill the TX FIFO with an outgoing packet
*
* @usb: USB device state populated by cvmx_usb_initialize().
* @channel: Channel number to get packet from
*/
-static void cvmx_usb_fill_tx_fifo(struct cvmx_usb_state *usb, int channel)
+static void cvmx_usb_fill_tx_fifo(struct octeon_hcd *usb, int channel)
{
union cvmx_usbcx_hccharx hcchar;
union cvmx_usbcx_hcspltx usbc_hcsplt;
@@ -1348,7 +1316,7 @@
return;
if ((hcchar.s.eptype == CVMX_USB_TRANSFER_INTERRUPT) ||
- (hcchar.s.eptype == CVMX_USB_TRANSFER_ISOCHRONOUS))
+ (hcchar.s.eptype == CVMX_USB_TRANSFER_ISOCHRONOUS))
fifo = &usb->periodic;
else
fifo = &usb->nonperiodic;
@@ -1357,7 +1325,7 @@
fifo->entry[fifo->head].address =
cvmx_read64_uint64(CVMX_USBNX_DMA0_OUTB_CHN0(usb->index) +
channel * 8);
- fifo->entry[fifo->head].size = (usbc_hctsiz.s.xfersize+3)>>2;
+ fifo->entry[fifo->head].size = (usbc_hctsiz.s.xfersize + 3) >> 2;
fifo->head++;
if (fifo->head > MAX_CHANNELS)
fifo->head = 0;
@@ -1373,12 +1341,11 @@
* @channel: Channel to setup
* @pipe: Pipe for control transaction
*/
-static void cvmx_usb_start_channel_control(struct cvmx_usb_state *usb,
+static void cvmx_usb_start_channel_control(struct octeon_hcd *usb,
int channel,
struct cvmx_usb_pipe *pipe)
{
- struct octeon_hcd *priv = cvmx_usb_to_octeon(usb);
- struct usb_hcd *hcd = octeon_to_hcd(priv);
+ struct usb_hcd *hcd = octeon_to_hcd(usb);
struct device *dev = hcd->self.controller;
struct cvmx_usb_transaction *transaction =
list_first_entry(&pipe->transactions, typeof(*transaction),
@@ -1488,9 +1455,9 @@
*/
packets_to_transfer = DIV_ROUND_UP(bytes_to_transfer,
pipe->max_packet);
- if (packets_to_transfer == 0)
+ if (packets_to_transfer == 0) {
packets_to_transfer = 1;
- else if ((packets_to_transfer > 1) &&
+ } else if ((packets_to_transfer > 1) &&
(usb->init_flags & CVMX_USB_INITIALIZE_FLAGS_NO_DMA)) {
/*
* Limit to one packet when not using DMA. Channels must be
@@ -1515,7 +1482,6 @@
usbc_hctsiz.u32);
}
-
/**
* Start a channel to perform the pipe's head transaction
*
@@ -1523,7 +1489,7 @@
* @channel: Channel to setup
* @pipe: Pipe to start
*/
-static void cvmx_usb_start_channel(struct cvmx_usb_state *usb, int channel,
+static void cvmx_usb_start_channel(struct octeon_hcd *usb, int channel,
struct cvmx_usb_pipe *pipe)
{
struct cvmx_usb_transaction *transaction =
@@ -1539,7 +1505,7 @@
pipe->flags |= CVMX_USB_PIPE_FLAGS_SCHEDULED;
/* Mark this channel as in use */
- usb->idle_hardware_channels &= ~(1<<channel);
+ usb->idle_hardware_channels &= ~(1 << channel);
/* Enable the channel interrupt bits */
{
@@ -1579,22 +1545,22 @@
usbc_hcintmsk.s.xfercomplmsk = 1;
}
cvmx_usb_write_csr32(usb,
- CVMX_USBCX_HCINTMSKX(channel, usb->index),
- usbc_hcintmsk.u32);
+ CVMX_USBCX_HCINTMSKX(channel, usb->index),
+ usbc_hcintmsk.u32);
/* Enable the channel interrupt to propagate */
usbc_haintmsk.u32 = cvmx_usb_read_csr32(usb,
CVMX_USBCX_HAINTMSK(usb->index));
- usbc_haintmsk.s.haintmsk |= 1<<channel;
+ usbc_haintmsk.s.haintmsk |= 1 << channel;
cvmx_usb_write_csr32(usb, CVMX_USBCX_HAINTMSK(usb->index),
usbc_haintmsk.u32);
}
/* Setup the location the DMA engine uses. */
{
- uint64_t reg;
- uint64_t dma_address = transaction->buffer +
- transaction->actual_bytes;
+ u64 reg;
+ u64 dma_address = transaction->buffer +
+ transaction->actual_bytes;
if (transaction->type == CVMX_USB_TRANSFER_ISOCHRONOUS)
dma_address = transaction->buffer +
@@ -1636,15 +1602,16 @@
* We only store the lower two bits since the time ahead
* can only be two frames
*/
- if ((transaction->stage&1) == 0) {
+ if ((transaction->stage & 1) == 0) {
if (transaction->type == CVMX_USB_TRANSFER_BULK)
pipe->split_sc_frame =
(usb->frame_number + 1) & 0x7f;
else
pipe->split_sc_frame =
(usb->frame_number + 2) & 0x7f;
- } else
+ } else {
pipe->split_sc_frame = -1;
+ }
usbc_hcsplt.s.spltena = 1;
usbc_hcsplt.s.hubaddr = pipe->hub_device_addr;
@@ -1666,10 +1633,9 @@
* begin/middle/end of the data or all
*/
if (!usbc_hcsplt.s.compsplt &&
- (pipe->transfer_dir ==
- CVMX_USB_DIRECTION_OUT) &&
- (pipe->transfer_type ==
- CVMX_USB_TRANSFER_ISOCHRONOUS)) {
+ (pipe->transfer_dir == CVMX_USB_DIRECTION_OUT) &&
+ (pipe->transfer_type ==
+ CVMX_USB_TRANSFER_ISOCHRONOUS)) {
/*
* Clear the split complete frame number as
* there isn't going to be a split complete
@@ -1732,11 +1698,11 @@
*/
packets_to_transfer =
DIV_ROUND_UP(bytes_to_transfer, pipe->max_packet);
- if (packets_to_transfer == 0)
+ if (packets_to_transfer == 0) {
packets_to_transfer = 1;
- else if ((packets_to_transfer > 1) &&
- (usb->init_flags &
- CVMX_USB_INITIALIZE_FLAGS_NO_DMA)) {
+ } else if ((packets_to_transfer > 1) &&
+ (usb->init_flags &
+ CVMX_USB_INITIALIZE_FLAGS_NO_DMA)) {
/*
* Limit to one packet when not using DMA. Channels must
* be restarted between every packet for IN
@@ -1783,7 +1749,7 @@
* Set the startframe odd/even properly. This is only used for
* periodic
*/
- usbc_hcchar.s.oddfrm = usb->frame_number&1;
+ usbc_hcchar.s.oddfrm = usb->frame_number & 1;
/*
* Set the number of back to back packets allowed by this
@@ -1843,9 +1809,11 @@
break;
}
{
- union cvmx_usbcx_hctsizx usbc_hctsiz = {.u32 =
+ union cvmx_usbcx_hctsizx usbc_hctsiz = { .u32 =
cvmx_usb_read_csr32(usb,
- CVMX_USBCX_HCTSIZX(channel, usb->index))};
+ CVMX_USBCX_HCTSIZX(channel,
+ usb->index))
+ };
transaction->xfersize = usbc_hctsiz.s.xfersize;
transaction->pktcnt = usbc_hctsiz.s.pktcnt;
}
@@ -1858,21 +1826,19 @@
cvmx_usb_fill_tx_fifo(usb, channel);
}
-
/**
* Find a pipe that is ready to be scheduled to hardware.
* @usb: USB device state populated by cvmx_usb_initialize().
- * @list: Pipe list to search
- * @current_frame:
- * Frame counter to use as a time reference.
+ * @xfer_type: Transfer type
*
* Returns: Pipe or NULL if none are ready
*/
static struct cvmx_usb_pipe *cvmx_usb_find_ready_pipe(
- struct cvmx_usb_state *usb,
- struct list_head *list,
- uint64_t current_frame)
+ struct octeon_hcd *usb,
+ enum cvmx_usb_transfer xfer_type)
{
+ struct list_head *list = usb->active_pipes + xfer_type;
+ u64 current_frame = usb->frame_number;
struct cvmx_usb_pipe *pipe;
list_for_each_entry(pipe, list, node) {
@@ -1880,11 +1846,11 @@
list_first_entry(&pipe->transactions, typeof(*t),
node);
if (!(pipe->flags & CVMX_USB_PIPE_FLAGS_SCHEDULED) && t &&
- (pipe->next_tx_frame <= current_frame) &&
- ((pipe->split_sc_frame == -1) ||
- ((((int)current_frame - (int)pipe->split_sc_frame)
- & 0x7f) < 0x40)) &&
- (!usb->active_split || (usb->active_split == t))) {
+ (pipe->next_tx_frame <= current_frame) &&
+ ((pipe->split_sc_frame == -1) ||
+ ((((int)current_frame - pipe->split_sc_frame) & 0x7f) <
+ 0x40)) &&
+ (!usb->active_split || (usb->active_split == t))) {
prefetch(t);
return pipe;
}
@@ -1892,6 +1858,32 @@
return NULL;
}
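The ((current_frame - split_sc_frame) & 0x7f) < 0x40 test above is a wrap-safe distance check on the 7-bit frame counter: the pipe stays eligible only while fewer than 64 frames have passed since the start split. A worked sketch:

#include <stdio.h>

static int within_window(int current_frame, int split_sc_frame)
{
        return ((current_frame - split_sc_frame) & 0x7f) < 0x40;
}

int main(void)
{
        printf("%d\n", within_window(10, 5));   /* 1: 5 frames elapsed */
        printf("%d\n", within_window(3, 120));  /* 1: 11 frames, across wrap */
        printf("%d\n", within_window(70, 5));   /* 0: 65 frames, too old */
        return 0;
}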
+static struct cvmx_usb_pipe *cvmx_usb_next_pipe(struct octeon_hcd *usb,
+ int is_sof)
+{
+ struct cvmx_usb_pipe *pipe;
+
+ /* Find a pipe needing service. */
+ if (is_sof) {
+ /*
+ * Only process periodic pipes on SOF interrupts. This way we
+ * are sure that the periodic data is sent at the beginning of
+ * the frame.
+ */
+ pipe = cvmx_usb_find_ready_pipe(usb,
+ CVMX_USB_TRANSFER_ISOCHRONOUS);
+ if (pipe)
+ return pipe;
+ pipe = cvmx_usb_find_ready_pipe(usb,
+ CVMX_USB_TRANSFER_INTERRUPT);
+ if (pipe)
+ return pipe;
+ }
+ pipe = cvmx_usb_find_ready_pipe(usb, CVMX_USB_TRANSFER_CONTROL);
+ if (pipe)
+ return pipe;
+ return cvmx_usb_find_ready_pipe(usb, CVMX_USB_TRANSFER_BULK);
+}
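The helper fixes the service order: periodic classes (isochronous, then interrupt) are considered only on SOF, followed by control and finally bulk. The same ordering written as a table, purely as a sketch against the driver's own types (not compile-tested against the tree):

/* Hedged sketch: cvmx_usb_next_pipe()'s priority order made explicit. */
static const enum cvmx_usb_transfer service_order[] = {
        CVMX_USB_TRANSFER_ISOCHRONOUS,  /* periodic, SOF only */
        CVMX_USB_TRANSFER_INTERRUPT,    /* periodic, SOF only */
        CVMX_USB_TRANSFER_CONTROL,
        CVMX_USB_TRANSFER_BULK,
};

static struct cvmx_usb_pipe *next_pipe_table(struct octeon_hcd *usb, int is_sof)
{
        struct cvmx_usb_pipe *pipe;
        int i = is_sof ? 0 : 2;         /* skip periodic classes off-SOF */

        for (; i < 4; i++) {
                pipe = cvmx_usb_find_ready_pipe(usb, service_order[i]);
                if (pipe)
                        return pipe;
        }
        return NULL;
}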
/**
* Called whenever a pipe might need to be scheduled to the
@@ -1900,7 +1892,7 @@
* @usb: USB device state populated by cvmx_usb_initialize().
* @is_sof: True if this schedule was called on a SOF interrupt.
*/
-static void cvmx_usb_schedule(struct cvmx_usb_state *usb, int is_sof)
+static void cvmx_usb_schedule(struct octeon_hcd *usb, int is_sof)
{
int channel;
struct cvmx_usb_pipe *pipe;
@@ -1922,7 +1914,7 @@
CVMX_USBCX_HFIR(usb->index))
};
- if (hfnum.s.frrem < hfir.s.frint/4)
+ if (hfnum.s.frrem < hfir.s.frint / 4)
goto done;
}
@@ -1932,35 +1924,7 @@
if (unlikely(channel > 7))
break;
- /* Find a pipe needing service */
- pipe = NULL;
- if (is_sof) {
- /*
- * Only process periodic pipes on SOF interrupts. This
- * way we are sure that the periodic data is sent in the
- * beginning of the frame
- */
- pipe = cvmx_usb_find_ready_pipe(usb,
- usb->active_pipes +
- CVMX_USB_TRANSFER_ISOCHRONOUS,
- usb->frame_number);
- if (likely(!pipe))
- pipe = cvmx_usb_find_ready_pipe(usb,
- usb->active_pipes +
- CVMX_USB_TRANSFER_INTERRUPT,
- usb->frame_number);
- }
- if (likely(!pipe)) {
- pipe = cvmx_usb_find_ready_pipe(usb,
- usb->active_pipes +
- CVMX_USB_TRANSFER_CONTROL,
- usb->frame_number);
- if (likely(!pipe))
- pipe = cvmx_usb_find_ready_pipe(usb,
- usb->active_pipes +
- CVMX_USB_TRANSFER_BULK,
- usb->frame_number);
- }
+ pipe = cvmx_usb_next_pipe(usb, is_sof);
if (!pipe)
break;
@@ -1974,7 +1938,7 @@
*/
need_sof = 0;
for (ttype = CVMX_USB_TRANSFER_CONTROL;
- ttype <= CVMX_USB_TRANSFER_INTERRUPT; ttype++) {
+ ttype <= CVMX_USB_TRANSFER_INTERRUPT; ttype++) {
list_for_each_entry(pipe, &usb->active_pipes[ttype], node) {
if (pipe->next_tx_frame > usb->frame_number) {
need_sof = 1;
@@ -1986,19 +1950,18 @@
cvmx_usbcx_gintmsk, sofmsk, need_sof);
}
-static void octeon_usb_urb_complete_callback(struct cvmx_usb_state *usb,
- enum cvmx_usb_complete status,
+static void octeon_usb_urb_complete_callback(struct octeon_hcd *usb,
+ enum cvmx_usb_status status,
struct cvmx_usb_pipe *pipe,
struct cvmx_usb_transaction
*transaction,
int bytes_transferred,
struct urb *urb)
{
- struct octeon_hcd *priv = cvmx_usb_to_octeon(usb);
- struct usb_hcd *hcd = octeon_to_hcd(priv);
+ struct usb_hcd *hcd = octeon_to_hcd(usb);
struct device *dev = hcd->self.controller;
- if (likely(status == CVMX_USB_COMPLETE_SUCCESS))
+ if (likely(status == CVMX_USB_STATUS_OK))
urb->actual_length = bytes_transferred;
else
urb->actual_length = 0;
@@ -2015,12 +1978,11 @@
* field.
*/
struct cvmx_usb_iso_packet *iso_packet =
- (struct cvmx_usb_iso_packet *) urb->setup_packet;
+ (struct cvmx_usb_iso_packet *)urb->setup_packet;
/* Recalculate the transfer size by adding up each packet */
urb->actual_length = 0;
for (i = 0; i < urb->number_of_packets; i++) {
- if (iso_packet[i].status ==
- CVMX_USB_COMPLETE_SUCCESS) {
+ if (iso_packet[i].status == CVMX_USB_STATUS_OK) {
urb->iso_frame_desc[i].status = 0;
urb->iso_frame_desc[i].actual_length =
iso_packet[i].length;
@@ -2040,41 +2002,41 @@
}
switch (status) {
- case CVMX_USB_COMPLETE_SUCCESS:
+ case CVMX_USB_STATUS_OK:
urb->status = 0;
break;
- case CVMX_USB_COMPLETE_CANCEL:
+ case CVMX_USB_STATUS_CANCEL:
if (urb->status == 0)
urb->status = -ENOENT;
break;
- case CVMX_USB_COMPLETE_STALL:
+ case CVMX_USB_STATUS_STALL:
dev_dbg(dev, "status=stall pipe=%p transaction=%p size=%d\n",
pipe, transaction, bytes_transferred);
urb->status = -EPIPE;
break;
- case CVMX_USB_COMPLETE_BABBLEERR:
+ case CVMX_USB_STATUS_BABBLEERR:
dev_dbg(dev, "status=babble pipe=%p transaction=%p size=%d\n",
pipe, transaction, bytes_transferred);
urb->status = -EPIPE;
break;
- case CVMX_USB_COMPLETE_SHORT:
+ case CVMX_USB_STATUS_SHORT:
dev_dbg(dev, "status=short pipe=%p transaction=%p size=%d\n",
pipe, transaction, bytes_transferred);
urb->status = -EREMOTEIO;
break;
- case CVMX_USB_COMPLETE_ERROR:
- case CVMX_USB_COMPLETE_XACTERR:
- case CVMX_USB_COMPLETE_DATATGLERR:
- case CVMX_USB_COMPLETE_FRAMEERR:
+ case CVMX_USB_STATUS_ERROR:
+ case CVMX_USB_STATUS_XACTERR:
+ case CVMX_USB_STATUS_DATATGLERR:
+ case CVMX_USB_STATUS_FRAMEERR:
dev_dbg(dev, "status=%d pipe=%p transaction=%p size=%d\n",
status, pipe, transaction, bytes_transferred);
urb->status = -EPROTO;
break;
}
- usb_hcd_unlink_urb_from_ep(octeon_to_hcd(priv), urb);
- spin_unlock(&priv->lock);
- usb_hcd_giveback_urb(octeon_to_hcd(priv), urb, urb->status);
- spin_lock(&priv->lock);
+ usb_hcd_unlink_urb_from_ep(octeon_to_hcd(usb), urb);
+ spin_unlock(&usb->lock);
+ usb_hcd_giveback_urb(octeon_to_hcd(usb), urb, urb->status);
+ spin_lock(&usb->lock);
}
/**
@@ -2088,10 +2050,10 @@
* @complete_code:
* Completion code
*/
-static void cvmx_usb_perform_complete(struct cvmx_usb_state *usb,
- struct cvmx_usb_pipe *pipe,
- struct cvmx_usb_transaction *transaction,
- enum cvmx_usb_complete complete_code)
+static void cvmx_usb_complete(struct octeon_hcd *usb,
+ struct cvmx_usb_pipe *pipe,
+ struct cvmx_usb_transaction *transaction,
+ enum cvmx_usb_status complete_code)
{
/* If this was a split then clear our split in progress marker */
if (usb->active_split == transaction)
@@ -2111,7 +2073,7 @@
* next one
*/
if ((transaction->iso_number_packets > 1) &&
- (complete_code == CVMX_USB_COMPLETE_SUCCESS)) {
+ (complete_code == CVMX_USB_STATUS_OK)) {
/* No bytes transferred for this packet as of yet */
transaction->actual_bytes = 0;
/* One less ISO waiting to transfer */
@@ -2134,7 +2096,6 @@
kfree(transaction);
}
-
/**
* Submit a usb transaction to a pipe. Called for all types
* of transactions.
@@ -2158,12 +2119,12 @@
* Returns: Transaction or NULL on failure.
*/
static struct cvmx_usb_transaction *cvmx_usb_submit_transaction(
- struct cvmx_usb_state *usb,
+ struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe,
enum cvmx_usb_transfer type,
- uint64_t buffer,
+ u64 buffer,
int buffer_length,
- uint64_t control_header,
+ u64 control_header,
int iso_start_frame,
int iso_number_packets,
struct cvmx_usb_iso_packet *iso_packets,
@@ -2209,7 +2170,6 @@
return transaction;
}
-
/**
* Call to submit a USB Bulk transfer to a pipe.
*
@@ -2220,7 +2180,7 @@
* Returns: A submitted transaction or NULL on failure.
*/
static struct cvmx_usb_transaction *cvmx_usb_submit_bulk(
- struct cvmx_usb_state *usb,
+ struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe,
struct urb *urb)
{
@@ -2234,7 +2194,6 @@
urb);
}
-
/**
* Call to submit a USB Interrupt transfer to a pipe.
*
@@ -2245,7 +2204,7 @@
* Returns: A submitted transaction or NULL on failure.
*/
static struct cvmx_usb_transaction *cvmx_usb_submit_interrupt(
- struct cvmx_usb_state *usb,
+ struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe,
struct urb *urb)
{
@@ -2260,7 +2219,6 @@
urb);
}
-
/**
* Call to submit a USB Control transfer to a pipe.
*
@@ -2271,12 +2229,12 @@
* Returns: A submitted transaction or NULL on failure.
*/
static struct cvmx_usb_transaction *cvmx_usb_submit_control(
- struct cvmx_usb_state *usb,
+ struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe,
struct urb *urb)
{
int buffer_length = urb->transfer_buffer_length;
- uint64_t control_header = urb->setup_dma;
+ u64 control_header = urb->setup_dma;
struct usb_ctrlrequest *header = cvmx_phys_to_ptr(control_header);
if ((header->bRequestType & USB_DIR_IN) == 0)
@@ -2292,7 +2250,6 @@
urb);
}
-
/**
* Call to submit a USB Isochronous transfer to a pipe.
*
@@ -2303,13 +2260,13 @@
* Returns: A submitted transaction or NULL on failure.
*/
static struct cvmx_usb_transaction *cvmx_usb_submit_isochronous(
- struct cvmx_usb_state *usb,
+ struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe,
struct urb *urb)
{
struct cvmx_usb_iso_packet *packets;
- packets = (struct cvmx_usb_iso_packet *) urb->setup_packet;
+ packets = (struct cvmx_usb_iso_packet *)urb->setup_packet;
return cvmx_usb_submit_transaction(usb, pipe,
CVMX_USB_TRANSFER_ISOCHRONOUS,
urb->transfer_dma,
@@ -2320,7 +2277,6 @@
packets, urb);
}
-
/**
* Cancel one outstanding request in a pipe. Canceling a request
* can fail if the transaction has already completed before cancel
@@ -2334,7 +2290,7 @@
*
* Returns: 0 or a negative error code.
*/
-static int cvmx_usb_cancel(struct cvmx_usb_state *usb,
+static int cvmx_usb_cancel(struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe,
struct cvmx_usb_transaction *transaction)
{
@@ -2360,17 +2316,15 @@
if (usbc_hcchar.s.chena) {
usbc_hcchar.s.chdis = 1;
cvmx_usb_write_csr32(usb,
- CVMX_USBCX_HCCHARX(pipe->channel,
- usb->index),
- usbc_hcchar.u32);
+ CVMX_USBCX_HCCHARX(pipe->channel,
+ usb->index),
+ usbc_hcchar.u32);
}
}
- cvmx_usb_perform_complete(usb, pipe, transaction,
- CVMX_USB_COMPLETE_CANCEL);
+ cvmx_usb_complete(usb, pipe, transaction, CVMX_USB_STATUS_CANCEL);
return 0;
}
-
/**
* Cancel all outstanding requests in a pipe. Logically all this
* does is call cvmx_usb_cancel() in a loop.
@@ -2380,7 +2334,7 @@
*
* Returns: 0 or a negative error code.
*/
-static int cvmx_usb_cancel_all(struct cvmx_usb_state *usb,
+static int cvmx_usb_cancel_all(struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe)
{
struct cvmx_usb_transaction *transaction, *next;
@@ -2395,7 +2349,6 @@
return 0;
}
-
/**
* Close a pipe created with cvmx_usb_open_pipe().
*
@@ -2405,7 +2358,7 @@
* Returns: 0 or a negative error code. EBUSY is returned if the pipe has
* outstanding transfers.
*/
-static int cvmx_usb_close_pipe(struct cvmx_usb_state *usb,
+static int cvmx_usb_close_pipe(struct octeon_hcd *usb,
struct cvmx_usb_pipe *pipe)
{
/* Fail if the pipe has pending transactions */
@@ -2426,7 +2379,7 @@
*
* Returns: USB frame number
*/
-static int cvmx_usb_get_frame_number(struct cvmx_usb_state *usb)
+static int cvmx_usb_get_frame_number(struct octeon_hcd *usb)
{
int frame_number;
union cvmx_usbcx_hfnum usbc_hfnum;
@@ -2437,6 +2390,197 @@
return frame_number;
}
+static void cvmx_usb_transfer_control(struct octeon_hcd *usb,
+ struct cvmx_usb_pipe *pipe,
+ struct cvmx_usb_transaction *transaction,
+ union cvmx_usbcx_hccharx usbc_hcchar,
+ int buffer_space_left,
+ int bytes_in_last_packet)
+{
+ switch (transaction->stage) {
+ case CVMX_USB_STAGE_NON_CONTROL:
+ case CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE:
+ /* This should be impossible */
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_ERROR);
+ break;
+ case CVMX_USB_STAGE_SETUP:
+ pipe->pid_toggle = 1;
+ if (cvmx_usb_pipe_needs_split(usb, pipe)) {
+ transaction->stage =
+ CVMX_USB_STAGE_SETUP_SPLIT_COMPLETE;
+ } else {
+ struct usb_ctrlrequest *header =
+ cvmx_phys_to_ptr(transaction->control_header);
+ if (header->wLength)
+ transaction->stage = CVMX_USB_STAGE_DATA;
+ else
+ transaction->stage = CVMX_USB_STAGE_STATUS;
+ }
+ break;
+ case CVMX_USB_STAGE_SETUP_SPLIT_COMPLETE:
+ {
+ struct usb_ctrlrequest *header =
+ cvmx_phys_to_ptr(transaction->control_header);
+ if (header->wLength)
+ transaction->stage = CVMX_USB_STAGE_DATA;
+ else
+ transaction->stage = CVMX_USB_STAGE_STATUS;
+ }
+ break;
+ case CVMX_USB_STAGE_DATA:
+ if (cvmx_usb_pipe_needs_split(usb, pipe)) {
+ transaction->stage = CVMX_USB_STAGE_DATA_SPLIT_COMPLETE;
+ /*
+ * For setup OUT data that are splits,
+ * the hardware doesn't appear to count
+ * transferred data. Here we manually
+ * update the data transferred
+ */
+ if (!usbc_hcchar.s.epdir) {
+ if (buffer_space_left < pipe->max_packet)
+ transaction->actual_bytes +=
+ buffer_space_left;
+ else
+ transaction->actual_bytes +=
+ pipe->max_packet;
+ }
+ } else if ((buffer_space_left == 0) ||
+ (bytes_in_last_packet < pipe->max_packet)) {
+ pipe->pid_toggle = 1;
+ transaction->stage = CVMX_USB_STAGE_STATUS;
+ }
+ break;
+ case CVMX_USB_STAGE_DATA_SPLIT_COMPLETE:
+ if ((buffer_space_left == 0) ||
+ (bytes_in_last_packet < pipe->max_packet)) {
+ pipe->pid_toggle = 1;
+ transaction->stage = CVMX_USB_STAGE_STATUS;
+ } else {
+ transaction->stage = CVMX_USB_STAGE_DATA;
+ }
+ break;
+ case CVMX_USB_STAGE_STATUS:
+ if (cvmx_usb_pipe_needs_split(usb, pipe))
+ transaction->stage =
+ CVMX_USB_STAGE_STATUS_SPLIT_COMPLETE;
+ else
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_OK);
+ break;
+ case CVMX_USB_STAGE_STATUS_SPLIT_COMPLETE:
+ cvmx_usb_complete(usb, pipe, transaction, CVMX_USB_STATUS_OK);
+ break;
+ }
+}
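A compact summary of the stage machine above; bracketed stages occur only when cvmx_usb_pipe_needs_split() is true:

/*
 * SETUP  -> [SETUP_SPLIT_COMPLETE]  -> DATA   (header->wLength != 0)
 *                                   -> STATUS (header->wLength == 0)
 * DATA   -> [DATA_SPLIT_COMPLETE]   -> DATA (more to move) or STATUS
 * STATUS -> [STATUS_SPLIT_COMPLETE] -> transaction complete
 */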
+
+static void cvmx_usb_transfer_bulk(struct octeon_hcd *usb,
+ struct cvmx_usb_pipe *pipe,
+ struct cvmx_usb_transaction *transaction,
+ union cvmx_usbcx_hcintx usbc_hcint,
+ int buffer_space_left,
+ int bytes_in_last_packet)
+{
+ /*
+ * The only time a bulk transfer isn't complete when it finishes with
+ * an ACK is during a split transaction. For splits we need to continue
+ * the transfer if more data is needed.
+ */
+ if (cvmx_usb_pipe_needs_split(usb, pipe)) {
+ if (transaction->stage == CVMX_USB_STAGE_NON_CONTROL)
+ transaction->stage =
+ CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE;
+ else if (buffer_space_left &&
+ (bytes_in_last_packet == pipe->max_packet))
+ transaction->stage = CVMX_USB_STAGE_NON_CONTROL;
+ else
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_OK);
+ } else {
+ if ((pipe->device_speed == CVMX_USB_SPEED_HIGH) &&
+ (pipe->transfer_dir == CVMX_USB_DIRECTION_OUT) &&
+ (usbc_hcint.s.nak))
+ pipe->flags |= CVMX_USB_PIPE_FLAGS_NEED_PING;
+ if (!buffer_space_left ||
+ (bytes_in_last_packet < pipe->max_packet))
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_OK);
+ }
+}
+
+static void cvmx_usb_transfer_intr(struct octeon_hcd *usb,
+ struct cvmx_usb_pipe *pipe,
+ struct cvmx_usb_transaction *transaction,
+ int buffer_space_left,
+ int bytes_in_last_packet)
+{
+ if (cvmx_usb_pipe_needs_split(usb, pipe)) {
+ if (transaction->stage == CVMX_USB_STAGE_NON_CONTROL) {
+ transaction->stage =
+ CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE;
+ } else if (buffer_space_left &&
+ (bytes_in_last_packet == pipe->max_packet)) {
+ transaction->stage = CVMX_USB_STAGE_NON_CONTROL;
+ } else {
+ pipe->next_tx_frame += pipe->interval;
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_OK);
+ }
+ } else if (!buffer_space_left ||
+ (bytes_in_last_packet < pipe->max_packet)) {
+ pipe->next_tx_frame += pipe->interval;
+ cvmx_usb_complete(usb, pipe, transaction, CVMX_USB_STATUS_OK);
+ }
+}
+
+static void cvmx_usb_transfer_isoc(struct octeon_hcd *usb,
+ struct cvmx_usb_pipe *pipe,
+ struct cvmx_usb_transaction *transaction,
+ int buffer_space_left,
+ int bytes_in_last_packet,
+ int bytes_this_transfer)
+{
+ if (cvmx_usb_pipe_needs_split(usb, pipe)) {
+ /*
+ * ISOCHRONOUS OUT splits don't require a complete split stage.
+ * Instead they use a sequence of begin OUT splits to transfer
+ * the data 188 bytes at a time. Once the transfer is complete,
+ * the pipe sleeps until the next schedule interval.
+ */
+ if (pipe->transfer_dir == CVMX_USB_DIRECTION_OUT) {
+ /*
+ * If no space left or this wasn't a max size packet
+ * then this transfer is complete. Otherwise start it
+ * again to send the next 188 bytes
+ */
+ if (!buffer_space_left || (bytes_this_transfer < 188)) {
+ pipe->next_tx_frame += pipe->interval;
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_OK);
+ }
+ return;
+ }
+ if (transaction->stage ==
+ CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE) {
+ /*
+ * We are in the incoming data phase. Keep getting data
+ * until we run out of space or get a small packet
+ */
+ if ((buffer_space_left == 0) ||
+ (bytes_in_last_packet < pipe->max_packet)) {
+ pipe->next_tx_frame += pipe->interval;
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_OK);
+ }
+ } else {
+ transaction->stage =
+ CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE;
+ }
+ } else {
+ pipe->next_tx_frame += pipe->interval;
+ cvmx_usb_complete(usb, pipe, transaction, CVMX_USB_STATUS_OK);
+ }
+}
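Each begin split for isochronous OUT moves at most 188 bytes (one microframe's worth of full-speed bandwidth), so a larger payload goes out as a series of begin splits. A worked chunking example, illustrative only:

#include <stdio.h>

int main(void)
{
        int remaining = 512;            /* hypothetical ISO OUT payload */

        while (remaining > 0) {
                int chunk = remaining < 188 ? remaining : 188;

                printf("begin split: %d bytes\n", chunk);
                remaining -= chunk;
        }
        /* prints 188, 188, 136 */
        return 0;
}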
/**
* Poll a channel for status
@@ -2446,10 +2590,9 @@
*
* Returns: Zero on success
*/
-static int cvmx_usb_poll_channel(struct cvmx_usb_state *usb, int channel)
+static int cvmx_usb_poll_channel(struct octeon_hcd *usb, int channel)
{
- struct octeon_hcd *priv = cvmx_usb_to_octeon(usb);
- struct usb_hcd *hcd = octeon_to_hcd(priv);
+ struct usb_hcd *hcd = octeon_to_hcd(usb);
struct device *dev = hcd->self.controller;
union cvmx_usbcx_hcintx usbc_hcint;
union cvmx_usbcx_hctsizx usbc_hctsiz;
@@ -2476,9 +2619,9 @@
* write of HCCHARX without changing things
*/
cvmx_usb_write_csr32(usb,
- CVMX_USBCX_HCCHARX(channel,
- usb->index),
- usbc_hcchar.u32);
+ CVMX_USBCX_HCCHARX(channel,
+ usb->index),
+ usbc_hcchar.u32);
return 0;
}
@@ -2493,14 +2636,12 @@
hcintmsk.u32 = 0;
hcintmsk.s.chhltdmsk = 1;
cvmx_usb_write_csr32(usb,
- CVMX_USBCX_HCINTMSKX(channel,
- usb->index),
- hcintmsk.u32);
+ CVMX_USBCX_HCINTMSKX(channel, usb->index),
+ hcintmsk.u32);
usbc_hcchar.s.chdis = 1;
cvmx_usb_write_csr32(usb,
- CVMX_USBCX_HCCHARX(channel,
- usb->index),
- usbc_hcchar.u32);
+ CVMX_USBCX_HCCHARX(channel, usb->index),
+ usbc_hcchar.u32);
return 0;
} else if (usbc_hcint.s.xfercompl) {
/*
@@ -2524,7 +2665,7 @@
/* Disable the channel interrupts now that it is done */
cvmx_usb_write_csr32(usb, CVMX_USBCX_HCINTMSKX(channel, usb->index), 0);
- usb->idle_hardware_channels |= (1<<channel);
+ usb->idle_hardware_channels |= (1 << channel);
/* Make sure this channel is tied to a valid pipe */
pipe = usb->pipe_for_channel[channel];
@@ -2594,7 +2735,7 @@
* transferred
*/
if ((transaction->stage == CVMX_USB_STAGE_SETUP) ||
- (transaction->stage == CVMX_USB_STAGE_SETUP_SPLIT_COMPLETE))
+ (transaction->stage == CVMX_USB_STAGE_SETUP_SPLIT_COMPLETE))
bytes_this_transfer = 0;
/*
@@ -2622,8 +2763,8 @@
* will clear this flag
*/
if ((pipe->device_speed == CVMX_USB_SPEED_HIGH) &&
- (pipe->transfer_type == CVMX_USB_TRANSFER_BULK) &&
- (pipe->transfer_dir == CVMX_USB_DIRECTION_OUT))
+ (pipe->transfer_type == CVMX_USB_TRANSFER_BULK) &&
+ (pipe->transfer_dir == CVMX_USB_DIRECTION_OUT))
pipe->flags |= CVMX_USB_PIPE_FLAGS_NEED_PING;
if (unlikely(WARN_ON_ONCE(bytes_this_transfer < 0))) {
@@ -2632,8 +2773,8 @@
* keeps subtracting the same byte count over and over again. In
* that case we just need to fail every transaction.
*/
- cvmx_usb_perform_complete(usb, pipe, transaction,
- CVMX_USB_COMPLETE_ERROR);
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_ERROR);
return 0;
}
@@ -2645,24 +2786,24 @@
* the actual bytes transferred
*/
pipe->pid_toggle = 0;
- cvmx_usb_perform_complete(usb, pipe, transaction,
- CVMX_USB_COMPLETE_STALL);
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_STALL);
} else if (usbc_hcint.s.xacterr) {
/*
* XactErr as a response means the device signaled
* something wrong with the transfer. For example, PID
* toggle errors cause these.
*/
- cvmx_usb_perform_complete(usb, pipe, transaction,
- CVMX_USB_COMPLETE_XACTERR);
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_XACTERR);
} else if (usbc_hcint.s.bblerr) {
/* Babble Error (BblErr) */
- cvmx_usb_perform_complete(usb, pipe, transaction,
- CVMX_USB_COMPLETE_BABBLEERR);
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_BABBLEERR);
} else if (usbc_hcint.s.datatglerr) {
/* Data toggle error */
- cvmx_usb_perform_complete(usb, pipe, transaction,
- CVMX_USB_COMPLETE_DATATGLERR);
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_DATATGLERR);
} else if (usbc_hcint.s.nyet) {
/*
* NYET as a response is only allowed in three cases: as a
@@ -2677,10 +2818,10 @@
* again. Otherwise this transaction is complete
*/
if ((buffer_space_left == 0) ||
- (bytes_in_last_packet < pipe->max_packet))
- cvmx_usb_perform_complete(usb, pipe,
- transaction,
- CVMX_USB_COMPLETE_SUCCESS);
+ (bytes_in_last_packet < pipe->max_packet))
+ cvmx_usb_complete(usb, pipe,
+ transaction,
+ CVMX_USB_STATUS_OK);
} else {
/*
* Split transactions retry the split complete 4 times
@@ -2714,205 +2855,26 @@
switch (transaction->type) {
case CVMX_USB_TRANSFER_CONTROL:
- switch (transaction->stage) {
- case CVMX_USB_STAGE_NON_CONTROL:
- case CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE:
- /* This should be impossible */
- cvmx_usb_perform_complete(usb, pipe,
- transaction, CVMX_USB_COMPLETE_ERROR);
- break;
- case CVMX_USB_STAGE_SETUP:
- pipe->pid_toggle = 1;
- if (cvmx_usb_pipe_needs_split(usb, pipe))
- transaction->stage =
- CVMX_USB_STAGE_SETUP_SPLIT_COMPLETE;
- else {
- struct usb_ctrlrequest *header =
- cvmx_phys_to_ptr(transaction->control_header);
- if (header->wLength)
- transaction->stage =
- CVMX_USB_STAGE_DATA;
- else
- transaction->stage =
- CVMX_USB_STAGE_STATUS;
- }
- break;
- case CVMX_USB_STAGE_SETUP_SPLIT_COMPLETE:
- {
- struct usb_ctrlrequest *header =
- cvmx_phys_to_ptr(transaction->control_header);
- if (header->wLength)
- transaction->stage =
- CVMX_USB_STAGE_DATA;
- else
- transaction->stage =
- CVMX_USB_STAGE_STATUS;
- }
- break;
- case CVMX_USB_STAGE_DATA:
- if (cvmx_usb_pipe_needs_split(usb, pipe)) {
- transaction->stage =
- CVMX_USB_STAGE_DATA_SPLIT_COMPLETE;
- /*
- * For setup OUT data that are splits,
- * the hardware doesn't appear to count
- * transferred data. Here we manually
- * update the data transferred
- */
- if (!usbc_hcchar.s.epdir) {
- if (buffer_space_left < pipe->max_packet)
- transaction->actual_bytes +=
- buffer_space_left;
- else
- transaction->actual_bytes +=
- pipe->max_packet;
- }
- } else if ((buffer_space_left == 0) ||
- (bytes_in_last_packet <
- pipe->max_packet)) {
- pipe->pid_toggle = 1;
- transaction->stage =
- CVMX_USB_STAGE_STATUS;
- }
- break;
- case CVMX_USB_STAGE_DATA_SPLIT_COMPLETE:
- if ((buffer_space_left == 0) ||
- (bytes_in_last_packet <
- pipe->max_packet)) {
- pipe->pid_toggle = 1;
- transaction->stage =
- CVMX_USB_STAGE_STATUS;
- } else {
- transaction->stage =
- CVMX_USB_STAGE_DATA;
- }
- break;
- case CVMX_USB_STAGE_STATUS:
- if (cvmx_usb_pipe_needs_split(usb, pipe))
- transaction->stage =
- CVMX_USB_STAGE_STATUS_SPLIT_COMPLETE;
- else
- cvmx_usb_perform_complete(usb, pipe,
- transaction,
- CVMX_USB_COMPLETE_SUCCESS);
- break;
- case CVMX_USB_STAGE_STATUS_SPLIT_COMPLETE:
- cvmx_usb_perform_complete(usb, pipe,
- transaction,
- CVMX_USB_COMPLETE_SUCCESS);
- break;
- }
+ cvmx_usb_transfer_control(usb, pipe, transaction,
+ usbc_hcchar,
+ buffer_space_left,
+ bytes_in_last_packet);
break;
case CVMX_USB_TRANSFER_BULK:
+ cvmx_usb_transfer_bulk(usb, pipe, transaction,
+ usbc_hcint, buffer_space_left,
+ bytes_in_last_packet);
+ break;
case CVMX_USB_TRANSFER_INTERRUPT:
- /*
- * The only time a bulk transfer isn't complete when it
- * finishes with an ACK is during a split transaction.
- * For splits we need to continue the transfer if more
- * data is needed
- */
- if (cvmx_usb_pipe_needs_split(usb, pipe)) {
- if (transaction->stage ==
- CVMX_USB_STAGE_NON_CONTROL)
- transaction->stage =
- CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE;
- else {
- if (buffer_space_left &&
- (bytes_in_last_packet ==
- pipe->max_packet))
- transaction->stage =
- CVMX_USB_STAGE_NON_CONTROL;
- else {
- if (transaction->type ==
- CVMX_USB_TRANSFER_INTERRUPT)
- pipe->next_tx_frame +=
- pipe->interval;
- cvmx_usb_perform_complete(
- usb,
- pipe,
- transaction,
- CVMX_USB_COMPLETE_SUCCESS);
- }
- }
- } else {
- if ((pipe->device_speed ==
- CVMX_USB_SPEED_HIGH) &&
- (pipe->transfer_type ==
- CVMX_USB_TRANSFER_BULK) &&
- (pipe->transfer_dir ==
- CVMX_USB_DIRECTION_OUT) &&
- (usbc_hcint.s.nak))
- pipe->flags |=
- CVMX_USB_PIPE_FLAGS_NEED_PING;
- if (!buffer_space_left ||
- (bytes_in_last_packet <
- pipe->max_packet)) {
- if (transaction->type ==
- CVMX_USB_TRANSFER_INTERRUPT)
- pipe->next_tx_frame +=
- pipe->interval;
- cvmx_usb_perform_complete(usb, pipe,
- transaction,
- CVMX_USB_COMPLETE_SUCCESS);
- }
- }
+ cvmx_usb_transfer_intr(usb, pipe, transaction,
+ buffer_space_left,
+ bytes_in_last_packet);
break;
case CVMX_USB_TRANSFER_ISOCHRONOUS:
- if (cvmx_usb_pipe_needs_split(usb, pipe)) {
- /*
- * ISOCHRONOUS OUT splits don't require a
- * complete split stage. Instead they use a
- * sequence of begin OUT splits to transfer the
- * data 188 bytes at a time. Once the transfer
- * is complete, the pipe sleeps until the next
- * schedule interval
- */
- if (pipe->transfer_dir ==
- CVMX_USB_DIRECTION_OUT) {
- /*
- * If no space left or this wasn't a max
- * size packet then this transfer is
- * complete. Otherwise start it again to
- * send the next 188 bytes
- */
- if (!buffer_space_left ||
- (bytes_this_transfer < 188)) {
- pipe->next_tx_frame +=
- pipe->interval;
- cvmx_usb_perform_complete(usb,
- pipe, transaction,
- CVMX_USB_COMPLETE_SUCCESS);
- }
- } else {
- if (transaction->stage ==
- CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE) {
- /*
- * We are in the incoming data
- * phase. Keep getting data
- * until we run out of space or
- * get a small packet
- */
- if ((buffer_space_left == 0) ||
- (bytes_in_last_packet <
- pipe->max_packet)) {
- pipe->next_tx_frame +=
- pipe->interval;
- cvmx_usb_perform_complete(
- usb,
- pipe,
- transaction,
- CVMX_USB_COMPLETE_SUCCESS);
- }
- } else
- transaction->stage =
- CVMX_USB_STAGE_NON_CONTROL_SPLIT_COMPLETE;
- }
- } else {
- pipe->next_tx_frame += pipe->interval;
- cvmx_usb_perform_complete(usb, pipe,
- transaction,
- CVMX_USB_COMPLETE_SUCCESS);
- }
+ cvmx_usb_transfer_isoc(usb, pipe, transaction,
+ buffer_space_left,
+ bytes_in_last_packet,
+ bytes_this_transfer);
break;
}
} else if (usbc_hcint.s.nak) {
@@ -2947,20 +2909,18 @@
* We get channel halted interrupts with no result bits
* sets when the cable is unplugged
*/
- cvmx_usb_perform_complete(usb, pipe, transaction,
- CVMX_USB_COMPLETE_ERROR);
+ cvmx_usb_complete(usb, pipe, transaction,
+ CVMX_USB_STATUS_ERROR);
}
}
return 0;
}
-static void octeon_usb_port_callback(struct cvmx_usb_state *usb)
+static void octeon_usb_port_callback(struct octeon_hcd *usb)
{
- struct octeon_hcd *priv = cvmx_usb_to_octeon(usb);
-
- spin_unlock(&priv->lock);
- usb_hcd_poll_rh_status(octeon_to_hcd(priv));
- spin_lock(&priv->lock);
+ spin_unlock(&usb->lock);
+ usb_hcd_poll_rh_status(octeon_to_hcd(usb));
+ spin_lock(&usb->lock);
}
/**
@@ -2973,7 +2933,7 @@
*
* Returns: 0 or a negative error code.
*/
-static int cvmx_usb_poll(struct cvmx_usb_state *usb)
+static int cvmx_usb_poll(struct octeon_hcd *usb)
{
union cvmx_usbcx_hfnum usbc_hfnum;
union cvmx_usbcx_gintsts usbc_gintsts;
@@ -2982,7 +2942,7 @@
/* Update the frame counter */
usbc_hfnum.u32 = cvmx_usb_read_csr32(usb, CVMX_USBCX_HFNUM(usb->index));
- if ((usb->frame_number&0x3fff) > usbc_hfnum.s.frnum)
+ if ((usb->frame_number & 0x3fff) > usbc_hfnum.s.frnum)
usb->frame_number += 0x4000;
usb->frame_number &= ~0x3fffull;
usb->frame_number |= usbc_hfnum.s.frnum;
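The hardware frame number is only 14 bits wide; the code above widens it to 64 bits by treating a smaller raw value as a rollover and bumping the high part. A standalone model of the same arithmetic:

#include <stdint.h>
#include <stdio.h>

static uint64_t extend_frame(uint64_t soft, uint32_t hw14)
{
        if ((soft & 0x3fff) > hw14)     /* 14-bit counter wrapped */
                soft += 0x4000;
        soft &= ~0x3fffull;
        soft |= hw14;
        return soft;
}

int main(void)
{
        uint64_t f = 0x3ffe;

        f = extend_frame(f, 0x3fff);    /* no wrap: 0x3fff */
        f = extend_frame(f, 0x0002);    /* wrap:    0x4002 */
        printf("0x%llx\n", (unsigned long long)f);
        return 0;
}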
@@ -3029,8 +2989,8 @@
*/
octeon_usb_port_callback(usb);
/* Clear the port change bits */
- usbc_hprt.u32 = cvmx_usb_read_csr32(usb,
- CVMX_USBCX_HPRT(usb->index));
+ usbc_hprt.u32 =
+ cvmx_usb_read_csr32(usb, CVMX_USBCX_HPRT(usb->index));
usbc_hprt.s.prtena = 0;
cvmx_usb_write_csr32(usb, CVMX_USBCX_HPRT(usb->index),
usbc_hprt.u32);
@@ -3057,7 +3017,7 @@
channel = __fls(usbc_haint.u32);
cvmx_usb_poll_channel(usb, channel);
- usbc_haint.u32 ^= 1<<channel;
+ usbc_haint.u32 ^= 1 << channel;
}
}
@@ -3074,12 +3034,12 @@
static irqreturn_t octeon_usb_irq(struct usb_hcd *hcd)
{
- struct octeon_hcd *priv = hcd_to_octeon(hcd);
+ struct octeon_hcd *usb = hcd_to_octeon(hcd);
unsigned long flags;
- spin_lock_irqsave(&priv->lock, flags);
- cvmx_usb_poll(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ cvmx_usb_poll(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
return IRQ_HANDLED;
}
@@ -3096,16 +3056,16 @@
static int octeon_usb_get_frame_number(struct usb_hcd *hcd)
{
- struct octeon_hcd *priv = hcd_to_octeon(hcd);
+ struct octeon_hcd *usb = hcd_to_octeon(hcd);
- return cvmx_usb_get_frame_number(&priv->usb);
+ return cvmx_usb_get_frame_number(usb);
}
static int octeon_usb_urb_enqueue(struct usb_hcd *hcd,
struct urb *urb,
gfp_t mem_flags)
{
- struct octeon_hcd *priv = hcd_to_octeon(hcd);
+ struct octeon_hcd *usb = hcd_to_octeon(hcd);
struct device *dev = hcd->self.controller;
struct cvmx_usb_transaction *transaction = NULL;
struct cvmx_usb_pipe *pipe;
@@ -3115,11 +3075,11 @@
int rc;
urb->status = 0;
- spin_lock_irqsave(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
rc = usb_hcd_link_urb_to_ep(hcd, urb);
if (rc) {
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&usb->lock, flags);
return rc;
}
@@ -3185,7 +3145,7 @@
dev = dev->parent;
}
}
- pipe = cvmx_usb_open_pipe(&priv->usb, usb_pipedevice(urb->pipe),
+ pipe = cvmx_usb_open_pipe(usb, usb_pipedevice(urb->pipe),
usb_pipeendpoint(urb->pipe), speed,
le16_to_cpu(ep->desc.wMaxPacketSize)
& 0x7ff,
@@ -3199,7 +3159,7 @@
split_device, split_port);
if (!pipe) {
usb_hcd_unlink_urb_from_ep(hcd, urb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&usb->lock, flags);
dev_dbg(dev, "Failed to create pipe\n");
return -ENOMEM;
}
@@ -3228,8 +3188,7 @@
urb->iso_frame_desc[i].offset;
iso_packet[i].length =
urb->iso_frame_desc[i].length;
- iso_packet[i].status =
- CVMX_USB_COMPLETE_ERROR;
+ iso_packet[i].status = CVMX_USB_STATUS_ERROR;
}
/*
* Store a pointer to the list in the URB setup_packet
@@ -3237,7 +3196,7 @@
* this saves us a bunch of logic.
*/
urb->setup_packet = (char *)iso_packet;
- transaction = cvmx_usb_submit_isochronous(&priv->usb,
+ transaction = cvmx_usb_submit_isochronous(usb,
pipe, urb);
/*
* If submit failed we need to free our private packet
@@ -3253,29 +3212,29 @@
dev_dbg(dev, "Submit interrupt to %d.%d\n",
usb_pipedevice(urb->pipe),
usb_pipeendpoint(urb->pipe));
- transaction = cvmx_usb_submit_interrupt(&priv->usb, pipe, urb);
+ transaction = cvmx_usb_submit_interrupt(usb, pipe, urb);
break;
case PIPE_CONTROL:
dev_dbg(dev, "Submit control to %d.%d\n",
usb_pipedevice(urb->pipe),
usb_pipeendpoint(urb->pipe));
- transaction = cvmx_usb_submit_control(&priv->usb, pipe, urb);
+ transaction = cvmx_usb_submit_control(usb, pipe, urb);
break;
case PIPE_BULK:
dev_dbg(dev, "Submit bulk to %d.%d\n",
usb_pipedevice(urb->pipe),
usb_pipeendpoint(urb->pipe));
- transaction = cvmx_usb_submit_bulk(&priv->usb, pipe, urb);
+ transaction = cvmx_usb_submit_bulk(usb, pipe, urb);
break;
}
if (!transaction) {
usb_hcd_unlink_urb_from_ep(hcd, urb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&usb->lock, flags);
dev_dbg(dev, "Failed to submit\n");
return -ENOMEM;
}
urb->hcpriv = transaction;
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&usb->lock, flags);
return 0;
}
@@ -3283,24 +3242,24 @@
struct urb *urb,
int status)
{
- struct octeon_hcd *priv = hcd_to_octeon(hcd);
+ struct octeon_hcd *usb = hcd_to_octeon(hcd);
unsigned long flags;
int rc;
if (!urb->dev)
return -EINVAL;
- spin_lock_irqsave(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
rc = usb_hcd_check_unlink_urb(hcd, urb, status);
if (rc)
goto out;
urb->status = status;
- cvmx_usb_cancel(&priv->usb, urb->ep->hcpriv, urb->hcpriv);
+ cvmx_usb_cancel(usb, urb->ep->hcpriv, urb->hcpriv);
out:
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&usb->lock, flags);
return rc;
}
@@ -3311,28 +3270,28 @@
struct device *dev = hcd->self.controller;
if (ep->hcpriv) {
- struct octeon_hcd *priv = hcd_to_octeon(hcd);
+ struct octeon_hcd *usb = hcd_to_octeon(hcd);
struct cvmx_usb_pipe *pipe = ep->hcpriv;
unsigned long flags;
- spin_lock_irqsave(&priv->lock, flags);
- cvmx_usb_cancel_all(&priv->usb, pipe);
- if (cvmx_usb_close_pipe(&priv->usb, pipe))
+ spin_lock_irqsave(&usb->lock, flags);
+ cvmx_usb_cancel_all(usb, pipe);
+ if (cvmx_usb_close_pipe(usb, pipe))
dev_dbg(dev, "Closing pipe %p failed\n", pipe);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&usb->lock, flags);
ep->hcpriv = NULL;
}
}
static int octeon_usb_hub_status_data(struct usb_hcd *hcd, char *buf)
{
- struct octeon_hcd *priv = hcd_to_octeon(hcd);
+ struct octeon_hcd *usb = hcd_to_octeon(hcd);
struct cvmx_usb_port_status port_status;
unsigned long flags;
- spin_lock_irqsave(&priv->lock, flags);
- port_status = cvmx_usb_get_status(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ port_status = cvmx_usb_get_status(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
buf[0] = 0;
buf[0] = port_status.connect_change << 1;
@@ -3340,12 +3299,11 @@
}
static int octeon_usb_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
- u16 wIndex, char *buf, u16 wLength)
+ u16 wIndex, char *buf, u16 wLength)
{
- struct octeon_hcd *priv = hcd_to_octeon(hcd);
+ struct octeon_hcd *usb = hcd_to_octeon(hcd);
struct device *dev = hcd->self.controller;
struct cvmx_usb_port_status usb_port_status;
- struct cvmx_usb_state *usb = &priv->usb;
int port_status;
struct usb_hub_descriptor *desc;
unsigned long flags;
@@ -3372,9 +3330,9 @@
switch (wValue) {
case USB_PORT_FEAT_ENABLE:
dev_dbg(dev, " ENABLE\n");
- spin_lock_irqsave(&priv->lock, flags);
- cvmx_usb_disable(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ cvmx_usb_disable(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
break;
case USB_PORT_FEAT_SUSPEND:
dev_dbg(dev, " SUSPEND\n");
@@ -3391,20 +3349,18 @@
case USB_PORT_FEAT_C_CONNECTION:
dev_dbg(dev, " C_CONNECTION\n");
/* Clears the driver's internal connect status change flag */
- spin_lock_irqsave(&priv->lock, flags);
- priv->usb.port_status =
- cvmx_usb_get_status(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ usb->port_status = cvmx_usb_get_status(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
break;
case USB_PORT_FEAT_C_RESET:
dev_dbg(dev, " C_RESET\n");
/*
* Clears the driver's internal Port Reset Change flag.
*/
- spin_lock_irqsave(&priv->lock, flags);
- priv->usb.port_status =
- cvmx_usb_get_status(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ usb->port_status = cvmx_usb_get_status(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
break;
case USB_PORT_FEAT_C_ENABLE:
dev_dbg(dev, " C_ENABLE\n");
@@ -3412,10 +3368,9 @@
* Clears the driver's internal Port Enable/Disable
* Change flag.
*/
- spin_lock_irqsave(&priv->lock, flags);
- priv->usb.port_status =
- cvmx_usb_get_status(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ usb->port_status = cvmx_usb_get_status(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
break;
case USB_PORT_FEAT_C_SUSPEND:
dev_dbg(dev, " C_SUSPEND\n");
@@ -3428,10 +3383,9 @@
case USB_PORT_FEAT_C_OVER_CURRENT:
dev_dbg(dev, " C_OVER_CURRENT\n");
/* Clears the driver's overcurrent Change flag */
- spin_lock_irqsave(&priv->lock, flags);
- priv->usb.port_status =
- cvmx_usb_get_status(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ usb->port_status = cvmx_usb_get_status(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
break;
default:
dev_dbg(dev, " UNKNOWN\n");
@@ -3452,7 +3406,7 @@
break;
case GetHubStatus:
dev_dbg(dev, "GetHubStatus\n");
- *(__le32 *) buf = 0;
+ *(__le32 *)buf = 0;
break;
case GetPortStatus:
dev_dbg(dev, "GetPortStatus\n");
@@ -3461,9 +3415,9 @@
return -EINVAL;
}
- spin_lock_irqsave(&priv->lock, flags);
- usb_port_status = cvmx_usb_get_status(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ usb_port_status = cvmx_usb_get_status(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
port_status = 0;
if (usb_port_status.connect_change) {
@@ -3504,7 +3458,7 @@
dev_dbg(dev, " LOWSPEED\n");
}
- *((__le32 *) buf) = cpu_to_le32(port_status);
+ *((__le32 *)buf) = cpu_to_le32(port_status);
break;
case SetHubFeature:
dev_dbg(dev, "SetHubFeature\n");
@@ -3526,16 +3480,16 @@
/*
* Program the port power bit to drive VBUS on the USB.
*/
- spin_lock_irqsave(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
USB_SET_FIELD32(CVMX_USBCX_HPRT(usb->index),
cvmx_usbcx_hprt, prtpwr, 1);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_unlock_irqrestore(&usb->lock, flags);
return 0;
case USB_PORT_FEAT_RESET:
dev_dbg(dev, " RESET\n");
- spin_lock_irqsave(&priv->lock, flags);
- cvmx_usb_reset_port(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ cvmx_usb_reset_port(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
return 0;
case USB_PORT_FEAT_INDICATOR:
dev_dbg(dev, " INDICATOR\n");
@@ -3580,14 +3534,14 @@
struct device_node *usbn_node;
int irq = platform_get_irq(pdev, 0);
struct device *dev = &pdev->dev;
- struct octeon_hcd *priv;
+ struct octeon_hcd *usb;
struct usb_hcd *hcd;
u32 clock_rate = 48000000;
bool is_crystal_clock = false;
const char *clock_type;
int i;
- if (dev->of_node == NULL) {
+ if (!dev->of_node) {
dev_err(dev, "Error: empty of_node\n");
return -ENXIO;
}
@@ -3614,9 +3568,8 @@
break;
default:
dev_err(dev, "Illegal USBN \"clock-frequency\" %u\n",
- clock_rate);
+ clock_rate);
return -ENXIO;
-
}
i = of_property_read_string(usbn_node,
@@ -3634,7 +3587,7 @@
initialize_flags |= CVMX_USB_INITIALIZE_FLAGS_CLOCK_XO_GND;
res_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (res_mem == NULL) {
+ if (!res_mem) {
dev_err(dev, "found no memory resource\n");
return -ENXIO;
}
@@ -3680,31 +3633,31 @@
return -1;
}
hcd->uses_new_polling = 1;
- priv = (struct octeon_hcd *)hcd->hcd_priv;
+ usb = (struct octeon_hcd *)hcd->hcd_priv;
- spin_lock_init(&priv->lock);
+ spin_lock_init(&usb->lock);
- priv->usb.init_flags = initialize_flags;
+ usb->init_flags = initialize_flags;
/* Initialize the USB state structure */
- priv->usb.index = usb_num;
- INIT_LIST_HEAD(&priv->usb.idle_pipes);
- for (i = 0; i < ARRAY_SIZE(priv->usb.active_pipes); i++)
- INIT_LIST_HEAD(&priv->usb.active_pipes[i]);
+ usb->index = usb_num;
+ INIT_LIST_HEAD(&usb->idle_pipes);
+ for (i = 0; i < ARRAY_SIZE(usb->active_pipes); i++)
+ INIT_LIST_HEAD(&usb->active_pipes[i]);
/* Due to an errata, CN31XX doesn't support DMA */
if (OCTEON_IS_MODEL(OCTEON_CN31XX)) {
- priv->usb.init_flags |= CVMX_USB_INITIALIZE_FLAGS_NO_DMA;
+ usb->init_flags |= CVMX_USB_INITIALIZE_FLAGS_NO_DMA;
/* Only use one channel with non DMA */
- priv->usb.idle_hardware_channels = 0x1;
+ usb->idle_hardware_channels = 0x1;
} else if (OCTEON_IS_MODEL(OCTEON_CN5XXX)) {
/* CN5XXX have an errata with channel 3 */
- priv->usb.idle_hardware_channels = 0xf7;
+ usb->idle_hardware_channels = 0xf7;
} else {
- priv->usb.idle_hardware_channels = 0xff;
+ usb->idle_hardware_channels = 0xff;
}
- status = cvmx_usb_initialize(dev, &priv->usb);
+ status = cvmx_usb_initialize(dev, usb);
if (status) {
dev_dbg(dev, "USB initialization failed with %d\n", status);
kfree(hcd);
@@ -3729,13 +3682,13 @@
int status;
struct device *dev = &pdev->dev;
struct usb_hcd *hcd = dev_get_drvdata(dev);
- struct octeon_hcd *priv = hcd_to_octeon(hcd);
+ struct octeon_hcd *usb = hcd_to_octeon(hcd);
unsigned long flags;
usb_remove_hcd(hcd);
- spin_lock_irqsave(&priv->lock, flags);
- status = cvmx_usb_shutdown(&priv->usb);
- spin_unlock_irqrestore(&priv->lock, flags);
+ spin_lock_irqsave(&usb->lock, flags);
+ status = cvmx_usb_shutdown(usb);
+ spin_unlock_irqrestore(&usb->lock, flags);
if (status)
dev_dbg(dev, "USB shutdown failed with %d\n", status);
@@ -3754,7 +3707,7 @@
static struct platform_driver octeon_usb_driver = {
.driver = {
- .name = "OcteonUSB",
+ .name = "octeon-hcd",
.of_match_table = octeon_usb_match,
},
.probe = octeon_usb_probe,
diff --git a/drivers/staging/octeon-usb/octeon-hcd.h b/drivers/staging/octeon-usb/octeon-hcd.h
index 70e7fa5..3353aefe 100644
--- a/drivers/staging/octeon-usb/octeon-hcd.h
+++ b/drivers/staging/octeon-usb/octeon-hcd.h
@@ -110,7 +110,7 @@
* initialization. Do not change this register after the initial programming.
*/
union cvmx_usbcx_gahbcfg {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_gahbcfg_s
* @ptxfemplvl: Periodic TxFIFO Empty Level (PTxFEmpLvl)
@@ -145,13 +145,13 @@
* * 1'b1: Unmask the interrupt assertion to the application.
*/
struct cvmx_usbcx_gahbcfg_s {
- __BITFIELD_FIELD(uint32_t reserved_9_31 : 23,
- __BITFIELD_FIELD(uint32_t ptxfemplvl : 1,
- __BITFIELD_FIELD(uint32_t nptxfemplvl : 1,
- __BITFIELD_FIELD(uint32_t reserved_6_6 : 1,
- __BITFIELD_FIELD(uint32_t dmaen : 1,
- __BITFIELD_FIELD(uint32_t hbstlen : 4,
- __BITFIELD_FIELD(uint32_t glblintrmsk : 1,
+ __BITFIELD_FIELD(u32 reserved_9_31 : 23,
+ __BITFIELD_FIELD(u32 ptxfemplvl : 1,
+ __BITFIELD_FIELD(u32 nptxfemplvl : 1,
+ __BITFIELD_FIELD(u32 reserved_6_6 : 1,
+ __BITFIELD_FIELD(u32 dmaen : 1,
+ __BITFIELD_FIELD(u32 hbstlen : 4,
+ __BITFIELD_FIELD(u32 glblintrmsk : 1,
;)))))))
} s;
};
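For readers new to these headers: __BITFIELD_FIELD comes from the MIPS asm/bitfield.h and emits the listed bitfields MSB-first on big-endian but in reverse order on little-endian, so one declaration matches the register layout either way. A hedged reconstruction of the idea:

/* Sketch of the macro (see arch/mips/include/uapi/asm/bitfield.h). */
#ifdef __BIG_ENDIAN
#define __BITFIELD_FIELD(field, more)   field; more
#else
#define __BITFIELD_FIELD(field, more)   more field;
#endif

/*
 * The nested invocations above therefore expand to the field list in
 * declaration order on big-endian and reversed on little-endian,
 * keeping each named bit range where the hardware defines it.
 */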
@@ -164,7 +164,7 @@
* This register contains the configuration options of the O2P USB core.
*/
union cvmx_usbcx_ghwcfg3 {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_ghwcfg3_s
* @dfifodepth: DFIFO Depth (DfifoDepth)
@@ -212,16 +212,16 @@
* * Others: Reserved
*/
struct cvmx_usbcx_ghwcfg3_s {
- __BITFIELD_FIELD(uint32_t dfifodepth : 16,
- __BITFIELD_FIELD(uint32_t reserved_13_15 : 3,
- __BITFIELD_FIELD(uint32_t ahbphysync : 1,
- __BITFIELD_FIELD(uint32_t rsttype : 1,
- __BITFIELD_FIELD(uint32_t optfeature : 1,
- __BITFIELD_FIELD(uint32_t vendor_control_interface_support : 1,
- __BITFIELD_FIELD(uint32_t i2c_selection : 1,
- __BITFIELD_FIELD(uint32_t otgen : 1,
- __BITFIELD_FIELD(uint32_t pktsizewidth : 3,
- __BITFIELD_FIELD(uint32_t xfersizewidth : 4,
+ __BITFIELD_FIELD(u32 dfifodepth : 16,
+ __BITFIELD_FIELD(u32 reserved_13_15 : 3,
+ __BITFIELD_FIELD(u32 ahbphysync : 1,
+ __BITFIELD_FIELD(u32 rsttype : 1,
+ __BITFIELD_FIELD(u32 optfeature : 1,
+ __BITFIELD_FIELD(u32 vendor_control_interface_support : 1,
+ __BITFIELD_FIELD(u32 i2c_selection : 1,
+ __BITFIELD_FIELD(u32 otgen : 1,
+ __BITFIELD_FIELD(u32 pktsizewidth : 3,
+ __BITFIELD_FIELD(u32 xfersizewidth : 4,
;))))))))))
} s;
};
@@ -238,7 +238,7 @@
* Mask interrupt: 1'b0, Unmask interrupt: 1'b1
*/
union cvmx_usbcx_gintmsk {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_gintmsk_s
* @wkupintmsk: Resume/Remote Wakeup Detected Interrupt Mask
@@ -279,38 +279,38 @@
* @modemismsk: Mode Mismatch Interrupt Mask (ModeMisMsk)
*/
struct cvmx_usbcx_gintmsk_s {
- __BITFIELD_FIELD(uint32_t wkupintmsk : 1,
- __BITFIELD_FIELD(uint32_t sessreqintmsk : 1,
- __BITFIELD_FIELD(uint32_t disconnintmsk : 1,
- __BITFIELD_FIELD(uint32_t conidstschngmsk : 1,
- __BITFIELD_FIELD(uint32_t reserved_27_27 : 1,
- __BITFIELD_FIELD(uint32_t ptxfempmsk : 1,
- __BITFIELD_FIELD(uint32_t hchintmsk : 1,
- __BITFIELD_FIELD(uint32_t prtintmsk : 1,
- __BITFIELD_FIELD(uint32_t reserved_23_23 : 1,
- __BITFIELD_FIELD(uint32_t fetsuspmsk : 1,
- __BITFIELD_FIELD(uint32_t incomplpmsk : 1,
- __BITFIELD_FIELD(uint32_t incompisoinmsk : 1,
- __BITFIELD_FIELD(uint32_t oepintmsk : 1,
- __BITFIELD_FIELD(uint32_t inepintmsk : 1,
- __BITFIELD_FIELD(uint32_t epmismsk : 1,
- __BITFIELD_FIELD(uint32_t reserved_16_16 : 1,
- __BITFIELD_FIELD(uint32_t eopfmsk : 1,
- __BITFIELD_FIELD(uint32_t isooutdropmsk : 1,
- __BITFIELD_FIELD(uint32_t enumdonemsk : 1,
- __BITFIELD_FIELD(uint32_t usbrstmsk : 1,
- __BITFIELD_FIELD(uint32_t usbsuspmsk : 1,
- __BITFIELD_FIELD(uint32_t erlysuspmsk : 1,
- __BITFIELD_FIELD(uint32_t i2cint : 1,
- __BITFIELD_FIELD(uint32_t ulpickintmsk : 1,
- __BITFIELD_FIELD(uint32_t goutnakeffmsk : 1,
- __BITFIELD_FIELD(uint32_t ginnakeffmsk : 1,
- __BITFIELD_FIELD(uint32_t nptxfempmsk : 1,
- __BITFIELD_FIELD(uint32_t rxflvlmsk : 1,
- __BITFIELD_FIELD(uint32_t sofmsk : 1,
- __BITFIELD_FIELD(uint32_t otgintmsk : 1,
- __BITFIELD_FIELD(uint32_t modemismsk : 1,
- __BITFIELD_FIELD(uint32_t reserved_0_0 : 1,
+ __BITFIELD_FIELD(u32 wkupintmsk : 1,
+ __BITFIELD_FIELD(u32 sessreqintmsk : 1,
+ __BITFIELD_FIELD(u32 disconnintmsk : 1,
+ __BITFIELD_FIELD(u32 conidstschngmsk : 1,
+ __BITFIELD_FIELD(u32 reserved_27_27 : 1,
+ __BITFIELD_FIELD(u32 ptxfempmsk : 1,
+ __BITFIELD_FIELD(u32 hchintmsk : 1,
+ __BITFIELD_FIELD(u32 prtintmsk : 1,
+ __BITFIELD_FIELD(u32 reserved_23_23 : 1,
+ __BITFIELD_FIELD(u32 fetsuspmsk : 1,
+ __BITFIELD_FIELD(u32 incomplpmsk : 1,
+ __BITFIELD_FIELD(u32 incompisoinmsk : 1,
+ __BITFIELD_FIELD(u32 oepintmsk : 1,
+ __BITFIELD_FIELD(u32 inepintmsk : 1,
+ __BITFIELD_FIELD(u32 epmismsk : 1,
+ __BITFIELD_FIELD(u32 reserved_16_16 : 1,
+ __BITFIELD_FIELD(u32 eopfmsk : 1,
+ __BITFIELD_FIELD(u32 isooutdropmsk : 1,
+ __BITFIELD_FIELD(u32 enumdonemsk : 1,
+ __BITFIELD_FIELD(u32 usbrstmsk : 1,
+ __BITFIELD_FIELD(u32 usbsuspmsk : 1,
+ __BITFIELD_FIELD(u32 erlysuspmsk : 1,
+ __BITFIELD_FIELD(u32 i2cint : 1,
+ __BITFIELD_FIELD(u32 ulpickintmsk : 1,
+ __BITFIELD_FIELD(u32 goutnakeffmsk : 1,
+ __BITFIELD_FIELD(u32 ginnakeffmsk : 1,
+ __BITFIELD_FIELD(u32 nptxfempmsk : 1,
+ __BITFIELD_FIELD(u32 rxflvlmsk : 1,
+ __BITFIELD_FIELD(u32 sofmsk : 1,
+ __BITFIELD_FIELD(u32 otgintmsk : 1,
+ __BITFIELD_FIELD(u32 modemismsk : 1,
+ __BITFIELD_FIELD(u32 reserved_0_0 : 1,
;))))))))))))))))))))))))))))))))
} s;
};
@@ -331,7 +331,7 @@
* automatically.
*/
union cvmx_usbcx_gintsts {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_gintsts_s
* @wkupint: Resume/Remote Wakeup Detected Interrupt (WkUpInt)
@@ -509,38 +509,38 @@
* * 1'b1: Host mode
*/
struct cvmx_usbcx_gintsts_s {
- __BITFIELD_FIELD(uint32_t wkupint : 1,
- __BITFIELD_FIELD(uint32_t sessreqint : 1,
- __BITFIELD_FIELD(uint32_t disconnint : 1,
- __BITFIELD_FIELD(uint32_t conidstschng : 1,
- __BITFIELD_FIELD(uint32_t reserved_27_27 : 1,
- __BITFIELD_FIELD(uint32_t ptxfemp : 1,
- __BITFIELD_FIELD(uint32_t hchint : 1,
- __BITFIELD_FIELD(uint32_t prtint : 1,
- __BITFIELD_FIELD(uint32_t reserved_23_23 : 1,
- __BITFIELD_FIELD(uint32_t fetsusp : 1,
- __BITFIELD_FIELD(uint32_t incomplp : 1,
- __BITFIELD_FIELD(uint32_t incompisoin : 1,
- __BITFIELD_FIELD(uint32_t oepint : 1,
- __BITFIELD_FIELD(uint32_t iepint : 1,
- __BITFIELD_FIELD(uint32_t epmis : 1,
- __BITFIELD_FIELD(uint32_t reserved_16_16 : 1,
- __BITFIELD_FIELD(uint32_t eopf : 1,
- __BITFIELD_FIELD(uint32_t isooutdrop : 1,
- __BITFIELD_FIELD(uint32_t enumdone : 1,
- __BITFIELD_FIELD(uint32_t usbrst : 1,
- __BITFIELD_FIELD(uint32_t usbsusp : 1,
- __BITFIELD_FIELD(uint32_t erlysusp : 1,
- __BITFIELD_FIELD(uint32_t i2cint : 1,
- __BITFIELD_FIELD(uint32_t ulpickint : 1,
- __BITFIELD_FIELD(uint32_t goutnakeff : 1,
- __BITFIELD_FIELD(uint32_t ginnakeff : 1,
- __BITFIELD_FIELD(uint32_t nptxfemp : 1,
- __BITFIELD_FIELD(uint32_t rxflvl : 1,
- __BITFIELD_FIELD(uint32_t sof : 1,
- __BITFIELD_FIELD(uint32_t otgint : 1,
- __BITFIELD_FIELD(uint32_t modemis : 1,
- __BITFIELD_FIELD(uint32_t curmod : 1,
+ __BITFIELD_FIELD(u32 wkupint : 1,
+ __BITFIELD_FIELD(u32 sessreqint : 1,
+ __BITFIELD_FIELD(u32 disconnint : 1,
+ __BITFIELD_FIELD(u32 conidstschng : 1,
+ __BITFIELD_FIELD(u32 reserved_27_27 : 1,
+ __BITFIELD_FIELD(u32 ptxfemp : 1,
+ __BITFIELD_FIELD(u32 hchint : 1,
+ __BITFIELD_FIELD(u32 prtint : 1,
+ __BITFIELD_FIELD(u32 reserved_23_23 : 1,
+ __BITFIELD_FIELD(u32 fetsusp : 1,
+ __BITFIELD_FIELD(u32 incomplp : 1,
+ __BITFIELD_FIELD(u32 incompisoin : 1,
+ __BITFIELD_FIELD(u32 oepint : 1,
+ __BITFIELD_FIELD(u32 iepint : 1,
+ __BITFIELD_FIELD(u32 epmis : 1,
+ __BITFIELD_FIELD(u32 reserved_16_16 : 1,
+ __BITFIELD_FIELD(u32 eopf : 1,
+ __BITFIELD_FIELD(u32 isooutdrop : 1,
+ __BITFIELD_FIELD(u32 enumdone : 1,
+ __BITFIELD_FIELD(u32 usbrst : 1,
+ __BITFIELD_FIELD(u32 usbsusp : 1,
+ __BITFIELD_FIELD(u32 erlysusp : 1,
+ __BITFIELD_FIELD(u32 i2cint : 1,
+ __BITFIELD_FIELD(u32 ulpickint : 1,
+ __BITFIELD_FIELD(u32 goutnakeff : 1,
+ __BITFIELD_FIELD(u32 ginnakeff : 1,
+ __BITFIELD_FIELD(u32 nptxfemp : 1,
+ __BITFIELD_FIELD(u32 rxflvl : 1,
+ __BITFIELD_FIELD(u32 sof : 1,
+ __BITFIELD_FIELD(u32 otgint : 1,
+ __BITFIELD_FIELD(u32 modemis : 1,
+ __BITFIELD_FIELD(u32 curmod : 1,
;))))))))))))))))))))))))))))))))
} s;
};
@@ -554,7 +554,7 @@
* Non-Periodic TxFIFO.
*/
union cvmx_usbcx_gnptxfsiz {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_gnptxfsiz_s
* @nptxfdep: Non-Periodic TxFIFO Depth (NPTxFDep)
@@ -566,8 +566,8 @@
* Transmit FIFO RAM.
*/
struct cvmx_usbcx_gnptxfsiz_s {
- __BITFIELD_FIELD(uint32_t nptxfdep : 16,
- __BITFIELD_FIELD(uint32_t nptxfstaddr : 16,
+ __BITFIELD_FIELD(u32 nptxfdep : 16,
+ __BITFIELD_FIELD(u32 nptxfstaddr : 16,
;))
} s;
};
@@ -581,7 +581,7 @@
* Non-Periodic TxFIFO and the Non-Periodic Transmit Request Queue.
*/
union cvmx_usbcx_gnptxsts {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_gnptxsts_s
* @nptxqtop: Top of the Non-Periodic Transmit Request Queue (NPTxQTop)
@@ -617,10 +617,10 @@
* * Others: Reserved
*/
struct cvmx_usbcx_gnptxsts_s {
- __BITFIELD_FIELD(uint32_t reserved_31_31 : 1,
- __BITFIELD_FIELD(uint32_t nptxqtop : 7,
- __BITFIELD_FIELD(uint32_t nptxqspcavail : 8,
- __BITFIELD_FIELD(uint32_t nptxfspcavail : 16,
+ __BITFIELD_FIELD(u32 reserved_31_31 : 1,
+ __BITFIELD_FIELD(u32 nptxqtop : 7,
+ __BITFIELD_FIELD(u32 nptxqspcavail : 8,
+ __BITFIELD_FIELD(u32 nptxfspcavail : 16,
;))))
} s;
};
@@ -634,7 +634,7 @@
* the core.
*/
union cvmx_usbcx_grstctl {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_grstctl_s
* @ahbidle: AHB Master Idle (AHBIdle)
@@ -739,16 +739,16 @@
* selected, the PHY domain has to be reset for proper operation.
*/
struct cvmx_usbcx_grstctl_s {
- __BITFIELD_FIELD(uint32_t ahbidle : 1,
- __BITFIELD_FIELD(uint32_t dmareq : 1,
- __BITFIELD_FIELD(uint32_t reserved_11_29 : 19,
- __BITFIELD_FIELD(uint32_t txfnum : 5,
- __BITFIELD_FIELD(uint32_t txfflsh : 1,
- __BITFIELD_FIELD(uint32_t rxfflsh : 1,
- __BITFIELD_FIELD(uint32_t intknqflsh : 1,
- __BITFIELD_FIELD(uint32_t frmcntrrst : 1,
- __BITFIELD_FIELD(uint32_t hsftrst : 1,
- __BITFIELD_FIELD(uint32_t csftrst : 1,
+ __BITFIELD_FIELD(u32 ahbidle : 1,
+ __BITFIELD_FIELD(u32 dmareq : 1,
+ __BITFIELD_FIELD(u32 reserved_11_29 : 19,
+ __BITFIELD_FIELD(u32 txfnum : 5,
+ __BITFIELD_FIELD(u32 txfflsh : 1,
+ __BITFIELD_FIELD(u32 rxfflsh : 1,
+ __BITFIELD_FIELD(u32 intknqflsh : 1,
+ __BITFIELD_FIELD(u32 frmcntrrst : 1,
+ __BITFIELD_FIELD(u32 hsftrst : 1,
+ __BITFIELD_FIELD(u32 csftrst : 1,
;))))))))))
} s;
};
@@ -762,7 +762,7 @@
* RxFIFO.
*/
union cvmx_usbcx_grxfsiz {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_grxfsiz_s
* @rxfdep: RxFIFO Depth (RxFDep)
@@ -771,8 +771,8 @@
* * Maximum value is 32768
*/
struct cvmx_usbcx_grxfsiz_s {
- __BITFIELD_FIELD(uint32_t reserved_16_31 : 16,
- __BITFIELD_FIELD(uint32_t rxfdep : 16,
+ __BITFIELD_FIELD(u32 reserved_16_31 : 16,
+ __BITFIELD_FIELD(u32 rxfdep : 16,
;))
} s;
};
@@ -792,7 +792,7 @@
* hardware.
*/
union cvmx_usbcx_grxstsph {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_grxstsph_s
* @pktsts: Packet Status (PktSts)
@@ -814,11 +814,11 @@
* packet belongs.
*/
struct cvmx_usbcx_grxstsph_s {
- __BITFIELD_FIELD(uint32_t reserved_21_31 : 11,
- __BITFIELD_FIELD(uint32_t pktsts : 4,
- __BITFIELD_FIELD(uint32_t dpid : 2,
- __BITFIELD_FIELD(uint32_t bcnt : 11,
- __BITFIELD_FIELD(uint32_t chnum : 4,
+ __BITFIELD_FIELD(u32 reserved_21_31 : 11,
+ __BITFIELD_FIELD(u32 pktsts : 4,
+ __BITFIELD_FIELD(u32 dpid : 2,
+ __BITFIELD_FIELD(u32 bcnt : 11,
+ __BITFIELD_FIELD(u32 chnum : 4,
;)))))
} s;
};
@@ -835,7 +835,7 @@
* to this register after the initial programming.
*/
union cvmx_usbcx_gusbcfg {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_gusbcfg_s
* @otgi2csel: UTMIFS or I2C Interface Select (OtgI2CSel)
@@ -895,19 +895,19 @@
* * One 48-MHz PHY clock = 0.25 bit times
*/
struct cvmx_usbcx_gusbcfg_s {
- __BITFIELD_FIELD(uint32_t reserved_17_31 : 15,
- __BITFIELD_FIELD(uint32_t otgi2csel : 1,
- __BITFIELD_FIELD(uint32_t phylpwrclksel : 1,
- __BITFIELD_FIELD(uint32_t reserved_14_14 : 1,
- __BITFIELD_FIELD(uint32_t usbtrdtim : 4,
- __BITFIELD_FIELD(uint32_t hnpcap : 1,
- __BITFIELD_FIELD(uint32_t srpcap : 1,
- __BITFIELD_FIELD(uint32_t ddrsel : 1,
- __BITFIELD_FIELD(uint32_t physel : 1,
- __BITFIELD_FIELD(uint32_t fsintf : 1,
- __BITFIELD_FIELD(uint32_t ulpi_utmi_sel : 1,
- __BITFIELD_FIELD(uint32_t phyif : 1,
- __BITFIELD_FIELD(uint32_t toutcal : 3,
+ __BITFIELD_FIELD(u32 reserved_17_31 : 15,
+ __BITFIELD_FIELD(u32 otgi2csel : 1,
+ __BITFIELD_FIELD(u32 phylpwrclksel : 1,
+ __BITFIELD_FIELD(u32 reserved_14_14 : 1,
+ __BITFIELD_FIELD(u32 usbtrdtim : 4,
+ __BITFIELD_FIELD(u32 hnpcap : 1,
+ __BITFIELD_FIELD(u32 srpcap : 1,
+ __BITFIELD_FIELD(u32 ddrsel : 1,
+ __BITFIELD_FIELD(u32 physel : 1,
+ __BITFIELD_FIELD(u32 fsintf : 1,
+ __BITFIELD_FIELD(u32 ulpi_utmi_sel : 1,
+ __BITFIELD_FIELD(u32 phyif : 1,
+ __BITFIELD_FIELD(u32 toutcal : 3,
;)))))))))))))
} s;
};
@@ -925,15 +925,15 @@
* in the corresponding Host Channel-n Interrupt register.
*/
union cvmx_usbcx_haint {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_haint_s
* @haint: Channel Interrupts (HAINT)
* One bit per channel: Bit 0 for Channel 0, bit 15 for Channel 15
*/
struct cvmx_usbcx_haint_s {
- __BITFIELD_FIELD(uint32_t reserved_16_31 : 16,
- __BITFIELD_FIELD(uint32_t haint : 16,
+ __BITFIELD_FIELD(u32 reserved_16_31 : 16,
+ __BITFIELD_FIELD(u32 haint : 16,
;))
} s;
};
@@ -950,15 +950,15 @@
* Mask interrupt: 1'b0 Unmask interrupt: 1'b1
*/
union cvmx_usbcx_haintmsk {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_haintmsk_s
* @haintmsk: Channel Interrupt Mask (HAINTMsk)
* One bit per channel: Bit 0 for channel 0, bit 15 for channel 15
*/
struct cvmx_usbcx_haintmsk_s {
- __BITFIELD_FIELD(uint32_t reserved_16_31 : 16,
- __BITFIELD_FIELD(uint32_t haintmsk : 16,
+ __BITFIELD_FIELD(u32 reserved_16_31 : 16,
+ __BITFIELD_FIELD(u32 haintmsk : 16,
;))
} s;
};
@@ -970,7 +970,7 @@
*
*/
union cvmx_usbcx_hccharx {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hccharx_s
* @chena: Channel Enable (ChEna)
@@ -1028,17 +1028,17 @@
* Indicates the maximum packet size of the associated endpoint.
*/
struct cvmx_usbcx_hccharx_s {
- __BITFIELD_FIELD(uint32_t chena : 1,
- __BITFIELD_FIELD(uint32_t chdis : 1,
- __BITFIELD_FIELD(uint32_t oddfrm : 1,
- __BITFIELD_FIELD(uint32_t devaddr : 7,
- __BITFIELD_FIELD(uint32_t ec : 2,
- __BITFIELD_FIELD(uint32_t eptype : 2,
- __BITFIELD_FIELD(uint32_t lspddev : 1,
- __BITFIELD_FIELD(uint32_t reserved_16_16 : 1,
- __BITFIELD_FIELD(uint32_t epdir : 1,
- __BITFIELD_FIELD(uint32_t epnum : 4,
- __BITFIELD_FIELD(uint32_t mps : 11,
+ __BITFIELD_FIELD(u32 chena : 1,
+ __BITFIELD_FIELD(u32 chdis : 1,
+ __BITFIELD_FIELD(u32 oddfrm : 1,
+ __BITFIELD_FIELD(u32 devaddr : 7,
+ __BITFIELD_FIELD(u32 ec : 2,
+ __BITFIELD_FIELD(u32 eptype : 2,
+ __BITFIELD_FIELD(u32 lspddev : 1,
+ __BITFIELD_FIELD(u32 reserved_16_16 : 1,
+ __BITFIELD_FIELD(u32 epdir : 1,
+ __BITFIELD_FIELD(u32 epnum : 4,
+ __BITFIELD_FIELD(u32 mps : 11,
;)))))))))))
} s;
};
@@ -1052,7 +1052,7 @@
* register after initializing the host.
*/
union cvmx_usbcx_hcfg {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hcfg_s
* @fslssupp: FS- and LS-Only Support (FSLSSupp)
@@ -1084,9 +1084,9 @@
* * 2'b11: Reserved
*/
struct cvmx_usbcx_hcfg_s {
- __BITFIELD_FIELD(uint32_t reserved_3_31 : 29,
- __BITFIELD_FIELD(uint32_t fslssupp : 1,
- __BITFIELD_FIELD(uint32_t fslspclksel : 2,
+ __BITFIELD_FIELD(u32 reserved_3_31 : 29,
+ __BITFIELD_FIELD(u32 fslssupp : 1,
+ __BITFIELD_FIELD(u32 fslspclksel : 2,
;)))
} s;
};
@@ -1106,7 +1106,7 @@
* HAINT and GINTSTS registers.
*/
union cvmx_usbcx_hcintx {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hcintx_s
* @datatglerr: Data Toggle Error (DataTglErr)
@@ -1126,18 +1126,18 @@
* Transfer completed normally without any errors.
*/
struct cvmx_usbcx_hcintx_s {
- __BITFIELD_FIELD(uint32_t reserved_11_31 : 21,
- __BITFIELD_FIELD(uint32_t datatglerr : 1,
- __BITFIELD_FIELD(uint32_t frmovrun : 1,
- __BITFIELD_FIELD(uint32_t bblerr : 1,
- __BITFIELD_FIELD(uint32_t xacterr : 1,
- __BITFIELD_FIELD(uint32_t nyet : 1,
- __BITFIELD_FIELD(uint32_t ack : 1,
- __BITFIELD_FIELD(uint32_t nak : 1,
- __BITFIELD_FIELD(uint32_t stall : 1,
- __BITFIELD_FIELD(uint32_t ahberr : 1,
- __BITFIELD_FIELD(uint32_t chhltd : 1,
- __BITFIELD_FIELD(uint32_t xfercompl : 1,
+ __BITFIELD_FIELD(u32 reserved_11_31 : 21,
+ __BITFIELD_FIELD(u32 datatglerr : 1,
+ __BITFIELD_FIELD(u32 frmovrun : 1,
+ __BITFIELD_FIELD(u32 bblerr : 1,
+ __BITFIELD_FIELD(u32 xacterr : 1,
+ __BITFIELD_FIELD(u32 nyet : 1,
+ __BITFIELD_FIELD(u32 ack : 1,
+ __BITFIELD_FIELD(u32 nak : 1,
+ __BITFIELD_FIELD(u32 stall : 1,
+ __BITFIELD_FIELD(u32 ahberr : 1,
+ __BITFIELD_FIELD(u32 chhltd : 1,
+ __BITFIELD_FIELD(u32 xfercompl : 1,
;))))))))))))
} s;
};
@@ -1152,7 +1152,7 @@
* Mask interrupt: 1'b0 Unmask interrupt: 1'b1
*/
union cvmx_usbcx_hcintmskx {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hcintmskx_s
* @datatglerrmsk: Data Toggle Error Mask (DataTglErrMsk)
@@ -1168,18 +1168,18 @@
* @xfercomplmsk: Transfer Completed Mask (XferComplMsk)
*/
struct cvmx_usbcx_hcintmskx_s {
- __BITFIELD_FIELD(uint32_t reserved_11_31 : 21,
- __BITFIELD_FIELD(uint32_t datatglerrmsk : 1,
- __BITFIELD_FIELD(uint32_t frmovrunmsk : 1,
- __BITFIELD_FIELD(uint32_t bblerrmsk : 1,
- __BITFIELD_FIELD(uint32_t xacterrmsk : 1,
- __BITFIELD_FIELD(uint32_t nyetmsk : 1,
- __BITFIELD_FIELD(uint32_t ackmsk : 1,
- __BITFIELD_FIELD(uint32_t nakmsk : 1,
- __BITFIELD_FIELD(uint32_t stallmsk : 1,
- __BITFIELD_FIELD(uint32_t ahberrmsk : 1,
- __BITFIELD_FIELD(uint32_t chhltdmsk : 1,
- __BITFIELD_FIELD(uint32_t xfercomplmsk : 1,
+ __BITFIELD_FIELD(u32 reserved_11_31 : 21,
+ __BITFIELD_FIELD(u32 datatglerrmsk : 1,
+ __BITFIELD_FIELD(u32 frmovrunmsk : 1,
+ __BITFIELD_FIELD(u32 bblerrmsk : 1,
+ __BITFIELD_FIELD(u32 xacterrmsk : 1,
+ __BITFIELD_FIELD(u32 nyetmsk : 1,
+ __BITFIELD_FIELD(u32 ackmsk : 1,
+ __BITFIELD_FIELD(u32 nakmsk : 1,
+ __BITFIELD_FIELD(u32 stallmsk : 1,
+ __BITFIELD_FIELD(u32 ahberrmsk : 1,
+ __BITFIELD_FIELD(u32 chhltdmsk : 1,
+ __BITFIELD_FIELD(u32 xfercomplmsk : 1,
;))))))))))))
} s;
};
@@ -1191,7 +1191,7 @@
*
*/
union cvmx_usbcx_hcspltx {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hcspltx_s
* @spltena: Split Enable (SpltEna)
@@ -1219,12 +1219,12 @@
* translator.
*/
struct cvmx_usbcx_hcspltx_s {
- __BITFIELD_FIELD(uint32_t spltena : 1,
- __BITFIELD_FIELD(uint32_t reserved_17_30 : 14,
- __BITFIELD_FIELD(uint32_t compsplt : 1,
- __BITFIELD_FIELD(uint32_t xactpos : 2,
- __BITFIELD_FIELD(uint32_t hubaddr : 7,
- __BITFIELD_FIELD(uint32_t prtaddr : 7,
+ __BITFIELD_FIELD(u32 spltena : 1,
+ __BITFIELD_FIELD(u32 reserved_17_30 : 14,
+ __BITFIELD_FIELD(u32 compsplt : 1,
+ __BITFIELD_FIELD(u32 xactpos : 2,
+ __BITFIELD_FIELD(u32 hubaddr : 7,
+ __BITFIELD_FIELD(u32 prtaddr : 7,
;))))))
} s;
};
@@ -1236,7 +1236,7 @@
*
*/
union cvmx_usbcx_hctsizx {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hctsizx_s
* @dopng: Do Ping (DoPng)
@@ -1265,10 +1265,10 @@
* size for IN transactions (periodic and non-periodic).
*/
struct cvmx_usbcx_hctsizx_s {
- __BITFIELD_FIELD(uint32_t dopng : 1,
- __BITFIELD_FIELD(uint32_t pid : 2,
- __BITFIELD_FIELD(uint32_t pktcnt : 10,
- __BITFIELD_FIELD(uint32_t xfersize : 19,
+ __BITFIELD_FIELD(u32 dopng : 1,
+ __BITFIELD_FIELD(u32 pid : 2,
+ __BITFIELD_FIELD(u32 pktcnt : 10,
+ __BITFIELD_FIELD(u32 xfersize : 19,
;))))
} s;
};
@@ -1282,7 +1282,7 @@
* which the O2P USB core has enumerated.
*/
union cvmx_usbcx_hfir {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hfir_s
* @frint: Frame Interval (FrInt)
@@ -1303,8 +1303,8 @@
* * 1 ms (PHY clock frequency for FS/LS)
*/
struct cvmx_usbcx_hfir_s {
- __BITFIELD_FIELD(uint32_t reserved_16_31 : 16,
- __BITFIELD_FIELD(uint32_t frint : 16,
+ __BITFIELD_FIELD(u32 reserved_16_31 : 16,
+ __BITFIELD_FIELD(u32 frint : 16,
;))
} s;
};
@@ -1319,7 +1319,7 @@
* in the current (micro)frame.
*/
union cvmx_usbcx_hfnum {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hfnum_s
* @frrem: Frame Time Remaining (FrRem)
@@ -1333,8 +1333,8 @@
* USB, and is reset to 0 when it reaches 16'h3FFF.
*/
struct cvmx_usbcx_hfnum_s {
- __BITFIELD_FIELD(uint32_t frrem : 16,
- __BITFIELD_FIELD(uint32_t frnum : 16,
+ __BITFIELD_FIELD(u32 frrem : 16,
+ __BITFIELD_FIELD(u32 frnum : 16,
;))
} s;
};
@@ -1355,7 +1355,7 @@
* the application must write a 1 to the bit to clear the interrupt.
*/
union cvmx_usbcx_hprt {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hprt_s
* @prtspd: Port Speed (PrtSpd)
@@ -1461,21 +1461,21 @@
* * 1: A device is attached to the port.
*/
struct cvmx_usbcx_hprt_s {
- __BITFIELD_FIELD(uint32_t reserved_19_31 : 13,
- __BITFIELD_FIELD(uint32_t prtspd : 2,
- __BITFIELD_FIELD(uint32_t prttstctl : 4,
- __BITFIELD_FIELD(uint32_t prtpwr : 1,
- __BITFIELD_FIELD(uint32_t prtlnsts : 2,
- __BITFIELD_FIELD(uint32_t reserved_9_9 : 1,
- __BITFIELD_FIELD(uint32_t prtrst : 1,
- __BITFIELD_FIELD(uint32_t prtsusp : 1,
- __BITFIELD_FIELD(uint32_t prtres : 1,
- __BITFIELD_FIELD(uint32_t prtovrcurrchng : 1,
- __BITFIELD_FIELD(uint32_t prtovrcurract : 1,
- __BITFIELD_FIELD(uint32_t prtenchng : 1,
- __BITFIELD_FIELD(uint32_t prtena : 1,
- __BITFIELD_FIELD(uint32_t prtconndet : 1,
- __BITFIELD_FIELD(uint32_t prtconnsts : 1,
+ __BITFIELD_FIELD(u32 reserved_19_31 : 13,
+ __BITFIELD_FIELD(u32 prtspd : 2,
+ __BITFIELD_FIELD(u32 prttstctl : 4,
+ __BITFIELD_FIELD(u32 prtpwr : 1,
+ __BITFIELD_FIELD(u32 prtlnsts : 2,
+ __BITFIELD_FIELD(u32 reserved_9_9 : 1,
+ __BITFIELD_FIELD(u32 prtrst : 1,
+ __BITFIELD_FIELD(u32 prtsusp : 1,
+ __BITFIELD_FIELD(u32 prtres : 1,
+ __BITFIELD_FIELD(u32 prtovrcurrchng : 1,
+ __BITFIELD_FIELD(u32 prtovrcurract : 1,
+ __BITFIELD_FIELD(u32 prtenchng : 1,
+ __BITFIELD_FIELD(u32 prtena : 1,
+ __BITFIELD_FIELD(u32 prtconndet : 1,
+ __BITFIELD_FIELD(u32 prtconnsts : 1,
;)))))))))))))))
} s;
};
@@ -1489,7 +1489,7 @@
* TxFIFO, as shown in Figures 310 and 311.
*/
union cvmx_usbcx_hptxfsiz {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hptxfsiz_s
* @ptxfsize: Host Periodic TxFIFO Depth (PTxFSize)
@@ -1499,8 +1499,8 @@
* @ptxfstaddr: Host Periodic TxFIFO Start Address (PTxFStAddr)
*/
struct cvmx_usbcx_hptxfsiz_s {
- __BITFIELD_FIELD(uint32_t ptxfsize : 16,
- __BITFIELD_FIELD(uint32_t ptxfstaddr : 16,
+ __BITFIELD_FIELD(u32 ptxfsize : 16,
+ __BITFIELD_FIELD(u32 ptxfstaddr : 16,
;))
} s;
};
@@ -1514,7 +1514,7 @@
* TxFIFO and the Periodic Transmit Request Queue
*/
union cvmx_usbcx_hptxsts {
- uint32_t u32;
+ u32 u32;
/**
* struct cvmx_usbcx_hptxsts_s
* @ptxqtop: Top of the Periodic Transmit Request Queue (PTxQTop)
@@ -1555,9 +1555,9 @@
* * Others: Reserved
*/
struct cvmx_usbcx_hptxsts_s {
- __BITFIELD_FIELD(uint32_t ptxqtop : 8,
- __BITFIELD_FIELD(uint32_t ptxqspcavail : 8,
- __BITFIELD_FIELD(uint32_t ptxfspcavail : 16,
+ __BITFIELD_FIELD(u32 ptxqtop : 8,
+ __BITFIELD_FIELD(u32 ptxqspcavail : 8,
+ __BITFIELD_FIELD(u32 ptxfspcavail : 16,
;)))
} s;
};
@@ -1571,7 +1571,7 @@
* hreset and phy_rst signals.
*/
union cvmx_usbnx_clk_ctl {
- uint64_t u64;
+ u64 u64;
/**
* struct cvmx_usbnx_clk_ctl_s
* @divide2: The 'hclk' used by the USB subsystem is derived
@@ -1661,21 +1661,21 @@
* until AFTER this field is set and then read.
*/
struct cvmx_usbnx_clk_ctl_s {
- __BITFIELD_FIELD(uint64_t reserved_20_63 : 44,
- __BITFIELD_FIELD(uint64_t divide2 : 2,
- __BITFIELD_FIELD(uint64_t hclk_rst : 1,
- __BITFIELD_FIELD(uint64_t p_x_on : 1,
- __BITFIELD_FIELD(uint64_t p_rtype : 2,
- __BITFIELD_FIELD(uint64_t p_com_on : 1,
- __BITFIELD_FIELD(uint64_t p_c_sel : 2,
- __BITFIELD_FIELD(uint64_t cdiv_byp : 1,
- __BITFIELD_FIELD(uint64_t sd_mode : 2,
- __BITFIELD_FIELD(uint64_t s_bist : 1,
- __BITFIELD_FIELD(uint64_t por : 1,
- __BITFIELD_FIELD(uint64_t enable : 1,
- __BITFIELD_FIELD(uint64_t prst : 1,
- __BITFIELD_FIELD(uint64_t hrst : 1,
- __BITFIELD_FIELD(uint64_t divide : 3,
+ __BITFIELD_FIELD(u64 reserved_20_63 : 44,
+ __BITFIELD_FIELD(u64 divide2 : 2,
+ __BITFIELD_FIELD(u64 hclk_rst : 1,
+ __BITFIELD_FIELD(u64 p_x_on : 1,
+ __BITFIELD_FIELD(u64 p_rtype : 2,
+ __BITFIELD_FIELD(u64 p_com_on : 1,
+ __BITFIELD_FIELD(u64 p_c_sel : 2,
+ __BITFIELD_FIELD(u64 cdiv_byp : 1,
+ __BITFIELD_FIELD(u64 sd_mode : 2,
+ __BITFIELD_FIELD(u64 s_bist : 1,
+ __BITFIELD_FIELD(u64 por : 1,
+ __BITFIELD_FIELD(u64 enable : 1,
+ __BITFIELD_FIELD(u64 prst : 1,
+ __BITFIELD_FIELD(u64 hrst : 1,
+ __BITFIELD_FIELD(u64 divide : 3,
;)))))))))))))))
} s;
};
@@ -1688,7 +1688,7 @@
* Contains general control and status information for the USBN block.
*/
union cvmx_usbnx_usbp_ctl_status {
- uint64_t u64;
+ u64 u64;
/**
* struct cvmx_usbnx_usbp_ctl_status_s
* @txrisetune: HS Transmitter Rise/Fall Time Adjustment
@@ -1804,41 +1804,41 @@
* de-assertion.
*/
struct cvmx_usbnx_usbp_ctl_status_s {
- __BITFIELD_FIELD(uint64_t txrisetune : 1,
- __BITFIELD_FIELD(uint64_t txvreftune : 4,
- __BITFIELD_FIELD(uint64_t txfslstune : 4,
- __BITFIELD_FIELD(uint64_t txhsxvtune : 2,
- __BITFIELD_FIELD(uint64_t sqrxtune : 3,
- __BITFIELD_FIELD(uint64_t compdistune : 3,
- __BITFIELD_FIELD(uint64_t otgtune : 3,
- __BITFIELD_FIELD(uint64_t otgdisable : 1,
- __BITFIELD_FIELD(uint64_t portreset : 1,
- __BITFIELD_FIELD(uint64_t drvvbus : 1,
- __BITFIELD_FIELD(uint64_t lsbist : 1,
- __BITFIELD_FIELD(uint64_t fsbist : 1,
- __BITFIELD_FIELD(uint64_t hsbist : 1,
- __BITFIELD_FIELD(uint64_t bist_done : 1,
- __BITFIELD_FIELD(uint64_t bist_err : 1,
- __BITFIELD_FIELD(uint64_t tdata_out : 4,
- __BITFIELD_FIELD(uint64_t siddq : 1,
- __BITFIELD_FIELD(uint64_t txpreemphasistune : 1,
- __BITFIELD_FIELD(uint64_t dma_bmode : 1,
- __BITFIELD_FIELD(uint64_t usbc_end : 1,
- __BITFIELD_FIELD(uint64_t usbp_bist : 1,
- __BITFIELD_FIELD(uint64_t tclk : 1,
- __BITFIELD_FIELD(uint64_t dp_pulld : 1,
- __BITFIELD_FIELD(uint64_t dm_pulld : 1,
- __BITFIELD_FIELD(uint64_t hst_mode : 1,
- __BITFIELD_FIELD(uint64_t tuning : 4,
- __BITFIELD_FIELD(uint64_t tx_bs_enh : 1,
- __BITFIELD_FIELD(uint64_t tx_bs_en : 1,
- __BITFIELD_FIELD(uint64_t loop_enb : 1,
- __BITFIELD_FIELD(uint64_t vtest_enb : 1,
- __BITFIELD_FIELD(uint64_t bist_enb : 1,
- __BITFIELD_FIELD(uint64_t tdata_sel : 1,
- __BITFIELD_FIELD(uint64_t taddr_in : 4,
- __BITFIELD_FIELD(uint64_t tdata_in : 8,
- __BITFIELD_FIELD(uint64_t ate_reset : 1,
+ __BITFIELD_FIELD(u64 txrisetune : 1,
+ __BITFIELD_FIELD(u64 txvreftune : 4,
+ __BITFIELD_FIELD(u64 txfslstune : 4,
+ __BITFIELD_FIELD(u64 txhsxvtune : 2,
+ __BITFIELD_FIELD(u64 sqrxtune : 3,
+ __BITFIELD_FIELD(u64 compdistune : 3,
+ __BITFIELD_FIELD(u64 otgtune : 3,
+ __BITFIELD_FIELD(u64 otgdisable : 1,
+ __BITFIELD_FIELD(u64 portreset : 1,
+ __BITFIELD_FIELD(u64 drvvbus : 1,
+ __BITFIELD_FIELD(u64 lsbist : 1,
+ __BITFIELD_FIELD(u64 fsbist : 1,
+ __BITFIELD_FIELD(u64 hsbist : 1,
+ __BITFIELD_FIELD(u64 bist_done : 1,
+ __BITFIELD_FIELD(u64 bist_err : 1,
+ __BITFIELD_FIELD(u64 tdata_out : 4,
+ __BITFIELD_FIELD(u64 siddq : 1,
+ __BITFIELD_FIELD(u64 txpreemphasistune : 1,
+ __BITFIELD_FIELD(u64 dma_bmode : 1,
+ __BITFIELD_FIELD(u64 usbc_end : 1,
+ __BITFIELD_FIELD(u64 usbp_bist : 1,
+ __BITFIELD_FIELD(u64 tclk : 1,
+ __BITFIELD_FIELD(u64 dp_pulld : 1,
+ __BITFIELD_FIELD(u64 dm_pulld : 1,
+ __BITFIELD_FIELD(u64 hst_mode : 1,
+ __BITFIELD_FIELD(u64 tuning : 4,
+ __BITFIELD_FIELD(u64 tx_bs_enh : 1,
+ __BITFIELD_FIELD(u64 tx_bs_en : 1,
+ __BITFIELD_FIELD(u64 loop_enb : 1,
+ __BITFIELD_FIELD(u64 vtest_enb : 1,
+ __BITFIELD_FIELD(u64 bist_enb : 1,
+ __BITFIELD_FIELD(u64 tdata_sel : 1,
+ __BITFIELD_FIELD(u64 taddr_in : 4,
+ __BITFIELD_FIELD(u64 tdata_in : 8,
+ __BITFIELD_FIELD(u64 ate_reset : 1,
;)))))))))))))))))))))))))))))))))))
} s;
};
diff --git a/drivers/staging/octeon/ethernet-rx.c b/drivers/staging/octeon/ethernet-rx.c
index 6aed3cf..ed55304 100644
--- a/drivers/staging/octeon/ethernet-rx.c
+++ b/drivers/staging/octeon/ethernet-rx.c
@@ -26,8 +26,6 @@
#include <net/xfrm.h>
#endif /* CONFIG_XFRM */
-#include <linux/atomic.h>
-
#include <asm/octeon/octeon.h>
#include "ethernet-defines.h"
@@ -364,17 +362,8 @@
/* Increment RX stats for virtual ports */
if (port >= CVMX_PIP_NUM_INPUT_PORTS) {
-#ifdef CONFIG_64BIT
- atomic64_add(1,
- (atomic64_t *)&priv->stats.rx_packets);
- atomic64_add(skb->len,
- (atomic64_t *)&priv->stats.rx_bytes);
-#else
- atomic_add(1,
- (atomic_t *)&priv->stats.rx_packets);
- atomic_add(skb->len,
- (atomic_t *)&priv->stats.rx_bytes);
-#endif
+ priv->stats.rx_packets++;
+ priv->stats.rx_bytes += skb->len;
}
netif_receive_skb(skb);
} else {
@@ -383,13 +372,7 @@
printk_ratelimited("%s: Device not up, packet dropped\n",
dev->name);
*/
-#ifdef CONFIG_64BIT
- atomic64_add(1,
- (atomic64_t *)&priv->stats.rx_dropped);
-#else
- atomic_add(1,
- (atomic_t *)&priv->stats.rx_dropped);
-#endif
+ priv->stats.rx_dropped++;
dev_kfree_skb_irq(skb);
}
} else {
diff --git a/drivers/staging/octeon/ethernet.c b/drivers/staging/octeon/ethernet.c
index 8d239e2..00adc52 100644
--- a/drivers/staging/octeon/ethernet.c
+++ b/drivers/staging/octeon/ethernet.c
@@ -226,18 +226,7 @@
priv->stats.multicast += rx_status.multicast_packets;
priv->stats.rx_crc_errors += rx_status.inb_errors;
priv->stats.rx_frame_errors += rx_status.fcs_align_err_packets;
-
- /*
- * The drop counter must be incremented atomically
- * since the RX tasklet also increments it.
- */
-#ifdef CONFIG_64BIT
- atomic64_add(rx_status.dropped_packets,
- (atomic64_t *)&priv->stats.rx_dropped);
-#else
- atomic_add(rx_status.dropped_packets,
- (atomic_t *)&priv->stats.rx_dropped);
-#endif
+ priv->stats.rx_dropped += rx_status.dropped_packets;
}
return &priv->stats;
@@ -790,7 +779,6 @@
cvmx_fau_atomic_write32(priv->fau + qos * 4, 0);
switch (priv->imode) {
-
/* These types don't support ports to IPD/PKO */
case CVMX_HELPER_INTERFACE_MODE_DISABLED:
case CVMX_HELPER_INTERFACE_MODE_PCIE:
diff --git a/drivers/staging/rdma/hfi1/efivar.c b/drivers/staging/rdma/hfi1/efivar.c
index e569f9f..47dfe25 100644
--- a/drivers/staging/rdma/hfi1/efivar.c
+++ b/drivers/staging/rdma/hfi1/efivar.c
@@ -127,13 +127,12 @@
* temporary buffer. Now allocate a correctly sized
* buffer.
*/
- data = kmalloc(temp_size, GFP_KERNEL);
+ data = kmemdup(temp_buffer, temp_size, GFP_KERNEL);
if (!data) {
ret = -ENOMEM;
goto fail;
}
- memcpy(data, temp_buffer, temp_size);
*size = temp_size;
*return_data = data;
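
kmemdup() (declared in linux/string.h) folds the kmalloc()+memcpy() pair into a single call with the same failure semantics; a minimal sketch of the pattern, with copy_blob as a made-up name:

	#include <linux/err.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	static void *copy_blob(const void *src, size_t len)
	{
		void *dst = kmemdup(src, len, GFP_KERNEL);	/* alloc + copy in one step */

		if (!dst)
			return ERR_PTR(-ENOMEM);	/* allocation failed */
		return dst;
	}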
diff --git a/drivers/staging/rdma/hfi1/mr.c b/drivers/staging/rdma/hfi1/mr.c
index a3f8b88..3825321 100644
--- a/drivers/staging/rdma/hfi1/mr.c
+++ b/drivers/staging/rdma/hfi1/mr.c
@@ -70,7 +70,7 @@
int m, i = 0;
int rval = 0;
- m = (count + HFI1_SEGSZ - 1) / HFI1_SEGSZ;
+ m = DIV_ROUND_UP(count, HFI1_SEGSZ);
for (; i < m; i++) {
mr->map[i] = kzalloc(sizeof(*mr->map[0]), GFP_KERNEL);
if (!mr->map[i])
@@ -159,7 +159,7 @@
int m;
/* Allocate struct plus pointers to first level page tables. */
- m = (count + HFI1_SEGSZ - 1) / HFI1_SEGSZ;
+ m = DIV_ROUND_UP(count, HFI1_SEGSZ);
mr = kzalloc(sizeof(*mr) + m * sizeof(mr->mr.map[0]), GFP_KERNEL);
if (!mr)
goto bail;
@@ -333,7 +333,7 @@
int rval = -ENOMEM;
/* Allocate struct plus pointers to first level page tables. */
- m = (fmr_attr->max_pages + HFI1_SEGSZ - 1) / HFI1_SEGSZ;
+ m = DIV_ROUND_UP(fmr_attr->max_pages, HFI1_SEGSZ);
fmr = kzalloc(sizeof(*fmr) + m * sizeof(fmr->mr.map[0]), GFP_KERNEL);
if (!fmr)
goto bail;
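
DIV_ROUND_UP(n, d) from linux/kernel.h is the canonical spelling of the ceiling division the driver open-coded as (count + HFI1_SEGSZ - 1) / HFI1_SEGSZ; a tiny illustration with made-up names:

	#include <linux/kernel.h>

	#define SEGSZ	8

	static int segs_needed(int count)
	{
		/* e.g. segs_needed(17) == 3, where plain 17 / 8 == 2 */
		return DIV_ROUND_UP(count, SEGSZ);
	}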
diff --git a/drivers/staging/rdma/hfi1/user_sdma.c b/drivers/staging/rdma/hfi1/user_sdma.c
index 4f55950..2f48419 100644
--- a/drivers/staging/rdma/hfi1/user_sdma.c
+++ b/drivers/staging/rdma/hfi1/user_sdma.c
@@ -67,7 +67,6 @@
#include "hfi.h"
#include "sdma.h"
#include "user_sdma.h"
-#include "sdma.h"
#include "verbs.h" /* for the headers */
#include "common.h" /* for struct hfi1_tid_info */
#include "trace.h"
@@ -925,8 +924,8 @@
unsigned pageidx, len;
base = (unsigned long)iovec->iov.iov_base;
- offset = ((base + iovec->offset + iov_offset) &
- ~PAGE_MASK);
+ offset = offset_in_page(base + iovec->offset +
+ iov_offset);
pageidx = (((iovec->offset + iov_offset +
base) - (base & PAGE_MASK)) >> PAGE_SHIFT);
len = offset + req->info.fragsize > PAGE_SIZE ?
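
offset_in_page() from linux/mm.h computes exactly the (addr & ~PAGE_MASK) expression it replaces, i.e. the low PAGE_SHIFT bits of an address; a sketch (page_offset_of is a made-up name):

	#include <linux/mm.h>

	static unsigned long page_offset_of(unsigned long addr)
	{
		/* equivalent to addr & ~PAGE_MASK, just self-describing */
		return offset_in_page(addr);
	}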
diff --git a/drivers/staging/rtl8188eu/core/rtw_iol.c b/drivers/staging/rtl8188eu/core/rtw_iol.c
index cdcf0ea..2e2145ca 100644
--- a/drivers/staging/rtl8188eu/core/rtw_iol.c
+++ b/drivers/staging/rtl8188eu/core/rtw_iol.c
@@ -11,21 +11,18 @@
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin Street, Fifth Floor, Boston, MA 02110, USA
- *
*
******************************************************************************/
-#include<rtw_iol.h>
+#include <rtw_iol.h>
-bool rtw_IOL_applied(struct adapter *adapter)
+bool rtw_IOL_applied(struct adapter *adapter)
{
- if (1 == adapter->registrypriv.fw_iol)
+ if (adapter->registrypriv.fw_iol == 1)
return true;
- if ((2 == adapter->registrypriv.fw_iol) && (!adapter_to_dvobj(adapter)->ishighspeed))
+ if ((adapter->registrypriv.fw_iol == 2) &&
+ (!adapter_to_dvobj(adapter)->ishighspeed))
return true;
return false;
}
diff --git a/drivers/staging/rtl8188eu/core/rtw_mlme.c b/drivers/staging/rtl8188eu/core/rtw_mlme.c
index 9c2e659..a645a62 100644
--- a/drivers/staging/rtl8188eu/core/rtw_mlme.c
+++ b/drivers/staging/rtl8188eu/core/rtw_mlme.c
@@ -122,10 +122,8 @@
{
rtw_free_mlme_priv_ie_data(pmlmepriv);
- if (pmlmepriv) {
- if (pmlmepriv->free_bss_buf)
- vfree(pmlmepriv->free_bss_buf);
- }
+ if (pmlmepriv)
+ vfree(pmlmepriv->free_bss_buf);
}
struct wlan_network *_rtw_alloc_network(struct mlme_priv *pmlmepriv)
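
The dropped inner test works because vfree(), like kfree(), is defined to do nothing when handed a NULL pointer; a minimal sketch (free_buf is a made-up name):

	#include <linux/vmalloc.h>

	static void free_buf(void *buf)
	{
		vfree(buf);	/* vfree(NULL) is a no-op, so no guard needed */
	}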
diff --git a/drivers/staging/rtl8188eu/core/rtw_mlme_ext.c b/drivers/staging/rtl8188eu/core/rtw_mlme_ext.c
index 96e5c6d..e3add48 100644
--- a/drivers/staging/rtl8188eu/core/rtw_mlme_ext.c
+++ b/drivers/staging/rtl8188eu/core/rtw_mlme_ext.c
@@ -20,6 +20,7 @@
#define _RTW_MLME_EXT_C_
#include <linux/ieee80211.h>
+#include <asm/unaligned.h>
#include <osdep_service.h>
#include <drv_types.h>
@@ -1027,7 +1028,6 @@
unsigned char *pframe, *p;
struct rtw_ieee80211_hdr *pwlanhdr;
__le16 *fctrl;
- __le16 le_tmp;
unsigned int i, j, ie_len, index = 0;
unsigned char rf_type, bssrate[NumRates], sta_bssrate[NumRates];
struct ndis_802_11_var_ie *pIE;
@@ -1073,8 +1073,7 @@
/* listen interval */
/* todo: listen interval for power saving */
- le_tmp = cpu_to_le16(3);
- memcpy(pframe , (unsigned char *)&le_tmp, 2);
+ put_unaligned_le16(3, pframe);
pframe += 2;
pattrib->pktlen += 2;
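
put_unaligned_le16() (from asm/unaligned.h, hence the new include above) stores a host-order 16-bit value at an arbitrarily aligned address in little-endian byte order, which is what the cpu_to_le16()+memcpy() pair did by hand; a sketch with a made-up helper name:

	#include <linux/types.h>
	#include <asm/unaligned.h>

	static void store_listen_interval(u8 *frame)
	{
		/* writes bytes 0x03, 0x00 at frame, whatever its alignment */
		put_unaligned_le16(3, frame);
	}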
diff --git a/drivers/staging/rtl8188eu/core/rtw_recv.c b/drivers/staging/rtl8188eu/core/rtw_recv.c
index c81639c..40b7a30 100644
--- a/drivers/staging/rtl8188eu/core/rtw_recv.c
+++ b/drivers/staging/rtl8188eu/core/rtw_recv.c
@@ -116,9 +116,8 @@
rtw_free_uc_swdec_pending_queue(padapter);
- if (precvpriv->pallocated_frame_buf) {
+ if (precvpriv->pallocated_frame_buf)
vfree(precvpriv->pallocated_frame_buf);
- }
rtw_hal_free_recv_priv(padapter);
@@ -910,9 +909,8 @@
process_pwrbit_data(adapter, precv_frame);
- if ((GetFrameSubType(ptr) & WIFI_QOS_DATA_TYPE) == WIFI_QOS_DATA_TYPE) {
+ if ((GetFrameSubType(ptr) & WIFI_QOS_DATA_TYPE) == WIFI_QOS_DATA_TYPE)
process_wmmps_data(adapter, precv_frame);
- }
if (GetFrameSubType(ptr) & BIT(6)) {
		/* No data, will not indicate to upper layer, temporarily count it here */
@@ -1527,10 +1525,9 @@
if (pdefrag_q != NULL) {
if (fragnum == 0) {
/* the first fragment */
- if (!list_empty(&pdefrag_q->queue)) {
+ if (!list_empty(&pdefrag_q->queue))
/* free current defrag_q */
rtw_free_recvframe_queue(pdefrag_q, pfree_recv_queue);
- }
}
/* Then enqueue the 0~(n-1) fragment into the defrag_q */
@@ -1646,9 +1643,8 @@
a_len -= nSubframe_Length;
if (a_len != 0) {
padding_len = 4 - ((nSubframe_Length + ETH_HLEN) & (4-1));
- if (padding_len == 4) {
+ if (padding_len == 4)
padding_len = 0;
- }
if (a_len < padding_len) {
goto exit;
diff --git a/drivers/staging/rtl8188eu/core/rtw_xmit.c b/drivers/staging/rtl8188eu/core/rtw_xmit.c
index d5ce1e2..f2dd7a6 100644
--- a/drivers/staging/rtl8188eu/core/rtw_xmit.c
+++ b/drivers/staging/rtl8188eu/core/rtw_xmit.c
@@ -247,11 +247,8 @@
pxmitbuf++;
}
- if (pxmitpriv->pallocated_frame_buf)
- vfree(pxmitpriv->pallocated_frame_buf);
-
- if (pxmitpriv->pallocated_xmitbuf)
- vfree(pxmitpriv->pallocated_xmitbuf);
+ vfree(pxmitpriv->pallocated_frame_buf);
+ vfree(pxmitpriv->pallocated_xmitbuf);
/* free xmit extension buff */
pxmitbuf = (struct xmit_buf *)pxmitpriv->pxmit_extbuf;
diff --git a/drivers/staging/rtl8188eu/hal/bb_cfg.c b/drivers/staging/rtl8188eu/hal/bb_cfg.c
index f58a822..c2ad6a3 100644
--- a/drivers/staging/rtl8188eu/hal/bb_cfg.c
+++ b/drivers/staging/rtl8188eu/hal/bb_cfg.c
@@ -598,18 +598,12 @@
reg[RF_PATH_A] = &hal_data->PHYRegDef[RF_PATH_A];
reg[RF_PATH_B] = &hal_data->PHYRegDef[RF_PATH_B];
- reg[RF_PATH_C] = &hal_data->PHYRegDef[RF_PATH_C];
- reg[RF_PATH_D] = &hal_data->PHYRegDef[RF_PATH_D];
reg[RF_PATH_A]->rfintfs = rFPGA0_XAB_RFInterfaceSW;
reg[RF_PATH_B]->rfintfs = rFPGA0_XAB_RFInterfaceSW;
- reg[RF_PATH_C]->rfintfs = rFPGA0_XCD_RFInterfaceSW;
- reg[RF_PATH_D]->rfintfs = rFPGA0_XCD_RFInterfaceSW;
reg[RF_PATH_A]->rfintfi = rFPGA0_XAB_RFInterfaceRB;
reg[RF_PATH_B]->rfintfi = rFPGA0_XAB_RFInterfaceRB;
- reg[RF_PATH_C]->rfintfi = rFPGA0_XCD_RFInterfaceRB;
- reg[RF_PATH_D]->rfintfi = rFPGA0_XCD_RFInterfaceRB;
reg[RF_PATH_A]->rfintfo = rFPGA0_XA_RFInterfaceOE;
reg[RF_PATH_B]->rfintfo = rFPGA0_XB_RFInterfaceOE;
@@ -622,13 +616,9 @@
reg[RF_PATH_A]->rfLSSI_Select = rFPGA0_XAB_RFParameter;
reg[RF_PATH_B]->rfLSSI_Select = rFPGA0_XAB_RFParameter;
- reg[RF_PATH_C]->rfLSSI_Select = rFPGA0_XCD_RFParameter;
- reg[RF_PATH_D]->rfLSSI_Select = rFPGA0_XCD_RFParameter;
reg[RF_PATH_A]->rfTxGainStage = rFPGA0_TxGainStage;
reg[RF_PATH_B]->rfTxGainStage = rFPGA0_TxGainStage;
- reg[RF_PATH_C]->rfTxGainStage = rFPGA0_TxGainStage;
- reg[RF_PATH_D]->rfTxGainStage = rFPGA0_TxGainStage;
reg[RF_PATH_A]->rfHSSIPara1 = rFPGA0_XA_HSSIParameter1;
reg[RF_PATH_B]->rfHSSIPara1 = rFPGA0_XB_HSSIParameter1;
@@ -638,43 +628,27 @@
reg[RF_PATH_A]->rfSwitchControl = rFPGA0_XAB_SwitchControl;
reg[RF_PATH_B]->rfSwitchControl = rFPGA0_XAB_SwitchControl;
- reg[RF_PATH_C]->rfSwitchControl = rFPGA0_XCD_SwitchControl;
- reg[RF_PATH_D]->rfSwitchControl = rFPGA0_XCD_SwitchControl;
reg[RF_PATH_A]->rfAGCControl1 = rOFDM0_XAAGCCore1;
reg[RF_PATH_B]->rfAGCControl1 = rOFDM0_XBAGCCore1;
- reg[RF_PATH_C]->rfAGCControl1 = rOFDM0_XCAGCCore1;
- reg[RF_PATH_D]->rfAGCControl1 = rOFDM0_XDAGCCore1;
reg[RF_PATH_A]->rfAGCControl2 = rOFDM0_XAAGCCore2;
reg[RF_PATH_B]->rfAGCControl2 = rOFDM0_XBAGCCore2;
- reg[RF_PATH_C]->rfAGCControl2 = rOFDM0_XCAGCCore2;
- reg[RF_PATH_D]->rfAGCControl2 = rOFDM0_XDAGCCore2;
reg[RF_PATH_A]->rfRxIQImbalance = rOFDM0_XARxIQImbalance;
reg[RF_PATH_B]->rfRxIQImbalance = rOFDM0_XBRxIQImbalance;
- reg[RF_PATH_C]->rfRxIQImbalance = rOFDM0_XCRxIQImbalance;
- reg[RF_PATH_D]->rfRxIQImbalance = rOFDM0_XDRxIQImbalance;
reg[RF_PATH_A]->rfRxAFE = rOFDM0_XARxAFE;
reg[RF_PATH_B]->rfRxAFE = rOFDM0_XBRxAFE;
- reg[RF_PATH_C]->rfRxAFE = rOFDM0_XCRxAFE;
- reg[RF_PATH_D]->rfRxAFE = rOFDM0_XDRxAFE;
reg[RF_PATH_A]->rfTxIQImbalance = rOFDM0_XATxIQImbalance;
reg[RF_PATH_B]->rfTxIQImbalance = rOFDM0_XBTxIQImbalance;
- reg[RF_PATH_C]->rfTxIQImbalance = rOFDM0_XCTxIQImbalance;
- reg[RF_PATH_D]->rfTxIQImbalance = rOFDM0_XDTxIQImbalance;
reg[RF_PATH_A]->rfTxAFE = rOFDM0_XATxAFE;
reg[RF_PATH_B]->rfTxAFE = rOFDM0_XBTxAFE;
- reg[RF_PATH_C]->rfTxAFE = rOFDM0_XCTxAFE;
- reg[RF_PATH_D]->rfTxAFE = rOFDM0_XDTxAFE;
reg[RF_PATH_A]->rfLSSIReadBack = rFPGA0_XA_LSSIReadBack;
reg[RF_PATH_B]->rfLSSIReadBack = rFPGA0_XB_LSSIReadBack;
- reg[RF_PATH_C]->rfLSSIReadBack = rFPGA0_XC_LSSIReadBack;
- reg[RF_PATH_D]->rfLSSIReadBack = rFPGA0_XD_LSSIReadBack;
reg[RF_PATH_A]->rfLSSIReadBackPi = TransceiverA_HSPI_Readback;
reg[RF_PATH_B]->rfLSSIReadBackPi = TransceiverB_HSPI_Readback;
diff --git a/drivers/staging/rtl8188eu/hal/phy.c b/drivers/staging/rtl8188eu/hal/phy.c
index d3e8a8e..ae42b44 100644
--- a/drivers/staging/rtl8188eu/hal/phy.c
+++ b/drivers/staging/rtl8188eu/hal/phy.c
@@ -180,32 +180,6 @@
hal_data->BW20_24G_Diff[TxCount][RF_PATH_A]+
hal_data->BW20_24G_Diff[TxCount][index];
bw40_pwr[TxCount] = hal_data->Index24G_BW40_Base[TxCount][index];
- } else if (TxCount == RF_PATH_C) {
- cck_pwr[TxCount] = hal_data->Index24G_CCK_Base[TxCount][index];
- ofdm_pwr[TxCount] = hal_data->Index24G_BW40_Base[RF_PATH_A][index]+
- hal_data->BW20_24G_Diff[RF_PATH_A][index]+
- hal_data->BW20_24G_Diff[RF_PATH_B][index]+
- hal_data->BW20_24G_Diff[TxCount][index];
-
- bw20_pwr[TxCount] = hal_data->Index24G_BW40_Base[RF_PATH_A][index]+
- hal_data->BW20_24G_Diff[RF_PATH_A][index]+
- hal_data->BW20_24G_Diff[RF_PATH_B][index]+
- hal_data->BW20_24G_Diff[TxCount][index];
- bw40_pwr[TxCount] = hal_data->Index24G_BW40_Base[TxCount][index];
- } else if (TxCount == RF_PATH_D) {
- cck_pwr[TxCount] = hal_data->Index24G_CCK_Base[TxCount][index];
- ofdm_pwr[TxCount] = hal_data->Index24G_BW40_Base[RF_PATH_A][index]+
- hal_data->BW20_24G_Diff[RF_PATH_A][index]+
- hal_data->BW20_24G_Diff[RF_PATH_B][index]+
- hal_data->BW20_24G_Diff[RF_PATH_C][index]+
- hal_data->BW20_24G_Diff[TxCount][index];
-
- bw20_pwr[TxCount] = hal_data->Index24G_BW40_Base[RF_PATH_A][index]+
- hal_data->BW20_24G_Diff[RF_PATH_A][index]+
- hal_data->BW20_24G_Diff[RF_PATH_B][index]+
- hal_data->BW20_24G_Diff[RF_PATH_C][index]+
- hal_data->BW20_24G_Diff[TxCount][index];
- bw40_pwr[TxCount] = hal_data->Index24G_BW40_Base[TxCount][index];
}
}
}
diff --git a/drivers/staging/rtl8188eu/include/Hal8188EPhyCfg.h b/drivers/staging/rtl8188eu/include/Hal8188EPhyCfg.h
index b8833fa..2670d6b 100644
--- a/drivers/staging/rtl8188eu/include/Hal8188EPhyCfg.h
+++ b/drivers/staging/rtl8188eu/include/Hal8188EPhyCfg.h
@@ -69,8 +69,6 @@
enum rf_radio_path {
RF_PATH_A = 0, /* Radio Path A */
RF_PATH_B = 1, /* Radio Path B */
- RF_PATH_C = 2, /* Radio Path C */
- RF_PATH_D = 3, /* Radio Path D */
};
#define MAX_PG_GROUP 13
diff --git a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
index aec20bb..5d6a1c9 100644
--- a/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
+++ b/drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
@@ -3095,7 +3095,6 @@
.get_wireless_stats = rtw_get_wireless_stats,
};
-#include <rtw_android.h>
int rtw_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct iwreq *wrq = (struct iwreq *)rq;
diff --git a/drivers/staging/rtl8192e/rtl8192e/r8192E_dev.c b/drivers/staging/rtl8192e/rtl8192e/r8192E_dev.c
index e9c4f97..ba64a4f 100644
--- a/drivers/staging/rtl8192e/rtl8192e/r8192E_dev.c
+++ b/drivers/staging/rtl8192e/rtl8192e/r8192E_dev.c
@@ -680,7 +680,7 @@
rtl92e_writeb(dev, BW_OPMODE, regBwOpMode);
{
- u32 ratr_value = 0;
+ u32 ratr_value;
ratr_value = regRATR;
if (priv->rf_type == RF_1T2R)
@@ -1000,7 +1000,7 @@
_rtl92e_update_msr(dev);
if (ieee->iw_mode == IW_MODE_INFRA || ieee->iw_mode == IW_MODE_ADHOC) {
- u32 reg = 0;
+ u32 reg;
reg = rtl92e_readl(dev, RCR);
if (priv->rtllib->state == RTLLIB_LINKED) {
@@ -1186,7 +1186,7 @@
struct r8192_priv *priv = rtllib_priv(dev);
dma_addr_t mapping = pci_map_single(priv->pdev, skb->data, skb->len,
PCI_DMA_TODEVICE);
- struct tx_fwinfo_8190pci *pTxFwInfo = NULL;
+ struct tx_fwinfo_8190pci *pTxFwInfo;
pTxFwInfo = (struct tx_fwinfo_8190pci *)skb->data;
memset(pTxFwInfo, 0, sizeof(struct tx_fwinfo_8190pci));
@@ -2235,7 +2235,7 @@
void rtl92e_clear_irq(struct net_device *dev)
{
- u32 tmp = 0;
+ u32 tmp;
tmp = rtl92e_readl(dev, ISR);
rtl92e_writel(dev, ISR, tmp);
diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
index 8f989a9..0b06482 100644
--- a/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
+++ b/drivers/staging/rtl8192e/rtl8192e/rtl_core.c
@@ -288,7 +288,7 @@
void rtl92e_irq_enable(struct net_device *dev)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
priv->irq_enabled = 1;
@@ -297,7 +297,7 @@
void rtl92e_irq_disable(struct net_device *dev)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
priv->ops->irq_disable(dev);
@@ -306,7 +306,7 @@
static void _rtl92e_set_chan(struct net_device *dev, short ch)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
RT_TRACE(COMP_CH, "=====>%s()====ch:%d\n", __func__, ch);
if (priv->chan_forced)
@@ -1546,14 +1546,14 @@
*****************************************************************************/
void rtl92e_rx_enable(struct net_device *dev)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
priv->ops->rx_enable(dev);
}
void rtl92e_tx_enable(struct net_device *dev)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
priv->ops->tx_enable(dev);
@@ -1612,7 +1612,7 @@
static void _rtl92e_hard_data_xmit(struct sk_buff *skb, struct net_device *dev,
int rate)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
int ret;
struct cb_desc *tcb_desc = (struct cb_desc *)(skb->cb +
MAX_DEV_ADDR_SIZE);
@@ -1643,7 +1643,7 @@
static int _rtl92e_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
int ret;
struct cb_desc *tcb_desc = (struct cb_desc *)(skb->cb +
MAX_DEV_ADDR_SIZE);
@@ -1676,7 +1676,7 @@
static void _rtl92e_tx_isr(struct net_device *dev, int prio)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
struct rtl8192_tx_ring *ring = &priv->tx_ring[prio];
@@ -1850,7 +1850,7 @@
static int _rtl92e_alloc_tx_ring(struct net_device *dev, unsigned int prio,
unsigned int entries)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
struct tx_desc *ring;
dma_addr_t dma;
int i;
@@ -1944,7 +1944,7 @@
void rtl92e_update_rx_pkt_timestamp(struct net_device *dev,
struct rtllib_rx_stats *stats)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
if (stats->bIsAMPDU && !stats->bFirstMPDU)
stats->mac_time = priv->LastRxDescTSF;
@@ -2022,7 +2022,7 @@
static void _rtl92e_rx_normal(struct net_device *dev)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
struct rtllib_hdr_1addr *rtllib_hdr = NULL;
bool unicast_packet = false;
bool bLedBlinking = true;
@@ -2128,7 +2128,7 @@
static void _rtl92e_tx_resume(struct net_device *dev)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
struct rtllib_device *ieee = priv->rtllib;
struct sk_buff *skb;
int queue_index;
@@ -2279,7 +2279,7 @@
/* based on ipw2200 driver */
static int _rtl92e_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
struct iwreq *wrq = (struct iwreq *)rq;
int ret = -1;
struct rtllib_device *ieee = priv->rtllib;
@@ -2403,7 +2403,7 @@
static irqreturn_t _rtl92e_irq(int irq, void *netdev)
{
struct net_device *dev = (struct net_device *) netdev;
- struct r8192_priv *priv = (struct r8192_priv *)rtllib_priv(dev);
+ struct r8192_priv *priv = rtllib_priv(dev);
unsigned long flags;
u32 inta;
u32 intb;
diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c b/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
index ef03242..b6b714d 100644
--- a/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
+++ b/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
@@ -1875,7 +1875,7 @@
struct r8192_priv,
rfpath_check_wq);
struct net_device *dev = priv->rtllib->dev;
- u8 rfpath = 0, i;
+ u8 rfpath, i;
rfpath = rtl92e_readb(dev, 0xc04);
diff --git a/drivers/staging/rtl8192e/rtl819x_BAProc.c b/drivers/staging/rtl8192e/rtl819x_BAProc.c
index c04a020..c7fd1b1 100644
--- a/drivers/staging/rtl8192e/rtl819x_BAProc.c
+++ b/drivers/staging/rtl8192e/rtl819x_BAProc.c
@@ -189,7 +189,7 @@
static void rtllib_send_ADDBAReq(struct rtllib_device *ieee, u8 *dst,
struct ba_record *pBA)
{
- struct sk_buff *skb = NULL;
+ struct sk_buff *skb;
skb = rtllib_ADDBA(ieee, dst, pBA, 0, ACT_ADDBAREQ);
@@ -204,7 +204,7 @@
static void rtllib_send_ADDBARsp(struct rtllib_device *ieee, u8 *dst,
struct ba_record *pBA, u16 StatusCode)
{
- struct sk_buff *skb = NULL;
+ struct sk_buff *skb;
skb = rtllib_ADDBA(ieee, dst, pBA, StatusCode, ACT_ADDBARSP);
if (skb)
@@ -217,7 +217,7 @@
struct ba_record *pBA, enum tr_select TxRxSelect,
u16 ReasonCode)
{
- struct sk_buff *skb = NULL;
+ struct sk_buff *skb;
skb = rtllib_DELBA(ieee, dst, pBA, TxRxSelect, ReasonCode);
if (skb)
diff --git a/drivers/staging/rtl8192e/rtllib_rx.c b/drivers/staging/rtl8192e/rtllib_rx.c
index 37343ec..af64bd3 100644
--- a/drivers/staging/rtl8192e/rtllib_rx.c
+++ b/drivers/staging/rtl8192e/rtllib_rx.c
@@ -905,7 +905,7 @@
{
struct rtllib_hdr_4addr *hdr = (struct rtllib_hdr_4addr *)skb->data;
u16 fc = le16_to_cpu(hdr->frame_ctl);
- size_t hdrlen = 0;
+ size_t hdrlen;
hdrlen = rtllib_get_hdrlen(fc);
if (HTCCheck(ieee, skb->data)) {
diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c
index 19c3bff..25b5b5e 100644
--- a/drivers/staging/rtl8192e/rtllib_softmac.c
+++ b/drivers/staging/rtl8192e/rtllib_softmac.c
@@ -776,7 +776,7 @@
{
struct sk_buff *skb;
struct rtllib_authentication *auth;
- int len = 0;
+ int len;
len = sizeof(struct rtllib_authentication) + challengelen +
ieee->tx_headroom + 4;
diff --git a/drivers/staging/rtl8192e/rtllib_softmac_wx.c b/drivers/staging/rtl8192e/rtllib_softmac_wx.c
index 86f52ac..01a75bd 100644
--- a/drivers/staging/rtl8192e/rtllib_softmac_wx.c
+++ b/drivers/staging/rtl8192e/rtllib_softmac_wx.c
@@ -243,7 +243,7 @@
struct iw_request_info *info,
union iwreq_data *wrqu, char *extra)
{
- u32 tmp_rate = 0;
+ u32 tmp_rate;
tmp_rate = TxCountToDataRate(ieee,
ieee->softmac_stats.CurrentShowTxate);
diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
index 9bc5aac..f8041f9d6 100644
--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
+++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c
@@ -515,7 +515,7 @@
ieee80211_send_probe_requests(ieee);
- queue_delayed_work(ieee->wq, &ieee->softmac_scan_wq, IEEE80211_SOFTMAC_SCAN_TIME);
+ schedule_delayed_work(&ieee->softmac_scan_wq, IEEE80211_SOFTMAC_SCAN_TIME);
up(&ieee->scan_sem);
return;
@@ -614,7 +614,7 @@
if (ieee->softmac_features & IEEE_SOFTMAC_SCAN){
if (ieee->scanning == 0) {
ieee->scanning = 1;
- queue_delayed_work(ieee->wq, &ieee->softmac_scan_wq, 0);
+ schedule_delayed_work(&ieee->softmac_scan_wq, 0);
}
}else
ieee->start_scan(ieee->dev);
@@ -1241,7 +1241,7 @@
ieee->state = IEEE80211_ASSOCIATING_RETRY;
- queue_delayed_work(ieee->wq, &ieee->associate_retry_wq, \
+ schedule_delayed_work(&ieee->associate_retry_wq, \
IEEE80211_SOFTMAC_ASSOC_RETRY_TIME);
spin_unlock_irqrestore(&ieee->lock, flags);
@@ -1382,7 +1382,7 @@
ieee->state = IEEE80211_LINKED;
//ieee->UpdateHalRATRTableHandler(dev, ieee->dot11HTOperationalRateSet);
- queue_work(ieee->wq, &ieee->associate_complete_wq);
+ schedule_work(&ieee->associate_complete_wq);
}
static void ieee80211_associate_procedure_wq(struct work_struct *work)
@@ -1483,7 +1483,7 @@
}
ieee->state = IEEE80211_ASSOCIATING;
- queue_work(ieee->wq, &ieee->associate_procedure_wq);
+ schedule_work(&ieee->associate_procedure_wq);
}else{
if(ieee80211_is_54g(&ieee->current_network) &&
(ieee->modulation & IEEE80211_OFDM_MODULATION)){
@@ -2044,7 +2044,7 @@
"Association response status code 0x%x\n",
errcode);
if(ieee->AsocRetryCount < RT_ASOC_RETRY_LIMIT) {
- queue_work(ieee->wq, &ieee->associate_procedure_wq);
+ schedule_work(&ieee->associate_procedure_wq);
} else {
ieee80211_associate_abort(ieee);
}
@@ -2100,7 +2100,7 @@
notify_wx_assoc_event(ieee);
//HTSetConnectBwMode(ieee, HT_CHANNEL_WIDTH_20, HT_EXTCHNL_OFFSET_NO_EXT);
RemovePeerTS(ieee, header->addr2);
- queue_work(ieee->wq, &ieee->associate_procedure_wq);
+ schedule_work(&ieee->associate_procedure_wq);
}
break;
case IEEE80211_STYPE_MANAGE_ACT:
@@ -2442,7 +2442,7 @@
inline void ieee80211_start_ibss(struct ieee80211_device *ieee)
{
- queue_delayed_work(ieee->wq, &ieee->start_ibss_wq, 150);
+ schedule_delayed_work(&ieee->start_ibss_wq, 150);
}
/* this is called only in user context, with wx_sem held */
@@ -2725,7 +2725,6 @@
setup_timer(&ieee->beacon_timer, ieee80211_send_beacon_cb,
(unsigned long)ieee);
- ieee->wq = create_workqueue(DRV_NAME);
INIT_DELAYED_WORK(&ieee->start_ibss_wq, ieee80211_start_ibss_wq);
INIT_WORK(&ieee->associate_complete_wq, ieee80211_associate_complete_wq);
@@ -2755,7 +2754,6 @@
del_timer_sync(&ieee->associate_timer);
cancel_delayed_work(&ieee->associate_retry_wq);
- destroy_workqueue(ieee->wq);
up(&ieee->wx_sem);
}
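
Dropping the private ieee->wq works because schedule_work()/schedule_delayed_work() are simply the queue_* calls directed at the shared system workqueue, so the create_workqueue()/destroy_workqueue() pair becomes unnecessary; the helper is essentially:

	#include <linux/workqueue.h>

	/* approximate body of schedule_delayed_work() in linux/workqueue.h */
	static inline bool sched_dwork_sketch(struct delayed_work *dwork,
					      unsigned long delay)
	{
		return queue_delayed_work(system_wq, dwork, delay);
	}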
diff --git a/drivers/staging/rtl8192u/r8192U_core.c b/drivers/staging/rtl8192u/r8192U_core.c
index 3c58963..3a93218 100644
--- a/drivers/staging/rtl8192u/r8192U_core.c
+++ b/drivers/staging/rtl8192u/r8192U_core.c
@@ -1962,7 +1962,7 @@
network->qos_data.param_count)) {
network->qos_data.old_param_count =
network->qos_data.param_count;
- queue_work(priv->priv_wq, &priv->qos_activate);
+ schedule_work(&priv->qos_activate);
RT_TRACE(COMP_QOS,
"QoS parameters change call qos_activate\n");
}
@@ -1971,7 +1971,7 @@
&def_qos_parameters, size);
if ((network->qos_data.active == 1) && (active_network == 1)) {
- queue_work(priv->priv_wq, &priv->qos_activate);
+ schedule_work(&priv->qos_activate);
RT_TRACE(COMP_QOS,
"QoS was disabled call qos_activate\n");
}
@@ -1990,7 +1990,7 @@
struct r8192_priv *priv = ieee80211_priv(dev);
rtl8192_qos_handle_probe_response(priv, 1, network);
- queue_delayed_work(priv->priv_wq, &priv->update_beacon_wq, 0);
+ schedule_delayed_work(&priv->update_beacon_wq, 0);
return 0;
}
@@ -2042,7 +2042,7 @@
network->flags,
priv->ieee80211->current_network.qos_data.active);
if (set_qos_param == 1)
- queue_work(priv->priv_wq, &priv->qos_activate);
+ schedule_work(&priv->qos_activate);
return 0;
@@ -2387,7 +2387,6 @@
{
struct r8192_priv *priv = ieee80211_priv(dev);
- priv->priv_wq = create_workqueue(DRV_NAME);
INIT_WORK(&priv->reset_wq, rtl8192_restart);
@@ -3518,7 +3517,7 @@
{
struct r8192_priv *priv = ieee80211_priv((struct net_device *)data);
- queue_delayed_work(priv->priv_wq, &priv->watch_dog_wq, 0);
+ schedule_delayed_work(&priv->watch_dog_wq, 0);
mod_timer(&priv->watch_dog_timer,
jiffies + msecs_to_jiffies(IEEE80211_WATCH_DOG_TIME));
}
@@ -5022,7 +5021,6 @@
kfree(priv->pFirmware);
priv->pFirmware = NULL;
rtl8192_usb_deleteendpoints(dev);
- destroy_workqueue(priv->priv_wq);
mdelay(10);
fail:
free_ieee80211(dev);
@@ -5060,7 +5058,6 @@
kfree(priv->pFirmware);
priv->pFirmware = NULL;
rtl8192_usb_deleteendpoints(dev);
- destroy_workqueue(priv->priv_wq);
mdelay(10);
}
free_ieee80211(dev);
diff --git a/drivers/staging/rtl8192u/r8192U_dm.c b/drivers/staging/rtl8192u/r8192U_dm.c
index d2e86b9..1e0e53c 100644
--- a/drivers/staging/rtl8192u/r8192U_dm.c
+++ b/drivers/staging/rtl8192u/r8192U_dm.c
@@ -1628,47 +1628,75 @@
void dm_change_dynamic_initgain_thresh(struct net_device *dev, u32 dm_type,
u32 dm_value)
{
- if (dm_type == DIG_TYPE_THRESH_HIGH) {
+ switch (dm_type) {
+ case DIG_TYPE_THRESH_HIGH:
dm_digtable.rssi_high_thresh = dm_value;
- } else if (dm_type == DIG_TYPE_THRESH_LOW) {
+ break;
+
+ case DIG_TYPE_THRESH_LOW:
dm_digtable.rssi_low_thresh = dm_value;
- } else if (dm_type == DIG_TYPE_THRESH_HIGHPWR_HIGH) {
+ break;
+
+ case DIG_TYPE_THRESH_HIGHPWR_HIGH:
dm_digtable.rssi_high_power_highthresh = dm_value;
- } else if (dm_type == DIG_TYPE_THRESH_HIGHPWR_LOW) {
+ break;
+
+ case DIG_TYPE_THRESH_HIGHPWR_LOW:
dm_digtable.rssi_high_power_lowthresh = dm_value;
- } else if (dm_type == DIG_TYPE_ENABLE) {
+ break;
+
+ case DIG_TYPE_ENABLE:
dm_digtable.dig_state = DM_STA_DIG_MAX;
dm_digtable.dig_enable_flag = true;
- } else if (dm_type == DIG_TYPE_DISABLE) {
+ break;
+
+ case DIG_TYPE_DISABLE:
dm_digtable.dig_state = DM_STA_DIG_MAX;
dm_digtable.dig_enable_flag = false;
- } else if (dm_type == DIG_TYPE_DBG_MODE) {
+ break;
+
+ case DIG_TYPE_DBG_MODE:
if (dm_value >= DM_DBG_MAX)
dm_value = DM_DBG_OFF;
dm_digtable.dbg_mode = (u8)dm_value;
- } else if (dm_type == DIG_TYPE_RSSI) {
+ break;
+
+ case DIG_TYPE_RSSI:
if (dm_value > 100)
dm_value = 30;
dm_digtable.rssi_val = (long)dm_value;
- } else if (dm_type == DIG_TYPE_ALGORITHM) {
+ break;
+
+ case DIG_TYPE_ALGORITHM:
if (dm_value >= DIG_ALGO_MAX)
dm_value = DIG_ALGO_BY_FALSE_ALARM;
if (dm_digtable.dig_algorithm != (u8)dm_value)
dm_digtable.dig_algorithm_switch = 1;
dm_digtable.dig_algorithm = (u8)dm_value;
- } else if (dm_type == DIG_TYPE_BACKOFF) {
+ break;
+
+ case DIG_TYPE_BACKOFF:
if (dm_value > 30)
dm_value = 30;
dm_digtable.backoff_val = (u8)dm_value;
- } else if (dm_type == DIG_TYPE_RX_GAIN_MIN) {
+ break;
+
+ case DIG_TYPE_RX_GAIN_MIN:
if (dm_value == 0)
dm_value = 0x1;
dm_digtable.rx_gain_range_min = (u8)dm_value;
- } else if (dm_type == DIG_TYPE_RX_GAIN_MAX) {
+ break;
+
+ case DIG_TYPE_RX_GAIN_MAX:
if (dm_value > 0x50)
dm_value = 0x50;
dm_digtable.rx_gain_range_max = (u8)dm_value;
+ break;
+
+ default:
+ break;
}
+
} /* DM_ChangeDynamicInitGainThresh */
/*-----------------------------------------------------------------------------
diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
index db2e31bc..a15f3ce 100644
--- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
+++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
@@ -137,7 +137,7 @@
}
}
-static inline char *translate_scan(struct _adapter *padapter,
+static noinline_for_stack char *translate_scan(struct _adapter *padapter,
struct iw_request_info *info,
struct wlan_network *pnetwork,
char *start, char *stop)
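
noinline_for_stack (linux/compiler.h) is plain noinline under a name that records the intent: keeping translate_scan() out of line presumably keeps its sizeable locals off the ioctl path's stack frame, the usual fix for -Wframe-larger-than warnings; a sketch with a made-up function:

	#include <linux/compiler.h>
	#include <linux/string.h>

	static noinline_for_stack size_t big_frame_helper(void)
	{
		char scratch[512];	/* confined to this frame, not the caller's */

		memset(scratch, 0, sizeof(scratch));
		return sizeof(scratch);
	}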
diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
index b64f10b..f1d3d70 100644
--- a/drivers/staging/rtl8712/usb_intf.c
+++ b/drivers/staging/rtl8712/usb_intf.c
@@ -221,7 +221,7 @@
return 0;
}
-void rtl871x_intf_resume(struct _adapter *padapter)
+static void rtl871x_intf_resume(struct _adapter *padapter)
{
if (padapter->dvobjpriv.inirp_init)
padapter->dvobjpriv.inirp_init(padapter);
diff --git a/drivers/staging/rtl8712/usb_ops_linux.c b/drivers/staging/rtl8712/usb_ops_linux.c
index e77be2a..c2ac581 100644
--- a/drivers/staging/rtl8712/usb_ops_linux.c
+++ b/drivers/staging/rtl8712/usb_ops_linux.c
@@ -228,16 +228,18 @@
}
} else {
switch (purb->status) {
- case -ENOENT:
- if (padapter->bSuspended)
- break;
- /* Fall through. */
case -EINVAL:
case -EPIPE:
case -ENODEV:
case -ESHUTDOWN:
padapter->bDriverStopped = true;
break;
+ case -ENOENT:
+ if (!padapter->bSuspended) {
+ padapter->bDriverStopped = true;
+ break;
+ }
+ /* Fall through. */
case -EPROTO:
precvbuf->reuse = true;
r8712_read_port(padapter, precvpriv->ff_hwaddr, 0,
diff --git a/drivers/staging/rtl8712/xmit_linux.c b/drivers/staging/rtl8712/xmit_linux.c
index d398183..f8866d7 100644
--- a/drivers/staging/rtl8712/xmit_linux.c
+++ b/drivers/staging/rtl8712/xmit_linux.c
@@ -131,7 +131,7 @@
for (i = 0; i < 8; i++) {
pxmitbuf->pxmit_urb[i] = usb_alloc_urb(0, GFP_KERNEL);
- if (pxmitbuf->pxmit_urb[i] == NULL) {
+ if (!pxmitbuf->pxmit_urb[i]) {
netdev_err(padapter->pnetdev, "pxmitbuf->pxmit_urb[i] == NULL\n");
return _FAIL;
}
@@ -171,7 +171,7 @@
goto _xmit_entry_drop;
}
pxmitframe = r8712_alloc_xmitframe(pxmitpriv);
- if (pxmitframe == NULL) {
+ if (!pxmitframe) {
ret = 0;
goto _xmit_entry_drop;
}
diff --git a/drivers/staging/rtl8723au/core/rtw_ap.c b/drivers/staging/rtl8723au/core/rtw_ap.c
index 1aa9b26..ce4b589 100644
--- a/drivers/staging/rtl8723au/core/rtw_ap.c
+++ b/drivers/staging/rtl8723au/core/rtw_ap.c
@@ -171,24 +171,20 @@
return ret;
}
-void expire_timeout_chk23a(struct rtw_adapter *padapter)
+void expire_timeout_chk23a(struct rtw_adapter *padapter)
{
- struct list_head *phead, *plist, *ptmp;
+ struct list_head *phead;
u8 updated = 0;
- struct sta_info *psta;
+ struct sta_info *psta, *ptmp;
struct sta_priv *pstapriv = &padapter->stapriv;
u8 chk_alive_num = 0;
struct sta_info *chk_alive_list[NUM_STA];
int i;
spin_lock_bh(&pstapriv->auth_list_lock);
-
phead = &pstapriv->auth_list;
-
/* check auth_queue */
- list_for_each_safe(plist, ptmp, phead) {
- psta = container_of(plist, struct sta_info, auth_list);
-
+ list_for_each_entry_safe(psta, ptmp, phead, auth_list) {
if (psta->expire_to > 0) {
psta->expire_to--;
if (psta->expire_to == 0) {
@@ -206,19 +202,13 @@
spin_lock_bh(&pstapriv->auth_list_lock);
}
}
-
}
-
spin_unlock_bh(&pstapriv->auth_list_lock);
spin_lock_bh(&pstapriv->asoc_list_lock);
-
phead = &pstapriv->asoc_list;
-
/* check asoc_queue */
- list_for_each_safe(plist, ptmp, phead) {
- psta = container_of(plist, struct sta_info, asoc_list);
-
+ list_for_each_entry_safe(psta, ptmp, phead, asoc_list) {
if (chk_sta_is_alive(psta) || !psta->expire_to) {
psta->expire_to = pstapriv->expire_to;
psta->keep_alive_trycnt = 0;
@@ -283,7 +273,6 @@
}
}
}
-
spin_unlock_bh(&pstapriv->asoc_list_lock);
if (chk_alive_num) {
@@ -1059,7 +1048,7 @@
int rtw_acl_add_sta23a(struct rtw_adapter *padapter, u8 *addr)
{
- struct list_head *plist, *phead;
+ struct list_head *phead;
u8 added = false;
int i, ret = 0;
struct rtw_wlan_acl_node *paclnode;
@@ -1073,12 +1062,8 @@
return -1;
spin_lock_bh(&pacl_node_q->lock);
-
phead = get_list_head(pacl_node_q);
-
- list_for_each(plist, phead) {
- paclnode = container_of(plist, struct rtw_wlan_acl_node, list);
-
+ list_for_each_entry(paclnode, phead, list) {
if (!memcmp(paclnode->addr, addr, ETH_ALEN)) {
if (paclnode->valid == true) {
added = true;
@@ -1087,7 +1072,6 @@
}
}
}
-
spin_unlock_bh(&pacl_node_q->lock);
if (added)
@@ -1121,8 +1105,8 @@
int rtw_acl_remove_sta23a(struct rtw_adapter *padapter, u8 *addr)
{
- struct list_head *plist, *phead, *ptmp;
- struct rtw_wlan_acl_node *paclnode;
+ struct list_head *phead;
+ struct rtw_wlan_acl_node *paclnode, *ptmp;
struct sta_priv *pstapriv = &padapter->stapriv;
struct wlan_acl_pool *pacl_list = &pstapriv->acl_list;
struct rtw_queue *pacl_node_q = &pacl_list->acl_node_q;
@@ -1130,12 +1114,8 @@
DBG_8723A("%s(acl_num =%d) = %pM\n", __func__, pacl_list->num, addr);
spin_lock_bh(&pacl_node_q->lock);
-
phead = get_list_head(pacl_node_q);
-
- list_for_each_safe(plist, ptmp, phead) {
- paclnode = container_of(plist, struct rtw_wlan_acl_node, list);
-
+ list_for_each_entry_safe(paclnode, ptmp, phead, list) {
if (!memcmp(paclnode->addr, addr, ETH_ALEN)) {
if (paclnode->valid) {
paclnode->valid = false;
@@ -1146,7 +1126,6 @@
}
}
}
-
spin_unlock_bh(&pacl_node_q->lock);
DBG_8723A("%s, acl_num =%d\n", __func__, pacl_list->num);
@@ -1354,20 +1333,14 @@
{
/* update associated stations cap. */
if (updated == true) {
- struct list_head *phead, *plist, *ptmp;
- struct sta_info *psta;
+ struct list_head *phead;
+ struct sta_info *psta, *ptmp;
struct sta_priv *pstapriv = &padapter->stapriv;
spin_lock_bh(&pstapriv->asoc_list_lock);
-
phead = &pstapriv->asoc_list;
-
- list_for_each_safe(plist, ptmp, phead) {
- psta = container_of(plist, struct sta_info, asoc_list);
-
+ list_for_each_entry_safe(psta, ptmp, phead, asoc_list)
VCS_update23a(padapter, psta);
- }
-
spin_unlock_bh(&pstapriv->asoc_list_lock);
}
}
@@ -1627,7 +1600,7 @@
int rtw_ap_inform_ch_switch23a(struct rtw_adapter *padapter, u8 new_ch, u8 ch_offset)
{
- struct list_head *phead, *plist;
+ struct list_head *phead;
struct sta_info *psta = NULL;
struct sta_priv *pstapriv = &padapter->stapriv;
struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv;
@@ -1642,10 +1615,7 @@
spin_lock_bh(&pstapriv->asoc_list_lock);
phead = &pstapriv->asoc_list;
-
- list_for_each(plist, phead) {
- psta = container_of(plist, struct sta_info, asoc_list);
-
+ list_for_each_entry(psta, phead, asoc_list) {
issue_action_spct_ch_switch23a(padapter, psta->hwaddr, new_ch, ch_offset);
psta->expire_to = ((pstapriv->expire_to * 2) > 5) ? 5 : (pstapriv->expire_to * 2);
}
@@ -1658,8 +1628,8 @@
int rtw_sta_flush23a(struct rtw_adapter *padapter)
{
- struct list_head *phead, *plist, *ptmp;
- struct sta_info *psta;
+ struct list_head *phead;
+ struct sta_info *psta, *ptmp;
struct sta_priv *pstapriv = &padapter->stapriv;
struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv;
struct mlme_ext_info *pmlmeinfo = &pmlmeext->mlmext_info;
@@ -1675,10 +1645,7 @@
spin_lock_bh(&pstapriv->asoc_list_lock);
phead = &pstapriv->asoc_list;
-
- list_for_each_safe(plist, ptmp, phead) {
- psta = container_of(plist, struct sta_info, asoc_list);
-
+ list_for_each_entry_safe(psta, ptmp, phead, asoc_list) {
/* Remove sta from asoc_list */
list_del_init(&psta->asoc_list);
pstapriv->asoc_list_cnt--;
@@ -1744,9 +1711,9 @@
struct mlme_priv *mlmepriv = &padapter->mlmepriv;
struct mlme_ext_priv *pmlmeext = &padapter->mlmeextpriv;
struct sta_priv *pstapriv = &padapter->stapriv;
- struct sta_info *psta;
+ struct sta_info *psta, *ptmp;
struct security_priv *psecuritypriv = &padapter->securitypriv;
- struct list_head *phead, *plist, *ptmp;
+ struct list_head *phead;
u8 chk_alive_num = 0;
struct sta_info *chk_alive_list[NUM_STA];
int i;
@@ -1775,15 +1742,9 @@
}
spin_lock_bh(&pstapriv->asoc_list_lock);
-
phead = &pstapriv->asoc_list;
-
- list_for_each_safe(plist, ptmp, phead) {
- psta = container_of(plist, struct sta_info, asoc_list);
-
+ list_for_each_entry_safe(psta, ptmp, phead, asoc_list)
chk_alive_list[chk_alive_num++] = psta;
- }
-
spin_unlock_bh(&pstapriv->asoc_list_lock);
for (i = 0; i < chk_alive_num; i++) {
@@ -1841,8 +1802,8 @@
void stop_ap_mode23a(struct rtw_adapter *padapter)
{
- struct list_head *phead, *plist, *ptmp;
- struct rtw_wlan_acl_node *paclnode;
+ struct list_head *phead;
+ struct rtw_wlan_acl_node *paclnode, *ptmp;
struct sta_info *psta = NULL;
struct sta_priv *pstapriv = &padapter->stapriv;
struct mlme_priv *pmlmepriv = &padapter->mlmepriv;
@@ -1864,15 +1825,10 @@
/* for ACL */
spin_lock_bh(&pacl_node_q->lock);
phead = get_list_head(pacl_node_q);
-
- list_for_each_safe(plist, ptmp, phead) {
- paclnode = container_of(plist, struct rtw_wlan_acl_node, list);
-
+ list_for_each_entry_safe(paclnode, ptmp, phead, list) {
if (paclnode->valid == true) {
paclnode->valid = false;
-
list_del_init(&paclnode->list);
-
pacl_list->num--;
}
}
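The pattern that dominates the rtl8723au changes here replaces an open-coded list_for_each_safe() plus container_of() with list_for_each_entry_safe(), which iterates over the containing objects directly and drops the struct list_head cursor. A self-contained sketch with a hypothetical element type:

        #include <linux/list.h>
        #include <linux/slab.h>

        struct item {
                int id;
                struct list_head node;
        };

        static void purge(struct list_head *head)
        {
                struct item *it, *tmp;

                /* the _safe variant caches the next entry in 'tmp', so the
                 * current entry may be unlinked and freed inside the loop
                 */
                list_for_each_entry_safe(it, tmp, head, node) {
                        list_del_init(&it->node);
                        kfree(it);
                }
        }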
diff --git a/drivers/staging/rtl8723au/core/rtw_mlme.c b/drivers/staging/rtl8723au/core/rtw_mlme.c
index 3c09ea9..3adda55 100644
--- a/drivers/staging/rtl8723au/core/rtw_mlme.c
+++ b/drivers/staging/rtl8723au/core/rtw_mlme.c
@@ -171,21 +171,15 @@
void rtw_free_network_queue23a(struct rtw_adapter *padapter)
{
- struct list_head *phead, *plist, *ptmp;
- struct wlan_network *pnetwork;
+ struct list_head *phead;
+ struct wlan_network *pnetwork, *ptmp;
struct mlme_priv *pmlmepriv = &padapter->mlmepriv;
struct rtw_queue *scanned_queue = &pmlmepriv->scanned_queue;
spin_lock_bh(&scanned_queue->lock);
-
phead = get_list_head(scanned_queue);
-
- list_for_each_safe(plist, ptmp, phead) {
- pnetwork = container_of(plist, struct wlan_network, list);
-
+ list_for_each_entry_safe(pnetwork, ptmp, phead, list)
_rtw_free_network23a(pmlmepriv, pnetwork);
- }
-
spin_unlock_bh(&scanned_queue->lock);
}
@@ -329,15 +323,12 @@
struct wlan_network *
rtw_get_oldest_wlan_network23a(struct rtw_queue *scanned_queue)
{
- struct list_head *plist, *phead;
+ struct list_head *phead;
struct wlan_network *pwlan;
struct wlan_network *oldest = NULL;
phead = get_list_head(scanned_queue);
-
- list_for_each(plist, phead) {
- pwlan = container_of(plist, struct wlan_network, list);
-
+ list_for_each_entry(pwlan, phead, list) {
if (pwlan->fixed != true) {
if (!oldest || time_after(oldest->last_scanned,
pwlan->last_scanned))
@@ -445,7 +436,6 @@
spin_lock_bh(&queue->lock);
phead = get_list_head(queue);
-
list_for_each(plist, phead) {
pnetwork = container_of(plist, struct wlan_network, list);
@@ -710,21 +700,17 @@
static void free_scanqueue(struct mlme_priv *pmlmepriv)
{
- struct wlan_network *pnetwork;
+ struct wlan_network *pnetwork, *ptemp;
struct rtw_queue *scan_queue = &pmlmepriv->scanned_queue;
- struct list_head *plist, *phead, *ptemp;
+ struct list_head *phead;
RT_TRACE(_module_rtl871x_mlme_c_, _drv_notice_, "+free_scanqueue\n");
spin_lock_bh(&scan_queue->lock);
-
phead = get_list_head(scan_queue);
-
- list_for_each_safe(plist, ptemp, phead) {
- pnetwork = container_of(plist, struct wlan_network, list);
+ list_for_each_entry_safe(pnetwork, ptemp, phead, list) {
pnetwork->fixed = false;
_rtw_free_network23a(pmlmepriv, pnetwork);
}
-
spin_unlock_bh(&scan_queue->lock);
}
@@ -1625,15 +1611,13 @@
static struct wlan_network *
rtw_select_candidate_from_queue(struct mlme_priv *pmlmepriv)
{
- struct wlan_network *pnetwork, *candidate = NULL;
+ struct wlan_network *pnetwork, *ptmp, *candidate = NULL;
struct rtw_queue *queue = &pmlmepriv->scanned_queue;
- struct list_head *phead, *plist, *ptmp;
+ struct list_head *phead;
spin_lock_bh(&pmlmepriv->scanned_queue.lock);
phead = get_list_head(queue);
-
- list_for_each_safe(plist, ptmp, phead) {
- pnetwork = container_of(plist, struct wlan_network, list);
+ list_for_each_entry_safe(pnetwork, ptmp, phead, list) {
if (!pnetwork) {
RT_TRACE(_module_rtl871x_mlme_c_, _drv_err_,
"%s: return _FAIL:(pnetwork == NULL)\n",
diff --git a/drivers/staging/rtl8723au/core/rtw_mlme_ext.c b/drivers/staging/rtl8723au/core/rtw_mlme_ext.c
index a39e441..f4fff38 100644
--- a/drivers/staging/rtl8723au/core/rtw_mlme_ext.c
+++ b/drivers/staging/rtl8723au/core/rtw_mlme_ext.c
@@ -6061,10 +6061,10 @@
#ifdef CONFIG_8723AU_AP_MODE
else { /* tx bc/mc frames after update TIM */
struct sta_info *psta_bmc;
- struct list_head *plist, *phead, *ptmp;
- struct xmit_frame *pxmitframe;
+ struct list_head *phead;
+ struct xmit_frame *pxmitframe, *ptmp;
struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
- struct sta_priv *pstapriv = &padapter->stapriv;
+ struct sta_priv *pstapriv = &padapter->stapriv;
/* for BC/MC Frames */
psta_bmc = rtw_get_bcmc_stainfo23a(padapter);
@@ -6078,10 +6078,8 @@
phead = get_list_head(&psta_bmc->sleep_q);
- list_for_each_safe(plist, ptmp, phead) {
- pxmitframe = container_of(plist,
- struct xmit_frame,
- list);
+ list_for_each_entry_safe(pxmitframe, ptmp,
+ phead, list) {
list_del_init(&pxmitframe->list);
@@ -6098,7 +6096,6 @@
rtl8723au_hal_xmitframe_enqueue(padapter,
pxmitframe);
}
-
/* spin_unlock_bh(&psta_bmc->sleep_q.lock); */
spin_unlock_bh(&pxmitpriv->lock);
}
diff --git a/drivers/staging/rtl8723au/core/rtw_recv.c b/drivers/staging/rtl8723au/core/rtw_recv.c
index 81abe50..0a7741c 100644
--- a/drivers/staging/rtl8723au/core/rtw_recv.c
+++ b/drivers/staging/rtl8723au/core/rtw_recv.c
@@ -85,16 +85,15 @@
return res;
}
-void _rtw_free_recv_priv23a (struct recv_priv *precvpriv)
+void _rtw_free_recv_priv23a(struct recv_priv *precvpriv)
{
struct rtw_adapter *padapter = precvpriv->adapter;
- struct recv_frame *precvframe;
- struct list_head *plist, *ptmp;
+ struct recv_frame *precvframe, *ptmp;
rtw_free_uc_swdec_pending_queue23a(padapter);
- list_for_each_safe(plist, ptmp, &precvpriv->free_recv_queue.queue) {
- precvframe = container_of(plist, struct recv_frame, list);
+ list_for_each_entry_safe(precvframe, ptmp,
+ &precvpriv->free_recv_queue.queue, list) {
list_del_init(&precvframe->list);
kfree(precvframe);
}
@@ -195,19 +194,13 @@
static void rtw_free_recvframe23a_queue(struct rtw_queue *pframequeue)
{
- struct recv_frame *hdr;
- struct list_head *plist, *phead, *ptmp;
+ struct recv_frame *hdr, *ptmp;
+ struct list_head *phead;
spin_lock(&pframequeue->lock);
-
phead = get_list_head(pframequeue);
- plist = phead->next;
-
- list_for_each_safe(plist, ptmp, phead) {
- hdr = container_of(plist, struct recv_frame, list);
+ list_for_each_entry_safe(hdr, ptmp, phead, list)
rtw_free_recvframe23a(hdr);
- }
-
spin_unlock(&pframequeue->lock);
}
@@ -1549,16 +1542,14 @@
struct recv_frame *recvframe_defrag(struct rtw_adapter *adapter,
struct rtw_queue *defrag_q)
{
- struct list_head *plist, *phead, *ptmp;
- u8 *data, wlanhdr_offset;
- u8 curfragnum;
- struct recv_frame *pnfhdr;
+ struct list_head *plist, *phead;
+ u8 wlanhdr_offset;
+ u8 curfragnum;
+ struct recv_frame *pnfhdr, *ptmp;
struct recv_frame *prframe, *pnextrframe;
- struct rtw_queue *pfree_recv_queue;
+ struct rtw_queue *pfree_recv_queue;
struct sk_buff *skb;
-
-
curfragnum = 0;
pfree_recv_queue = &adapter->recvpriv.free_recv_queue;
@@ -1579,12 +1570,7 @@
curfragnum++;
- phead = get_list_head(defrag_q);
-
- data = prframe->pkt->data;
-
- list_for_each_safe(plist, ptmp, phead) {
- pnfhdr = container_of(plist, struct recv_frame, list);
+ list_for_each_entry_safe(pnfhdr, ptmp, phead, list) {
pnextrframe = (struct recv_frame *)pnfhdr;
/* check the fragment sequence (2nd ~n fragment frame) */
@@ -1626,8 +1612,6 @@
RT_TRACE(_module_rtl871x_recv_c_, _drv_info_,
"Performance defrag!!!!!\n");
-
-
return prframe;
}
diff --git a/drivers/staging/rtl8723au/core/rtw_sta_mgt.c b/drivers/staging/rtl8723au/core/rtw_sta_mgt.c
index b06bff7..22d857b 100644
--- a/drivers/staging/rtl8723au/core/rtw_sta_mgt.c
+++ b/drivers/staging/rtl8723au/core/rtw_sta_mgt.c
@@ -83,8 +83,8 @@
int _rtw_free_sta_priv23a(struct sta_priv *pstapriv)
{
- struct list_head *phead, *plist, *ptmp;
- struct sta_info *psta;
+ struct list_head *phead;
+ struct sta_info *psta, *ptmp;
struct recv_reorder_ctrl *preorder_ctrl;
int index;
@@ -93,12 +93,9 @@
spin_lock_bh(&pstapriv->sta_hash_lock);
for (index = 0; index < NUM_STA; index++) {
phead = &pstapriv->sta_hash[index];
-
- list_for_each_safe(plist, ptmp, phead) {
+ list_for_each_entry_safe(psta, ptmp, phead, hash_list) {
int i;
- psta = container_of(plist, struct sta_info,
- hash_list);
for (i = 0; i < 16 ; i++) {
preorder_ctrl = &psta->recvreorder_ctrl[i];
del_timer_sync(&preorder_ctrl->reordering_ctrl_timer);
@@ -325,8 +322,8 @@
/* free all stainfo which in sta_hash[all] */
void rtw_free_all_stainfo23a(struct rtw_adapter *padapter)
{
- struct list_head *plist, *phead, *ptmp;
- struct sta_info *psta;
+ struct list_head *phead;
+ struct sta_info *psta, *ptmp;
struct sta_priv *pstapriv = &padapter->stapriv;
struct sta_info *pbcmc_stainfo = rtw_get_bcmc_stainfo23a(padapter);
s32 index;
@@ -335,13 +332,9 @@
return;
spin_lock_bh(&pstapriv->sta_hash_lock);
-
for (index = 0; index < NUM_STA; index++) {
phead = &pstapriv->sta_hash[index];
-
- list_for_each_safe(plist, ptmp, phead) {
- psta = container_of(plist, struct sta_info, hash_list);
-
+ list_for_each_entry_safe(psta, ptmp, phead, hash_list) {
if (pbcmc_stainfo != psta)
rtw_free_stainfo23a(padapter, psta);
}
@@ -352,9 +345,9 @@
/* any station allocated can be searched by hash list */
struct sta_info *rtw_get_stainfo23a(struct sta_priv *pstapriv, const u8 *hwaddr)
{
- struct list_head *plist, *phead;
+ struct list_head *phead;
struct sta_info *psta = NULL;
- u32 index;
+ u32 index;
const u8 *addr;
if (hwaddr == NULL)
@@ -368,12 +361,8 @@
index = wifi_mac_hash(addr);
spin_lock_bh(&pstapriv->sta_hash_lock);
-
phead = &pstapriv->sta_hash[index];
-
- list_for_each(plist, phead) {
- psta = container_of(plist, struct sta_info, hash_list);
-
+ list_for_each_entry(psta, phead, hash_list) {
/* if found the matched address */
if (ether_addr_equal(psta->hwaddr, addr))
break;
@@ -418,7 +407,7 @@
{
bool res = true;
#ifdef CONFIG_8723AU_AP_MODE
- struct list_head *plist, *phead;
+ struct list_head *phead;
struct rtw_wlan_acl_node *paclnode;
bool match = false;
struct sta_priv *pstapriv = &padapter->stapriv;
@@ -427,10 +416,7 @@
spin_lock_bh(&pacl_node_q->lock);
phead = get_list_head(pacl_node_q);
-
- list_for_each(plist, phead) {
- paclnode = container_of(plist, struct rtw_wlan_acl_node, list);
-
+ list_for_each_entry(paclnode, phead, list) {
if (ether_addr_equal(paclnode->addr, mac_addr)) {
if (paclnode->valid) {
match = true;
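Read-only walks such as the station and ACL lookups above keep the plain iterator; list_for_each_entry() is enough when nothing is unlinked during traversal. Reusing the hypothetical struct item from the earlier sketch:

        static struct item *lookup(struct list_head *bucket, int id)
        {
                struct item *it;

                list_for_each_entry(it, bucket, node)   /* no deletion here */
                        if (it->id == id)
                                return it;
                return NULL;
        }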
diff --git a/drivers/staging/rtl8723au/core/rtw_xmit.c b/drivers/staging/rtl8723au/core/rtw_xmit.c
index a4b6bb6..b82b182 100644
--- a/drivers/staging/rtl8723au/core/rtw_xmit.c
+++ b/drivers/staging/rtl8723au/core/rtw_xmit.c
@@ -193,39 +193,38 @@
goto exit;
}
-void _rtw_free_xmit_priv23a (struct xmit_priv *pxmitpriv)
+void _rtw_free_xmit_priv23a(struct xmit_priv *pxmitpriv)
{
struct rtw_adapter *padapter = pxmitpriv->adapter;
- struct xmit_frame *pxframe;
- struct xmit_buf *pxmitbuf;
- struct list_head *plist, *ptmp;
+ struct xmit_frame *pxframe, *ptmp;
+ struct xmit_buf *pxmitbuf, *ptmp2;
- list_for_each_safe(plist, ptmp, &pxmitpriv->free_xmit_queue.queue) {
- pxframe = container_of(plist, struct xmit_frame, list);
+ list_for_each_entry_safe(pxframe, ptmp,
+ &pxmitpriv->free_xmit_queue.queue, list) {
list_del_init(&pxframe->list);
rtw_os_xmit_complete23a(padapter, pxframe);
kfree(pxframe);
}
- list_for_each_safe(plist, ptmp, &pxmitpriv->xmitbuf_list) {
- pxmitbuf = container_of(plist, struct xmit_buf, list2);
+ list_for_each_entry_safe(pxmitbuf, ptmp2,
+ &pxmitpriv->xmitbuf_list, list2) {
list_del_init(&pxmitbuf->list2);
rtw_os_xmit_resource_free23a(padapter, pxmitbuf);
kfree(pxmitbuf);
}
/* free xframe_ext queue, the same count as extbuf */
- list_for_each_safe(plist, ptmp,
- &pxmitpriv->free_xframe_ext_queue.queue) {
- pxframe = container_of(plist, struct xmit_frame, list);
+ list_for_each_entry_safe(pxframe, ptmp,
+ &pxmitpriv->free_xframe_ext_queue.queue,
+ list) {
list_del_init(&pxframe->list);
rtw_os_xmit_complete23a(padapter, pxframe);
kfree(pxframe);
}
/* free xmit extension buff */
- list_for_each_safe(plist, ptmp, &pxmitpriv->xmitextbuf_list) {
- pxmitbuf = container_of(plist, struct xmit_buf, list2);
+ list_for_each_entry_safe(pxmitbuf, ptmp2,
+ &pxmitpriv->xmitextbuf_list, list2) {
list_del_init(&pxmitbuf->list2);
rtw_os_xmit_resource_free23a(padapter, pxmitbuf);
kfree(pxmitbuf);
@@ -1563,18 +1562,13 @@
void rtw_free_xmitframe_queue23a(struct xmit_priv *pxmitpriv,
struct rtw_queue *pframequeue)
{
- struct list_head *plist, *phead, *ptmp;
- struct xmit_frame *pxmitframe;
+ struct list_head *phead;
+ struct xmit_frame *pxmitframe, *ptmp;
spin_lock_bh(&pframequeue->lock);
-
phead = get_list_head(pframequeue);
-
- list_for_each_safe(plist, ptmp, phead) {
- pxmitframe = container_of(plist, struct xmit_frame, list);
-
+ list_for_each_entry_safe(pxmitframe, ptmp, phead, list)
rtw_free_xmitframe23a(pxmitpriv, pxmitframe);
- }
spin_unlock_bh(&pframequeue->lock);
}
@@ -1612,9 +1606,9 @@
rtw_dequeue_xframe23a(struct xmit_priv *pxmitpriv, struct hw_xmit *phwxmit_i,
int entry)
{
- struct list_head *sta_plist, *sta_phead, *ptmp;
+ struct list_head *sta_phead;
struct hw_xmit *phwxmit;
- struct tx_servq *ptxservq = NULL;
+ struct tx_servq *ptxservq = NULL, *ptmp;
struct rtw_queue *pframe_queue = NULL;
struct xmit_frame *pxmitframe = NULL;
struct rtw_adapter *padapter = pxmitpriv->adapter;
@@ -1638,11 +1632,8 @@
phwxmit = phwxmit_i + inx[i];
sta_phead = get_list_head(phwxmit->sta_queue);
-
- list_for_each_safe(sta_plist, ptmp, sta_phead) {
- ptxservq = container_of(sta_plist, struct tx_servq,
- tx_pending);
-
+ list_for_each_entry_safe(ptxservq, ptmp, sta_phead,
+ tx_pending) {
pframe_queue = &ptxservq->sta_pending;
pxmitframe = dequeue_one_xmitframe(pxmitpriv, phwxmit, ptxservq, pframe_queue);
@@ -2052,18 +2043,15 @@
struct rtw_queue *pframequeue)
{
int ret;
- struct list_head *plist, *phead, *ptmp;
- u8 ac_index;
+ struct list_head *phead;
+ u8 ac_index;
struct tx_servq *ptxservq;
- struct pkt_attrib *pattrib;
- struct xmit_frame *pxmitframe;
- struct hw_xmit *phwxmits = padapter->xmitpriv.hwxmits;
+ struct pkt_attrib *pattrib;
+ struct xmit_frame *pxmitframe, *ptmp;
+ struct hw_xmit *phwxmits = padapter->xmitpriv.hwxmits;
phead = get_list_head(pframequeue);
-
- list_for_each_safe(plist, ptmp, phead) {
- pxmitframe = container_of(plist, struct xmit_frame, list);
-
+ list_for_each_entry_safe(pxmitframe, ptmp, phead, list) {
ret = xmitframe_enqueue_for_sleeping_sta23a(padapter, pxmitframe);
if (ret == true) {
@@ -2124,17 +2112,14 @@
{
u8 update_mask = 0, wmmps_ac = 0;
struct sta_info *psta_bmc;
- struct list_head *plist, *phead, *ptmp;
- struct xmit_frame *pxmitframe = NULL;
+ struct list_head *phead;
+ struct xmit_frame *pxmitframe = NULL, *ptmp;
struct sta_priv *pstapriv = &padapter->stapriv;
struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
spin_lock_bh(&pxmitpriv->lock);
-
phead = get_list_head(&psta->sleep_q);
-
- list_for_each_safe(plist, ptmp, phead) {
- pxmitframe = container_of(plist, struct xmit_frame, list);
+ list_for_each_entry_safe(pxmitframe, ptmp, phead, list) {
list_del_init(&pxmitframe->list);
switch (pxmitframe->attrib.priority) {
@@ -2194,7 +2179,6 @@
pstapriv->sta_dz_bitmap &= ~CHKBIT(psta->aid);
}
-
/* spin_unlock_bh(&psta->sleep_q.lock); */
spin_unlock_bh(&pxmitpriv->lock);
@@ -2206,13 +2190,8 @@
if ((pstapriv->sta_dz_bitmap&0xfffe) == 0x0) {
/* no any sta in ps mode */
spin_lock_bh(&pxmitpriv->lock);
-
phead = get_list_head(&psta_bmc->sleep_q);
-
- list_for_each_safe(plist, ptmp, phead) {
- pxmitframe = container_of(plist, struct xmit_frame,
- list);
-
+ list_for_each_entry_safe(pxmitframe, ptmp, phead, list) {
list_del_init(&pxmitframe->list);
psta_bmc->sleepq_len--;
@@ -2232,7 +2211,6 @@
/* update_BCNTIM(padapter); */
update_mask |= BIT(1);
}
-
/* spin_unlock_bh(&psta_bmc->sleep_q.lock); */
spin_unlock_bh(&pxmitpriv->lock);
}
@@ -2245,19 +2223,15 @@
struct sta_info *psta)
{
u8 wmmps_ac = 0;
- struct list_head *plist, *phead, *ptmp;
- struct xmit_frame *pxmitframe;
+ struct list_head *phead;
+ struct xmit_frame *pxmitframe, *ptmp;
struct sta_priv *pstapriv = &padapter->stapriv;
struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
/* spin_lock_bh(&psta->sleep_q.lock); */
spin_lock_bh(&pxmitpriv->lock);
-
phead = get_list_head(&psta->sleep_q);
-
- list_for_each_safe(plist, ptmp, phead) {
- pxmitframe = container_of(plist, struct xmit_frame, list);
-
+ list_for_each_entry_safe(pxmitframe, ptmp, phead, list) {
switch (pxmitframe->attrib.priority) {
case 1:
case 2:
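In _rtw_free_xmit_priv23a() the old code could recycle one generic plist/ptmp pair for every loop; the typed iterator needs a scratch cursor of each element's own type, hence the second temporary (ptmp2) once both xmit_frame and xmit_buf lists are walked in the same function. Schematically, with hypothetical list heads and free helpers:

        struct xmit_frame *pxframe, *ftmp;
        struct xmit_buf *pxmitbuf, *btmp;

        list_for_each_entry_safe(pxframe, ftmp, &frame_list, list)
                free_frame(pxframe);
        list_for_each_entry_safe(pxmitbuf, btmp, &buf_list, list2)
                free_buf(pxmitbuf);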
diff --git a/drivers/staging/rtl8723au/hal/HalHWImg8723A_RF.c b/drivers/staging/rtl8723au/hal/HalHWImg8723A_RF.c
index dbf571e..286f3ea 100644
--- a/drivers/staging/rtl8723au/hal/HalHWImg8723A_RF.c
+++ b/drivers/staging/rtl8723au/hal/HalHWImg8723A_RF.c
@@ -215,7 +215,7 @@
u32 i = 0;
u8 platform = 0x04;
u8 board = pDM_Odm->BoardType;
- u32 ArrayLen = sizeof(Array_RadioA_1T_8723A)/sizeof(u32);
+ u32 ArrayLen = ARRAY_SIZE(Array_RadioA_1T_8723A);
u32 *Array = Array_RadioA_1T_8723A;
hex += board;
diff --git a/drivers/staging/rtl8723au/hal/hal_com.c b/drivers/staging/rtl8723au/hal/hal_com.c
index 530db57..9d7b11b 100644
--- a/drivers/staging/rtl8723au/hal/hal_com.c
+++ b/drivers/staging/rtl8723au/hal/hal_com.c
@@ -328,7 +328,7 @@
if (trigger == C2H_EVT_HOST_CLOSE)
goto exit; /* Not ready */
- else if (trigger != C2H_EVT_FW_CLOSE)
+ if (trigger != C2H_EVT_FW_CLOSE)
goto clear_evt; /* Not a valid value */
c2h_evt = (struct c2h_evt_hdr *)buf;
diff --git a/drivers/staging/rtl8723au/hal/rtl8723a_bt-coexist.c b/drivers/staging/rtl8723au/hal/rtl8723a_bt-coexist.c
index 33eabf4..1688f66 100644
--- a/drivers/staging/rtl8723au/hal/rtl8723a_bt-coexist.c
+++ b/drivers/staging/rtl8723au/hal/rtl8723a_bt-coexist.c
@@ -77,8 +77,6 @@
#define PlatformZeroMemory(ptr, sz) memset(ptr, 0, sz)
-#define PlatformProcessHCICommands(...)
-#define PlatformTxBTQueuedPackets(...)
#define PlatformIndicateBTACLData(...) (RT_STATUS_SUCCESS)
#define GET_UNDECORATED_AVERAGE_RSSI(padapter) \
diff --git a/drivers/staging/rtl8723au/os_dep/ioctl_cfg80211.c b/drivers/staging/rtl8723au/os_dep/ioctl_cfg80211.c
index 0ae2180..908b84c 100644
--- a/drivers/staging/rtl8723au/os_dep/ioctl_cfg80211.c
+++ b/drivers/staging/rtl8723au/os_dep/ioctl_cfg80211.c
@@ -1270,18 +1270,14 @@
void rtw_cfg80211_surveydone_event_callback(struct rtw_adapter *padapter)
{
- struct list_head *plist, *phead, *ptmp;
+ struct list_head *phead;
struct mlme_priv *pmlmepriv = &padapter->mlmepriv;
struct rtw_queue *queue = &pmlmepriv->scanned_queue;
- struct wlan_network *pnetwork;
+ struct wlan_network *pnetwork, *ptmp;
spin_lock_bh(&pmlmepriv->scanned_queue.lock);
-
phead = get_list_head(queue);
-
- list_for_each_safe(plist, ptmp, phead) {
- pnetwork = container_of(plist, struct wlan_network, list);
-
+ list_for_each_entry_safe(pnetwork, ptmp, phead, list) {
/* report network only if the current channel set
contains the channel to which this network belongs */
if (rtw_ch_set_search_ch23a
@@ -1289,7 +1285,6 @@
pnetwork->network.DSConfig) >= 0)
rtw_cfg80211_inform_bss(padapter, pnetwork);
}
-
spin_unlock_bh(&pmlmepriv->scanned_queue.lock);
/* call this after other things have been done */
@@ -2850,9 +2845,9 @@
{
const u8 *mac = params->mac;
int ret = 0;
- struct list_head *phead, *plist, *ptmp;
+ struct list_head *phead;
u8 updated = 0;
- struct sta_info *psta;
+ struct sta_info *psta, *ptmp;
struct rtw_adapter *padapter = netdev_priv(ndev);
struct mlme_priv *pmlmepriv = &padapter->mlmepriv;
struct sta_priv *pstapriv = &padapter->stapriv;
@@ -2881,13 +2876,9 @@
return -EINVAL;
spin_lock_bh(&pstapriv->asoc_list_lock);
-
phead = &pstapriv->asoc_list;
-
/* check asoc_queue */
- list_for_each_safe(plist, ptmp, phead) {
- psta = container_of(plist, struct sta_info, asoc_list);
-
+ list_for_each_entry_safe(psta, ptmp, phead, asoc_list) {
if (ether_addr_equal(mac, psta->hwaddr)) {
if (psta->dot8021xalg == 1 &&
psta->bpairwise_key_installed == false) {
@@ -2912,7 +2903,6 @@
}
}
}
-
spin_unlock_bh(&pstapriv->asoc_list_lock);
associated_clients_update23a(padapter, updated);
diff --git a/drivers/staging/rtl8723au/os_dep/usb_ops_linux.c b/drivers/staging/rtl8723au/os_dep/usb_ops_linux.c
index 0cdaef0..cf4a506 100644
--- a/drivers/staging/rtl8723au/os_dep/usb_ops_linux.c
+++ b/drivers/staging/rtl8723au/os_dep/usb_ops_linux.c
@@ -210,22 +210,21 @@
void rtl8723au_write_port_cancel(struct rtw_adapter *padapter)
{
struct xmit_buf *pxmitbuf;
- struct list_head *plist;
int j;
DBG_8723A("%s\n", __func__);
padapter->bWritePortCancel = true;
- list_for_each(plist, &padapter->xmitpriv.xmitbuf_list) {
- pxmitbuf = container_of(plist, struct xmit_buf, list2);
+ list_for_each_entry(pxmitbuf, &padapter->xmitpriv.xmitbuf_list,
+ list2) {
for (j = 0; j < 8; j++) {
if (pxmitbuf->pxmit_urb[j])
usb_kill_urb(pxmitbuf->pxmit_urb[j]);
}
}
- list_for_each(plist, &padapter->xmitpriv.xmitextbuf_list) {
- pxmitbuf = container_of(plist, struct xmit_buf, list2);
+ list_for_each_entry(pxmitbuf, &padapter->xmitpriv.xmitextbuf_list,
+ list2) {
for (j = 0; j < 8; j++) {
if (pxmitbuf->pxmit_urb[j])
usb_kill_urb(pxmitbuf->pxmit_urb[j]);
diff --git a/drivers/staging/rts5208/rtsx_transport.c b/drivers/staging/rts5208/rtsx_transport.c
index f2eb18e..f564a74 100644
--- a/drivers/staging/rts5208/rtsx_transport.c
+++ b/drivers/staging/rts5208/rtsx_transport.c
@@ -1,4 +1,5 @@
-/* Driver for Realtek PCI-Express card reader
+/*
+ * Driver for Realtek PCI-Express card reader
*
* Copyright(c) 2009-2013 Realtek Semiconductor Corp. All rights reserved.
*
@@ -30,74 +31,76 @@
* Scatter-gather transfer buffer access routines
***********************************************************************/
-/* Copy a buffer of length buflen to/from the srb's transfer buffer.
+/*
+ * Copy a buffer of length buflen to/from the srb's transfer buffer.
* (Note: for scatter-gather transfers (srb->use_sg > 0), srb->request_buffer
* points to a list of s-g entries and we ignore srb->request_bufflen.
* For non-scatter-gather transfers, srb->request_buffer points to the
* transfer buffer itself and srb->request_bufflen is the buffer's length.)
* Update the *index and *offset variables so that the next copy will
- * pick up from where this one left off. */
+ * pick up from where this one left off.
+ */
unsigned int rtsx_stor_access_xfer_buf(unsigned char *buffer,
- unsigned int buflen, struct scsi_cmnd *srb, unsigned int *index,
- unsigned int *offset, enum xfer_buf_dir dir)
+ unsigned int buflen,
+ struct scsi_cmnd *srb,
+ unsigned int *index,
+ unsigned int *offset,
+ enum xfer_buf_dir dir)
{
unsigned int cnt;
- /* If not using scatter-gather, just transfer the data directly.
- * Make certain it will fit in the available buffer space. */
+ /* If not using scatter-gather, just transfer the data directly. */
if (scsi_sg_count(srb) == 0) {
+ unsigned char *sgbuffer;
+
if (*offset >= scsi_bufflen(srb))
return 0;
cnt = min(buflen, scsi_bufflen(srb) - *offset);
+
+ sgbuffer = (unsigned char *)scsi_sglist(srb) + *offset;
+
if (dir == TO_XFER_BUF)
- memcpy((unsigned char *) scsi_sglist(srb) + *offset,
- buffer, cnt);
+ memcpy(sgbuffer, buffer, cnt);
else
- memcpy(buffer, (unsigned char *) scsi_sglist(srb) +
- *offset, cnt);
+ memcpy(buffer, sgbuffer, cnt);
*offset += cnt;
- /* Using scatter-gather. We have to go through the list one entry
+ /*
+ * Using scatter-gather. We have to go through the list one entry
* at a time. Each s-g entry contains some number of pages, and
- * each page has to be kmap()'ed separately. If the page is already
- * in kernel-addressable memory then kmap() will return its address.
- * If the page is not directly accessible -- such as a user buffer
- * located in high memory -- then kmap() will map it to a temporary
- * position in the kernel's virtual address space. */
+ * each page has to be kmap()'ed separately.
+ */
} else {
struct scatterlist *sg =
- (struct scatterlist *) scsi_sglist(srb)
+ (struct scatterlist *)scsi_sglist(srb)
+ *index;
- /* This loop handles a single s-g list entry, which may
+ /*
+ * This loop handles a single s-g list entry, which may
* include multiple pages. Find the initial page structure
* and the starting offset within the page, and update
- * the *offset and *index values for the next loop. */
+ * the *offset and *index values for the next loop.
+ */
cnt = 0;
while (cnt < buflen && *index < scsi_sg_count(srb)) {
struct page *page = sg_page(sg) +
((sg->offset + *offset) >> PAGE_SHIFT);
- unsigned int poff =
- (sg->offset + *offset) & (PAGE_SIZE-1);
+ unsigned int poff = (sg->offset + *offset) &
+ (PAGE_SIZE - 1);
unsigned int sglen = sg->length - *offset;
if (sglen > buflen - cnt) {
-
/* Transfer ends within this s-g entry */
sglen = buflen - cnt;
*offset += sglen;
} else {
-
/* Transfer continues to next s-g entry */
*offset = 0;
++*index;
++sg;
}
- /* Transfer the data for all the pages in this
- * s-g entry. For each page: call kmap(), do the
- * transfer, and call kunmap() immediately after. */
while (sglen > 0) {
unsigned int plen = min(sglen, (unsigned int)
PAGE_SIZE - poff);
@@ -122,10 +125,12 @@
return cnt;
}
-/* Store the contents of buffer into srb's transfer buffer and set the
-* SCSI residue. */
+/*
+ * Store the contents of buffer into srb's transfer buffer and set the
+ * SCSI residue.
+ */
void rtsx_stor_set_xfer_buf(unsigned char *buffer,
- unsigned int buflen, struct scsi_cmnd *srb)
+ unsigned int buflen, struct scsi_cmnd *srb)
{
unsigned int index = 0, offset = 0;
@@ -136,7 +141,7 @@
}
void rtsx_stor_get_xfer_buf(unsigned char *buffer,
- unsigned int buflen, struct scsi_cmnd *srb)
+ unsigned int buflen, struct scsi_cmnd *srb)
{
unsigned int index = 0, offset = 0;
@@ -146,12 +151,12 @@
scsi_set_resid(srb, scsi_bufflen(srb) - buflen);
}
-
/***********************************************************************
* Transport routines
***********************************************************************/
-/* Invoke the transport and basic error-handling/recovery methods
+/*
+ * Invoke the transport and basic error-handling/recovery methods
*
* This is used to send the message to the device and receive the response.
*/
@@ -161,20 +166,21 @@
result = rtsx_scsi_handler(srb, chip);
- /* if the command gets aborted by the higher layers, we need to
- * short-circuit all other processing
+ /*
+ * if the command gets aborted by the higher layers, we need to
+ * short-circuit all other processing.
*/
if (rtsx_chk_stat(chip, RTSX_STAT_ABORT)) {
dev_dbg(rtsx_dev(chip), "-- command was aborted\n");
srb->result = DID_ABORT << 16;
- goto Handle_Errors;
+ goto handle_errors;
}
/* if there is a transport error, reset and don't auto-sense */
if (result == TRANSPORT_ERROR) {
dev_dbg(rtsx_dev(chip), "-- transport indicates error, resetting\n");
srb->result = DID_ERROR << 16;
- goto Handle_Errors;
+ goto handle_errors;
}
srb->result = SAM_STAT_GOOD;
@@ -188,21 +194,18 @@
/* set the result so the higher layers expect this data */
srb->result = SAM_STAT_CHECK_CONDITION;
memcpy(srb->sense_buffer,
- (unsigned char *)&(chip->sense_buffer[SCSI_LUN(srb)]),
- sizeof(struct sense_data_t));
+ (unsigned char *)&chip->sense_buffer[SCSI_LUN(srb)],
+ sizeof(struct sense_data_t));
}
return;
- /* Error and abort processing: try to resynchronize with the device
- * by issuing a port reset. If that fails, try a class-specific
- * device reset. */
-Handle_Errors:
+handle_errors:
return;
}
void rtsx_add_cmd(struct rtsx_chip *chip,
- u8 cmd_type, u16 reg_addr, u8 mask, u8 data)
+ u8 cmd_type, u16 reg_addr, u8 mask, u8 data)
{
u32 *cb = (u32 *)(chip->host_cmds_ptr);
u32 val = 0;
@@ -321,9 +324,11 @@
}
static int rtsx_transfer_sglist_adma_partial(struct rtsx_chip *chip, u8 card,
- struct scatterlist *sg, int num_sg, unsigned int *index,
- unsigned int *offset, int size,
- enum dma_data_direction dma_dir, int timeout)
+ struct scatterlist *sg, int num_sg,
+ unsigned int *index,
+ unsigned int *offset, int size,
+ enum dma_data_direction dma_dir,
+ int timeout)
{
struct rtsx_dev *rtsx = chip->rtsx;
struct completion trans_done;
@@ -334,7 +339,7 @@
struct scatterlist *sg_ptr;
u32 val = TRIG_DMA;
- if ((sg == NULL) || (num_sg <= 0) || !offset || !index)
+ if (!sg || (num_sg <= 0) || !offset || !index)
return -EIO;
if (dma_dir == DMA_TO_DEVICE)
@@ -363,15 +368,16 @@
spin_unlock_irq(&rtsx->reg_lock);
- sg_cnt = dma_map_sg(&(rtsx->pci->dev), sg, num_sg, dma_dir);
+ sg_cnt = dma_map_sg(&rtsx->pci->dev, sg, num_sg, dma_dir);
resid = size;
sg_ptr = sg;
chip->sgi = 0;
- /* Usually the next entry will be @sg@ + 1, but if this sg element
+ /*
+ * Usually the next entry will be @sg@ + 1, but if this sg element
* is part of a chained scatterlist, it could jump to the start of
* a new scatterlist array. So here we use sg_next to move to
- * the proper sg
+ * the proper sg.
*/
for (i = 0; i < *index; i++)
sg_ptr = sg_next(sg_ptr);
@@ -476,7 +482,7 @@
out:
rtsx->done = NULL;
rtsx->trans_state = STATE_TRANS_NONE;
- dma_unmap_sg(&(rtsx->pci->dev), sg, num_sg, dma_dir);
+ dma_unmap_sg(&rtsx->pci->dev, sg, num_sg, dma_dir);
if (err < 0)
rtsx_stop_cmd(chip, card);
@@ -485,8 +491,9 @@
}
static int rtsx_transfer_sglist_adma(struct rtsx_chip *chip, u8 card,
- struct scatterlist *sg, int num_sg,
- enum dma_data_direction dma_dir, int timeout)
+ struct scatterlist *sg, int num_sg,
+ enum dma_data_direction dma_dir,
+ int timeout)
{
struct rtsx_dev *rtsx = chip->rtsx;
struct completion trans_done;
@@ -496,7 +503,7 @@
long timeleft;
struct scatterlist *sg_ptr;
- if ((sg == NULL) || (num_sg <= 0))
+ if (!sg || (num_sg <= 0))
return -EIO;
if (dma_dir == DMA_TO_DEVICE)
@@ -525,7 +532,7 @@
spin_unlock_irq(&rtsx->reg_lock);
- buf_cnt = dma_map_sg(&(rtsx->pci->dev), sg, num_sg, dma_dir);
+ buf_cnt = dma_map_sg(&rtsx->pci->dev, sg, num_sg, dma_dir);
sg_ptr = sg;
@@ -623,7 +630,7 @@
out:
rtsx->done = NULL;
rtsx->trans_state = STATE_TRANS_NONE;
- dma_unmap_sg(&(rtsx->pci->dev), sg, num_sg, dma_dir);
+ dma_unmap_sg(&rtsx->pci->dev, sg, num_sg, dma_dir);
if (err < 0)
rtsx_stop_cmd(chip, card);
@@ -632,7 +639,8 @@
}
static int rtsx_transfer_buf(struct rtsx_chip *chip, u8 card, void *buf,
- size_t len, enum dma_data_direction dma_dir, int timeout)
+ size_t len, enum dma_data_direction dma_dir,
+ int timeout)
{
struct rtsx_dev *rtsx = chip->rtsx;
struct completion trans_done;
@@ -642,7 +650,7 @@
u32 val = 1 << 31;
long timeleft;
- if ((buf == NULL) || (len <= 0))
+ if (!buf || (len <= 0))
return -EIO;
if (dma_dir == DMA_TO_DEVICE)
@@ -706,7 +714,7 @@
out:
rtsx->done = NULL;
rtsx->trans_state = STATE_TRANS_NONE;
- dma_unmap_single(&(rtsx->pci->dev), addr, len, dma_dir);
+ dma_unmap_single(&rtsx->pci->dev, addr, len, dma_dir);
if (err < 0)
rtsx_stop_cmd(chip, card);
@@ -715,9 +723,9 @@
}
int rtsx_transfer_data_partial(struct rtsx_chip *chip, u8 card,
- void *buf, size_t len, int use_sg, unsigned int *index,
- unsigned int *offset, enum dma_data_direction dma_dir,
- int timeout)
+ void *buf, size_t len, int use_sg,
+ unsigned int *index, unsigned int *offset,
+ enum dma_data_direction dma_dir, int timeout)
{
int err = 0;
@@ -725,13 +733,16 @@
if (rtsx_chk_stat(chip, RTSX_STAT_ABORT))
return -EIO;
- if (use_sg)
- err = rtsx_transfer_sglist_adma_partial(chip, card,
- (struct scatterlist *)buf, use_sg,
- index, offset, (int)len, dma_dir, timeout);
- else
+ if (use_sg) {
+ struct scatterlist *sg = (struct scatterlist *)buf;
+
+ err = rtsx_transfer_sglist_adma_partial(chip, card, sg, use_sg,
+ index, offset, (int)len,
+ dma_dir, timeout);
+ } else {
err = rtsx_transfer_buf(chip, card,
buf, len, dma_dir, timeout);
+ }
if (err < 0) {
if (RTSX_TST_DELINK(chip)) {
RTSX_CLR_DELINK(chip);
@@ -744,7 +755,7 @@
}
int rtsx_transfer_data(struct rtsx_chip *chip, u8 card, void *buf, size_t len,
- int use_sg, enum dma_data_direction dma_dir, int timeout)
+ int use_sg, enum dma_data_direction dma_dir, int timeout)
{
int err = 0;
@@ -756,8 +767,8 @@
if (use_sg) {
err = rtsx_transfer_sglist_adma(chip, card,
- (struct scatterlist *)buf,
- use_sg, dma_dir, timeout);
+ (struct scatterlist *)buf,
+ use_sg, dma_dir, timeout);
} else {
err = rtsx_transfer_buf(chip, card, buf, len, dma_dir, timeout);
}
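The &(rtsx->pci->dev) cleanups above sit around the standard streaming-DMA sequence; stripped to its core, and with hypothetical names, it is:

        int mapped = dma_map_sg(&pdev->dev, sg, num_sg, DMA_FROM_DEVICE);

        if (!mapped)
                return -EIO;
        /* ... program the DMA engine with the 'mapped' segments ... */
        dma_unmap_sg(&pdev->dev, sg, num_sg, DMA_FROM_DEVICE);

dma_map_sg() returns the number of segments actually mapped (0 on failure), and the matching unmap must pass the original num_sg, not the returned count.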
diff --git a/drivers/staging/sm750fb/ddk750_chip.c b/drivers/staging/sm750fb/ddk750_chip.c
index e53a3d1..95f7cae 100644
--- a/drivers/staging/sm750fb/ddk750_chip.c
+++ b/drivers/staging/sm750fb/ddk750_chip.c
@@ -1,3 +1,4 @@
+#include <linux/kernel.h>
#include <linux/sizes.h>
#include "ddk750_help.h"
@@ -5,6 +6,10 @@
#include "ddk750_chip.h"
#include "ddk750_power.h"
+/* n / d + 1 / 2 = (2n + d) / 2d */
+#define roundedDiv(num, denom) ((2 * (num) + (denom)) / (2 * (denom)))
+#define MHz(x) ((x) * 1000000)
+
logical_chip_type_t getChipType(void)
{
unsigned short physicalID;
@@ -335,7 +340,7 @@
unsigned int diff;
tmpClock = pll->inputFreq * M / N / X;
- diff = absDiff(tmpClock, request_orig);
+ diff = abs(tmpClock - request_orig);
if (diff < mini_diff) {
pll->M = M;
pll->N = N;
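The new roundedDiv() implements round-to-nearest integer division via the identity n/d + 1/2 = (2n + d)/(2d); for example roundedDiv(7, 2) is (14 + 2)/4 = 4, where plain 7/2 truncates to 3. The kernel's DIV_ROUND_CLOSEST() in <linux/kernel.h> provides the same rounding, so a later cleanup could switch to it. The absDiff() helper from the dropped sm750_help.h is likewise replaced by the generic abs():

        diff = abs(tmpClock - request_orig);    /* as in the hunk above */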
diff --git a/drivers/staging/sm750fb/ddk750_help.h b/drivers/staging/sm750fb/ddk750_help.h
index 5be814e..009db92 100644
--- a/drivers/staging/sm750fb/ddk750_help.h
+++ b/drivers/staging/sm750fb/ddk750_help.h
@@ -6,7 +6,6 @@
#include <linux/ioport.h>
#include <linux/io.h>
#include <linux/uaccess.h>
-#include "sm750_help.h"
/* software control endianness */
#define PEEK32(addr) readl(addr + mmio750)
diff --git a/drivers/staging/sm750fb/ddk750_mode.c b/drivers/staging/sm750fb/ddk750_mode.c
index 7e57b57..ccb4e06 100644
--- a/drivers/staging/sm750fb/ddk750_mode.c
+++ b/drivers/staging/sm750fb/ddk750_mode.c
@@ -25,13 +25,12 @@
Note that normal SM750/SM718 only use those two registers for
auto-centering mode.
*/
- POKE32(CRT_AUTO_CENTERING_TL,
- FIELD_VALUE(0, CRT_AUTO_CENTERING_TL, TOP, 0)
- | FIELD_VALUE(0, CRT_AUTO_CENTERING_TL, LEFT, 0));
+ POKE32(CRT_AUTO_CENTERING_TL, 0);
POKE32(CRT_AUTO_CENTERING_BR,
- FIELD_VALUE(0, CRT_AUTO_CENTERING_BR, BOTTOM, y - 1)
- | FIELD_VALUE(0, CRT_AUTO_CENTERING_BR, RIGHT, x - 1));
+ (((y - 1) << CRT_AUTO_CENTERING_BR_BOTTOM_SHIFT) &
+ CRT_AUTO_CENTERING_BR_BOTTOM_MASK) |
+ ((x - 1) & CRT_AUTO_CENTERING_BR_RIGHT_MASK));
/* Assume common fields in dispControl have been properly set before
calling this function.
@@ -84,20 +83,32 @@
/* program secondary pixel clock */
POKE32(CRT_PLL_CTRL, formatPllReg(pll));
POKE32(CRT_HORIZONTAL_TOTAL,
- FIELD_VALUE(0, CRT_HORIZONTAL_TOTAL, TOTAL, pModeParam->horizontal_total - 1)
- | FIELD_VALUE(0, CRT_HORIZONTAL_TOTAL, DISPLAY_END, pModeParam->horizontal_display_end - 1));
+ (((pModeParam->horizontal_total - 1) <<
+ CRT_HORIZONTAL_TOTAL_TOTAL_SHIFT) &
+ CRT_HORIZONTAL_TOTAL_TOTAL_MASK) |
+ ((pModeParam->horizontal_display_end - 1) &
+ CRT_HORIZONTAL_TOTAL_DISPLAY_END_MASK));
POKE32(CRT_HORIZONTAL_SYNC,
- FIELD_VALUE(0, CRT_HORIZONTAL_SYNC, WIDTH, pModeParam->horizontal_sync_width)
- | FIELD_VALUE(0, CRT_HORIZONTAL_SYNC, START, pModeParam->horizontal_sync_start - 1));
+ ((pModeParam->horizontal_sync_width <<
+ CRT_HORIZONTAL_SYNC_WIDTH_SHIFT) &
+ CRT_HORIZONTAL_SYNC_WIDTH_MASK) |
+ ((pModeParam->horizontal_sync_start - 1) &
+ CRT_HORIZONTAL_SYNC_START_MASK));
POKE32(CRT_VERTICAL_TOTAL,
- FIELD_VALUE(0, CRT_VERTICAL_TOTAL, TOTAL, pModeParam->vertical_total - 1)
- | FIELD_VALUE(0, CRT_VERTICAL_TOTAL, DISPLAY_END, pModeParam->vertical_display_end - 1));
+ (((pModeParam->vertical_total - 1) <<
+ CRT_VERTICAL_TOTAL_TOTAL_SHIFT) &
+ CRT_VERTICAL_TOTAL_TOTAL_MASK) |
+ ((pModeParam->vertical_display_end - 1) &
+ CRT_VERTICAL_TOTAL_DISPLAY_END_MASK));
POKE32(CRT_VERTICAL_SYNC,
- FIELD_VALUE(0, CRT_VERTICAL_SYNC, HEIGHT, pModeParam->vertical_sync_height)
- | FIELD_VALUE(0, CRT_VERTICAL_SYNC, START, pModeParam->vertical_sync_start - 1));
+ ((pModeParam->vertical_sync_height <<
+ CRT_VERTICAL_SYNC_HEIGHT_SHIFT) &
+ CRT_VERTICAL_SYNC_HEIGHT_MASK) |
+ ((pModeParam->vertical_sync_start - 1) &
+ CRT_VERTICAL_SYNC_START_MASK));
tmp = DISPLAY_CTRL_TIMING | DISPLAY_CTRL_PLANE;
@@ -130,16 +141,25 @@
POKE32(PANEL_HORIZONTAL_TOTAL, reg);
POKE32(PANEL_HORIZONTAL_SYNC,
- FIELD_VALUE(0, PANEL_HORIZONTAL_SYNC, WIDTH, pModeParam->horizontal_sync_width)
- | FIELD_VALUE(0, PANEL_HORIZONTAL_SYNC, START, pModeParam->horizontal_sync_start - 1));
+ ((pModeParam->horizontal_sync_width <<
+ PANEL_HORIZONTAL_SYNC_WIDTH_SHIFT) &
+ PANEL_HORIZONTAL_SYNC_WIDTH_MASK) |
+ ((pModeParam->horizontal_sync_start - 1) &
+ PANEL_HORIZONTAL_SYNC_START_MASK));
POKE32(PANEL_VERTICAL_TOTAL,
- FIELD_VALUE(0, PANEL_VERTICAL_TOTAL, TOTAL, pModeParam->vertical_total - 1)
- | FIELD_VALUE(0, PANEL_VERTICAL_TOTAL, DISPLAY_END, pModeParam->vertical_display_end - 1));
+ (((pModeParam->vertical_total - 1) <<
+ PANEL_VERTICAL_TOTAL_TOTAL_SHIFT) &
+ PANEL_VERTICAL_TOTAL_TOTAL_MASK) |
+ ((pModeParam->vertical_display_end - 1) &
+ PANEL_VERTICAL_TOTAL_DISPLAY_END_MASK));
POKE32(PANEL_VERTICAL_SYNC,
- FIELD_VALUE(0, PANEL_VERTICAL_SYNC, HEIGHT, pModeParam->vertical_sync_height)
- | FIELD_VALUE(0, PANEL_VERTICAL_SYNC, START, pModeParam->vertical_sync_start - 1));
+ ((pModeParam->vertical_sync_height <<
+ PANEL_VERTICAL_SYNC_HEIGHT_SHIFT) &
+ PANEL_VERTICAL_SYNC_HEIGHT_MASK) |
+ ((pModeParam->vertical_sync_start - 1) &
+ PANEL_VERTICAL_SYNC_START_MASK));
tmp = DISPLAY_CTRL_TIMING | DISPLAY_CTRL_PLANE;
if (pModeParam->vertical_sync_polarity)
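The sm750fb hunks here and in ddk750_reg.h below retire the driver's nonstandard "high:low" field macros (consumed through FIELD_VALUE()) in favor of conventional shift/mask constants, with BIT(n) for single-bit fields. The composition pattern, with hypothetical macro names:

        #define REG_FIELD_SHIFT         16
        #define REG_FIELD_MASK          (0xfff << 16)

        /* insert 'v' into the field, discarding bits that do not fit */
        reg = (reg & ~REG_FIELD_MASK) |
              ((v << REG_FIELD_SHIFT) & REG_FIELD_MASK);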
diff --git a/drivers/staging/sm750fb/ddk750_reg.h b/drivers/staging/sm750fb/ddk750_reg.h
index 4702897..9552479 100644
--- a/drivers/staging/sm750fb/ddk750_reg.h
+++ b/drivers/staging/sm750fb/ddk750_reg.h
@@ -109,286 +109,187 @@
#define GPIO_MUX_0 BIT(0)
#define LOCALMEM_ARBITRATION 0x00000C
-#define LOCALMEM_ARBITRATION_ROTATE 28:28
-#define LOCALMEM_ARBITRATION_ROTATE_OFF 0
-#define LOCALMEM_ARBITRATION_ROTATE_ON 1
-#define LOCALMEM_ARBITRATION_VGA 26:24
-#define LOCALMEM_ARBITRATION_VGA_OFF 0
-#define LOCALMEM_ARBITRATION_VGA_PRIORITY_1 1
-#define LOCALMEM_ARBITRATION_VGA_PRIORITY_2 2
-#define LOCALMEM_ARBITRATION_VGA_PRIORITY_3 3
-#define LOCALMEM_ARBITRATION_VGA_PRIORITY_4 4
-#define LOCALMEM_ARBITRATION_VGA_PRIORITY_5 5
-#define LOCALMEM_ARBITRATION_VGA_PRIORITY_6 6
-#define LOCALMEM_ARBITRATION_VGA_PRIORITY_7 7
-#define LOCALMEM_ARBITRATION_DMA 22:20
-#define LOCALMEM_ARBITRATION_DMA_OFF 0
-#define LOCALMEM_ARBITRATION_DMA_PRIORITY_1 1
-#define LOCALMEM_ARBITRATION_DMA_PRIORITY_2 2
-#define LOCALMEM_ARBITRATION_DMA_PRIORITY_3 3
-#define LOCALMEM_ARBITRATION_DMA_PRIORITY_4 4
-#define LOCALMEM_ARBITRATION_DMA_PRIORITY_5 5
-#define LOCALMEM_ARBITRATION_DMA_PRIORITY_6 6
-#define LOCALMEM_ARBITRATION_DMA_PRIORITY_7 7
-#define LOCALMEM_ARBITRATION_ZVPORT1 18:16
-#define LOCALMEM_ARBITRATION_ZVPORT1_OFF 0
-#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_1 1
-#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_2 2
-#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_3 3
-#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_4 4
-#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_5 5
-#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_6 6
-#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_7 7
-#define LOCALMEM_ARBITRATION_ZVPORT0 14:12
-#define LOCALMEM_ARBITRATION_ZVPORT0_OFF 0
-#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_1 1
-#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_2 2
-#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_3 3
-#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_4 4
-#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_5 5
-#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_6 6
-#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_7 7
-#define LOCALMEM_ARBITRATION_VIDEO 10:8
-#define LOCALMEM_ARBITRATION_VIDEO_OFF 0
-#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_1 1
-#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_2 2
-#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_3 3
-#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_4 4
-#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_5 5
-#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_6 6
-#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_7 7
-#define LOCALMEM_ARBITRATION_PANEL 6:4
-#define LOCALMEM_ARBITRATION_PANEL_OFF 0
-#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_1 1
-#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_2 2
-#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_3 3
-#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_4 4
-#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_5 5
-#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_6 6
-#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_7 7
-#define LOCALMEM_ARBITRATION_CRT 2:0
-#define LOCALMEM_ARBITRATION_CRT_OFF 0
-#define LOCALMEM_ARBITRATION_CRT_PRIORITY_1 1
-#define LOCALMEM_ARBITRATION_CRT_PRIORITY_2 2
-#define LOCALMEM_ARBITRATION_CRT_PRIORITY_3 3
-#define LOCALMEM_ARBITRATION_CRT_PRIORITY_4 4
-#define LOCALMEM_ARBITRATION_CRT_PRIORITY_5 5
-#define LOCALMEM_ARBITRATION_CRT_PRIORITY_6 6
-#define LOCALMEM_ARBITRATION_CRT_PRIORITY_7 7
+#define LOCALMEM_ARBITRATION_ROTATE BIT(28)
+#define LOCALMEM_ARBITRATION_VGA_MASK (0x7 << 24)
+#define LOCALMEM_ARBITRATION_VGA_OFF (0x0 << 24)
+#define LOCALMEM_ARBITRATION_VGA_PRIORITY_1 (0x1 << 24)
+#define LOCALMEM_ARBITRATION_VGA_PRIORITY_2 (0x2 << 24)
+#define LOCALMEM_ARBITRATION_VGA_PRIORITY_3 (0x3 << 24)
+#define LOCALMEM_ARBITRATION_VGA_PRIORITY_4 (0x4 << 24)
+#define LOCALMEM_ARBITRATION_VGA_PRIORITY_5 (0x5 << 24)
+#define LOCALMEM_ARBITRATION_VGA_PRIORITY_6 (0x6 << 24)
+#define LOCALMEM_ARBITRATION_VGA_PRIORITY_7 (0x7 << 24)
+#define LOCALMEM_ARBITRATION_DMA_MASK (0x7 << 20)
+#define LOCALMEM_ARBITRATION_DMA_OFF (0x0 << 20)
+#define LOCALMEM_ARBITRATION_DMA_PRIORITY_1 (0x1 << 20)
+#define LOCALMEM_ARBITRATION_DMA_PRIORITY_2 (0x2 << 20)
+#define LOCALMEM_ARBITRATION_DMA_PRIORITY_3 (0x3 << 20)
+#define LOCALMEM_ARBITRATION_DMA_PRIORITY_4 (0x4 << 20)
+#define LOCALMEM_ARBITRATION_DMA_PRIORITY_5 (0x5 << 20)
+#define LOCALMEM_ARBITRATION_DMA_PRIORITY_6 (0x6 << 20)
+#define LOCALMEM_ARBITRATION_DMA_PRIORITY_7 (0x7 << 20)
+#define LOCALMEM_ARBITRATION_ZVPORT1_MASK (0x7 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT1_OFF (0x0 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_1 (0x1 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_2 (0x2 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_3 (0x3 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_4 (0x4 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_5 (0x5 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_6 (0x6 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT1_PRIORITY_7 (0x7 << 16)
+#define LOCALMEM_ARBITRATION_ZVPORT0_MASK (0x7 << 12)
+#define LOCALMEM_ARBITRATION_ZVPORT0_OFF (0x0 << 12)
+#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_1 (0x1 << 12)
+#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_2 (0x2 << 12)
+#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_3 (0x3 << 12)
+#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_4 (0x4 << 12)
+#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_5 (0x5 << 12)
+#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_6 (0x6 << 12)
+#define LOCALMEM_ARBITRATION_ZVPORT0_PRIORITY_7 (0x7 << 12)
+#define LOCALMEM_ARBITRATION_VIDEO_MASK (0x7 << 8)
+#define LOCALMEM_ARBITRATION_VIDEO_OFF (0x0 << 8)
+#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_1 (0x1 << 8)
+#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_2 (0x2 << 8)
+#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_3 (0x3 << 8)
+#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_4 (0x4 << 8)
+#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_5 (0x5 << 8)
+#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_6 (0x6 << 8)
+#define LOCALMEM_ARBITRATION_VIDEO_PRIORITY_7 (0x7 << 8)
+#define LOCALMEM_ARBITRATION_PANEL_MASK (0x7 << 4)
+#define LOCALMEM_ARBITRATION_PANEL_OFF (0x0 << 4)
+#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_1 (0x1 << 4)
+#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_2 (0x2 << 4)
+#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_3 (0x3 << 4)
+#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_4 (0x4 << 4)
+#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_5 (0x5 << 4)
+#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_6 (0x6 << 4)
+#define LOCALMEM_ARBITRATION_PANEL_PRIORITY_7 (0x7 << 4)
+#define LOCALMEM_ARBITRATION_CRT_MASK 0x7
+#define LOCALMEM_ARBITRATION_CRT_OFF 0x0
+#define LOCALMEM_ARBITRATION_CRT_PRIORITY_1 0x1
+#define LOCALMEM_ARBITRATION_CRT_PRIORITY_2 0x2
+#define LOCALMEM_ARBITRATION_CRT_PRIORITY_3 0x3
+#define LOCALMEM_ARBITRATION_CRT_PRIORITY_4 0x4
+#define LOCALMEM_ARBITRATION_CRT_PRIORITY_5 0x5
+#define LOCALMEM_ARBITRATION_CRT_PRIORITY_6 0x6
+#define LOCALMEM_ARBITRATION_CRT_PRIORITY_7 0x7
#define PCIMEM_ARBITRATION 0x000010
-#define PCIMEM_ARBITRATION_ROTATE 28:28
-#define PCIMEM_ARBITRATION_ROTATE_OFF 0
-#define PCIMEM_ARBITRATION_ROTATE_ON 1
-#define PCIMEM_ARBITRATION_VGA 26:24
-#define PCIMEM_ARBITRATION_VGA_OFF 0
-#define PCIMEM_ARBITRATION_VGA_PRIORITY_1 1
-#define PCIMEM_ARBITRATION_VGA_PRIORITY_2 2
-#define PCIMEM_ARBITRATION_VGA_PRIORITY_3 3
-#define PCIMEM_ARBITRATION_VGA_PRIORITY_4 4
-#define PCIMEM_ARBITRATION_VGA_PRIORITY_5 5
-#define PCIMEM_ARBITRATION_VGA_PRIORITY_6 6
-#define PCIMEM_ARBITRATION_VGA_PRIORITY_7 7
-#define PCIMEM_ARBITRATION_DMA 22:20
-#define PCIMEM_ARBITRATION_DMA_OFF 0
-#define PCIMEM_ARBITRATION_DMA_PRIORITY_1 1
-#define PCIMEM_ARBITRATION_DMA_PRIORITY_2 2
-#define PCIMEM_ARBITRATION_DMA_PRIORITY_3 3
-#define PCIMEM_ARBITRATION_DMA_PRIORITY_4 4
-#define PCIMEM_ARBITRATION_DMA_PRIORITY_5 5
-#define PCIMEM_ARBITRATION_DMA_PRIORITY_6 6
-#define PCIMEM_ARBITRATION_DMA_PRIORITY_7 7
-#define PCIMEM_ARBITRATION_ZVPORT1 18:16
-#define PCIMEM_ARBITRATION_ZVPORT1_OFF 0
-#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_1 1
-#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_2 2
-#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_3 3
-#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_4 4
-#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_5 5
-#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_6 6
-#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_7 7
-#define PCIMEM_ARBITRATION_ZVPORT0 14:12
-#define PCIMEM_ARBITRATION_ZVPORT0_OFF 0
-#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_1 1
-#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_2 2
-#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_3 3
-#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_4 4
-#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_5 5
-#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_6 6
-#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_7 7
-#define PCIMEM_ARBITRATION_VIDEO 10:8
-#define PCIMEM_ARBITRATION_VIDEO_OFF 0
-#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_1 1
-#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_2 2
-#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_3 3
-#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_4 4
-#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_5 5
-#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_6 6
-#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_7 7
-#define PCIMEM_ARBITRATION_PANEL 6:4
-#define PCIMEM_ARBITRATION_PANEL_OFF 0
-#define PCIMEM_ARBITRATION_PANEL_PRIORITY_1 1
-#define PCIMEM_ARBITRATION_PANEL_PRIORITY_2 2
-#define PCIMEM_ARBITRATION_PANEL_PRIORITY_3 3
-#define PCIMEM_ARBITRATION_PANEL_PRIORITY_4 4
-#define PCIMEM_ARBITRATION_PANEL_PRIORITY_5 5
-#define PCIMEM_ARBITRATION_PANEL_PRIORITY_6 6
-#define PCIMEM_ARBITRATION_PANEL_PRIORITY_7 7
-#define PCIMEM_ARBITRATION_CRT 2:0
-#define PCIMEM_ARBITRATION_CRT_OFF 0
-#define PCIMEM_ARBITRATION_CRT_PRIORITY_1 1
-#define PCIMEM_ARBITRATION_CRT_PRIORITY_2 2
-#define PCIMEM_ARBITRATION_CRT_PRIORITY_3 3
-#define PCIMEM_ARBITRATION_CRT_PRIORITY_4 4
-#define PCIMEM_ARBITRATION_CRT_PRIORITY_5 5
-#define PCIMEM_ARBITRATION_CRT_PRIORITY_6 6
-#define PCIMEM_ARBITRATION_CRT_PRIORITY_7 7
+#define PCIMEM_ARBITRATION_ROTATE BIT(28)
+#define PCIMEM_ARBITRATION_VGA_MASK (0x7 << 24)
+#define PCIMEM_ARBITRATION_VGA_OFF (0x0 << 24)
+#define PCIMEM_ARBITRATION_VGA_PRIORITY_1 (0x1 << 24)
+#define PCIMEM_ARBITRATION_VGA_PRIORITY_2 (0x2 << 24)
+#define PCIMEM_ARBITRATION_VGA_PRIORITY_3 (0x3 << 24)
+#define PCIMEM_ARBITRATION_VGA_PRIORITY_4 (0x4 << 24)
+#define PCIMEM_ARBITRATION_VGA_PRIORITY_5 (0x5 << 24)
+#define PCIMEM_ARBITRATION_VGA_PRIORITY_6 (0x6 << 24)
+#define PCIMEM_ARBITRATION_VGA_PRIORITY_7 (0x7 << 24)
+#define PCIMEM_ARBITRATION_DMA_MASK (0x7 << 20)
+#define PCIMEM_ARBITRATION_DMA_OFF (0x0 << 20)
+#define PCIMEM_ARBITRATION_DMA_PRIORITY_1 (0x1 << 20)
+#define PCIMEM_ARBITRATION_DMA_PRIORITY_2 (0x2 << 20)
+#define PCIMEM_ARBITRATION_DMA_PRIORITY_3 (0x3 << 20)
+#define PCIMEM_ARBITRATION_DMA_PRIORITY_4 (0x4 << 20)
+#define PCIMEM_ARBITRATION_DMA_PRIORITY_5 (0x5 << 20)
+#define PCIMEM_ARBITRATION_DMA_PRIORITY_6 (0x6 << 20)
+#define PCIMEM_ARBITRATION_DMA_PRIORITY_7 (0x7 << 20)
+#define PCIMEM_ARBITRATION_ZVPORT1_MASK (0x7 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT1_OFF (0x0 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_1 (0x1 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_2 (0x2 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_3 (0x3 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_4 (0x4 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_5 (0x5 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_6 (0x6 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT1_PRIORITY_7 (0x7 << 16)
+#define PCIMEM_ARBITRATION_ZVPORT0_MASK (0x7 << 12)
+#define PCIMEM_ARBITRATION_ZVPORT0_OFF (0x0 << 12)
+#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_1 (0x1 << 12)
+#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_2 (0x2 << 12)
+#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_3 (0x3 << 12)
+#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_4 (0x4 << 12)
+#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_5 (0x5 << 12)
+#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_6 (0x6 << 12)
+#define PCIMEM_ARBITRATION_ZVPORT0_PRIORITY_7 (0x7 << 12)
+#define PCIMEM_ARBITRATION_VIDEO_MASK (0x7 << 8)
+#define PCIMEM_ARBITRATION_VIDEO_OFF (0x0 << 8)
+#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_1 (0x1 << 8)
+#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_2 (0x2 << 8)
+#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_3 (0x3 << 8)
+#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_4 (0x4 << 8)
+#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_5 (0x5 << 8)
+#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_6 (0x6 << 8)
+#define PCIMEM_ARBITRATION_VIDEO_PRIORITY_7 (0x7 << 8)
+#define PCIMEM_ARBITRATION_PANEL_MASK (0x7 << 4)
+#define PCIMEM_ARBITRATION_PANEL_OFF (0x0 << 4)
+#define PCIMEM_ARBITRATION_PANEL_PRIORITY_1 (0x1 << 4)
+#define PCIMEM_ARBITRATION_PANEL_PRIORITY_2 (0x2 << 4)
+#define PCIMEM_ARBITRATION_PANEL_PRIORITY_3 (0x3 << 4)
+#define PCIMEM_ARBITRATION_PANEL_PRIORITY_4 (0x4 << 4)
+#define PCIMEM_ARBITRATION_PANEL_PRIORITY_5 (0x5 << 4)
+#define PCIMEM_ARBITRATION_PANEL_PRIORITY_6 (0x6 << 4)
+#define PCIMEM_ARBITRATION_PANEL_PRIORITY_7 (0x7 << 4)
+#define PCIMEM_ARBITRATION_CRT_MASK 0x7
+#define PCIMEM_ARBITRATION_CRT_OFF 0x0
+#define PCIMEM_ARBITRATION_CRT_PRIORITY_1 0x1
+#define PCIMEM_ARBITRATION_CRT_PRIORITY_2 0x2
+#define PCIMEM_ARBITRATION_CRT_PRIORITY_3 0x3
+#define PCIMEM_ARBITRATION_CRT_PRIORITY_4 0x4
+#define PCIMEM_ARBITRATION_CRT_PRIORITY_5 0x5
+#define PCIMEM_ARBITRATION_CRT_PRIORITY_6 0x6
+#define PCIMEM_ARBITRATION_CRT_PRIORITY_7 0x7
#define RAW_INT 0x000020
-#define RAW_INT_ZVPORT1_VSYNC 4:4
-#define RAW_INT_ZVPORT1_VSYNC_INACTIVE 0
-#define RAW_INT_ZVPORT1_VSYNC_ACTIVE 1
-#define RAW_INT_ZVPORT1_VSYNC_CLEAR 1
-#define RAW_INT_ZVPORT0_VSYNC 3:3
-#define RAW_INT_ZVPORT0_VSYNC_INACTIVE 0
-#define RAW_INT_ZVPORT0_VSYNC_ACTIVE 1
-#define RAW_INT_ZVPORT0_VSYNC_CLEAR 1
-#define RAW_INT_CRT_VSYNC 2:2
-#define RAW_INT_CRT_VSYNC_INACTIVE 0
-#define RAW_INT_CRT_VSYNC_ACTIVE 1
-#define RAW_INT_CRT_VSYNC_CLEAR 1
-#define RAW_INT_PANEL_VSYNC 1:1
-#define RAW_INT_PANEL_VSYNC_INACTIVE 0
-#define RAW_INT_PANEL_VSYNC_ACTIVE 1
-#define RAW_INT_PANEL_VSYNC_CLEAR 1
-#define RAW_INT_VGA_VSYNC 0:0
-#define RAW_INT_VGA_VSYNC_INACTIVE 0
-#define RAW_INT_VGA_VSYNC_ACTIVE 1
-#define RAW_INT_VGA_VSYNC_CLEAR 1
+#define RAW_INT_ZVPORT1_VSYNC BIT(4)
+#define RAW_INT_ZVPORT0_VSYNC BIT(3)
+#define RAW_INT_CRT_VSYNC BIT(2)
+#define RAW_INT_PANEL_VSYNC BIT(1)
+#define RAW_INT_VGA_VSYNC BIT(0)
#define INT_STATUS 0x000024
-#define INT_STATUS_GPIO31 31:31
-#define INT_STATUS_GPIO31_INACTIVE 0
-#define INT_STATUS_GPIO31_ACTIVE 1
-#define INT_STATUS_GPIO30 30:30
-#define INT_STATUS_GPIO30_INACTIVE 0
-#define INT_STATUS_GPIO30_ACTIVE 1
-#define INT_STATUS_GPIO29 29:29
-#define INT_STATUS_GPIO29_INACTIVE 0
-#define INT_STATUS_GPIO29_ACTIVE 1
-#define INT_STATUS_GPIO28 28:28
-#define INT_STATUS_GPIO28_INACTIVE 0
-#define INT_STATUS_GPIO28_ACTIVE 1
-#define INT_STATUS_GPIO27 27:27
-#define INT_STATUS_GPIO27_INACTIVE 0
-#define INT_STATUS_GPIO27_ACTIVE 1
-#define INT_STATUS_GPIO26 26:26
-#define INT_STATUS_GPIO26_INACTIVE 0
-#define INT_STATUS_GPIO26_ACTIVE 1
-#define INT_STATUS_GPIO25 25:25
-#define INT_STATUS_GPIO25_INACTIVE 0
-#define INT_STATUS_GPIO25_ACTIVE 1
-#define INT_STATUS_I2C 12:12
-#define INT_STATUS_I2C_INACTIVE 0
-#define INT_STATUS_I2C_ACTIVE 1
-#define INT_STATUS_PWM 11:11
-#define INT_STATUS_PWM_INACTIVE 0
-#define INT_STATUS_PWM_ACTIVE 1
-#define INT_STATUS_DMA1 10:10
-#define INT_STATUS_DMA1_INACTIVE 0
-#define INT_STATUS_DMA1_ACTIVE 1
-#define INT_STATUS_DMA0 9:9
-#define INT_STATUS_DMA0_INACTIVE 0
-#define INT_STATUS_DMA0_ACTIVE 1
-#define INT_STATUS_PCI 8:8
-#define INT_STATUS_PCI_INACTIVE 0
-#define INT_STATUS_PCI_ACTIVE 1
-#define INT_STATUS_SSP1 7:7
-#define INT_STATUS_SSP1_INACTIVE 0
-#define INT_STATUS_SSP1_ACTIVE 1
-#define INT_STATUS_SSP0 6:6
-#define INT_STATUS_SSP0_INACTIVE 0
-#define INT_STATUS_SSP0_ACTIVE 1
-#define INT_STATUS_DE 5:5
-#define INT_STATUS_DE_INACTIVE 0
-#define INT_STATUS_DE_ACTIVE 1
-#define INT_STATUS_ZVPORT1_VSYNC 4:4
-#define INT_STATUS_ZVPORT1_VSYNC_INACTIVE 0
-#define INT_STATUS_ZVPORT1_VSYNC_ACTIVE 1
-#define INT_STATUS_ZVPORT0_VSYNC 3:3
-#define INT_STATUS_ZVPORT0_VSYNC_INACTIVE 0
-#define INT_STATUS_ZVPORT0_VSYNC_ACTIVE 1
-#define INT_STATUS_CRT_VSYNC 2:2
-#define INT_STATUS_CRT_VSYNC_INACTIVE 0
-#define INT_STATUS_CRT_VSYNC_ACTIVE 1
-#define INT_STATUS_PANEL_VSYNC 1:1
-#define INT_STATUS_PANEL_VSYNC_INACTIVE 0
-#define INT_STATUS_PANEL_VSYNC_ACTIVE 1
-#define INT_STATUS_VGA_VSYNC 0:0
-#define INT_STATUS_VGA_VSYNC_INACTIVE 0
-#define INT_STATUS_VGA_VSYNC_ACTIVE 1
+#define INT_STATUS_GPIO31 BIT(31)
+#define INT_STATUS_GPIO30 BIT(30)
+#define INT_STATUS_GPIO29 BIT(29)
+#define INT_STATUS_GPIO28 BIT(28)
+#define INT_STATUS_GPIO27 BIT(27)
+#define INT_STATUS_GPIO26 BIT(26)
+#define INT_STATUS_GPIO25 BIT(25)
+#define INT_STATUS_I2C BIT(12)
+#define INT_STATUS_PWM BIT(11)
+#define INT_STATUS_DMA1 BIT(10)
+#define INT_STATUS_DMA0 BIT(9)
+#define INT_STATUS_PCI BIT(8)
+#define INT_STATUS_SSP1 BIT(7)
+#define INT_STATUS_SSP0 BIT(6)
+#define INT_STATUS_DE BIT(5)
+#define INT_STATUS_ZVPORT1_VSYNC BIT(4)
+#define INT_STATUS_ZVPORT0_VSYNC BIT(3)
+#define INT_STATUS_CRT_VSYNC BIT(2)
+#define INT_STATUS_PANEL_VSYNC BIT(1)
+#define INT_STATUS_VGA_VSYNC BIT(0)
#define INT_MASK 0x000028
-#define INT_MASK_GPIO31 31:31
-#define INT_MASK_GPIO31_DISABLE 0
-#define INT_MASK_GPIO31_ENABLE 1
-#define INT_MASK_GPIO30 30:30
-#define INT_MASK_GPIO30_DISABLE 0
-#define INT_MASK_GPIO30_ENABLE 1
-#define INT_MASK_GPIO29 29:29
-#define INT_MASK_GPIO29_DISABLE 0
-#define INT_MASK_GPIO29_ENABLE 1
-#define INT_MASK_GPIO28 28:28
-#define INT_MASK_GPIO28_DISABLE 0
-#define INT_MASK_GPIO28_ENABLE 1
-#define INT_MASK_GPIO27 27:27
-#define INT_MASK_GPIO27_DISABLE 0
-#define INT_MASK_GPIO27_ENABLE 1
-#define INT_MASK_GPIO26 26:26
-#define INT_MASK_GPIO26_DISABLE 0
-#define INT_MASK_GPIO26_ENABLE 1
-#define INT_MASK_GPIO25 25:25
-#define INT_MASK_GPIO25_DISABLE 0
-#define INT_MASK_GPIO25_ENABLE 1
-#define INT_MASK_I2C 12:12
-#define INT_MASK_I2C_DISABLE 0
-#define INT_MASK_I2C_ENABLE 1
-#define INT_MASK_PWM 11:11
-#define INT_MASK_PWM_DISABLE 0
-#define INT_MASK_PWM_ENABLE 1
-#define INT_MASK_DMA1 10:10
-#define INT_MASK_DMA1_DISABLE 0
-#define INT_MASK_DMA1_ENABLE 1
-#define INT_MASK_DMA 9:9
-#define INT_MASK_DMA_DISABLE 0
-#define INT_MASK_DMA_ENABLE 1
-#define INT_MASK_PCI 8:8
-#define INT_MASK_PCI_DISABLE 0
-#define INT_MASK_PCI_ENABLE 1
-#define INT_MASK_SSP1 7:7
-#define INT_MASK_SSP1_DISABLE 0
-#define INT_MASK_SSP1_ENABLE 1
-#define INT_MASK_SSP0 6:6
-#define INT_MASK_SSP0_DISABLE 0
-#define INT_MASK_SSP0_ENABLE 1
-#define INT_MASK_DE 5:5
-#define INT_MASK_DE_DISABLE 0
-#define INT_MASK_DE_ENABLE 1
-#define INT_MASK_ZVPORT1_VSYNC 4:4
-#define INT_MASK_ZVPORT1_VSYNC_DISABLE 0
-#define INT_MASK_ZVPORT1_VSYNC_ENABLE 1
-#define INT_MASK_ZVPORT0_VSYNC 3:3
-#define INT_MASK_ZVPORT0_VSYNC_DISABLE 0
-#define INT_MASK_ZVPORT0_VSYNC_ENABLE 1
-#define INT_MASK_CRT_VSYNC 2:2
-#define INT_MASK_CRT_VSYNC_DISABLE 0
-#define INT_MASK_CRT_VSYNC_ENABLE 1
-#define INT_MASK_PANEL_VSYNC 1:1
-#define INT_MASK_PANEL_VSYNC_DISABLE 0
-#define INT_MASK_PANEL_VSYNC_ENABLE 1
-#define INT_MASK_VGA_VSYNC 0:0
-#define INT_MASK_VGA_VSYNC_DISABLE 0
-#define INT_MASK_VGA_VSYNC_ENABLE 1
+#define INT_MASK_GPIO31 BIT(31)
+#define INT_MASK_GPIO30 BIT(30)
+#define INT_MASK_GPIO29 BIT(29)
+#define INT_MASK_GPIO28 BIT(28)
+#define INT_MASK_GPIO27 BIT(27)
+#define INT_MASK_GPIO26 BIT(26)
+#define INT_MASK_GPIO25 BIT(25)
+#define INT_MASK_I2C BIT(12)
+#define INT_MASK_PWM BIT(11)
+#define INT_MASK_DMA1 BIT(10)
+#define INT_MASK_DMA BIT(9)
+#define INT_MASK_PCI BIT(8)
+#define INT_MASK_SSP1 BIT(7)
+#define INT_MASK_SSP0 BIT(6)
+#define INT_MASK_DE BIT(5)
+#define INT_MASK_ZVPORT1_VSYNC BIT(4)
+#define INT_MASK_ZVPORT0_VSYNC BIT(3)
+#define INT_MASK_CRT_VSYNC BIT(2)
+#define INT_MASK_PANEL_VSYNC BIT(1)
+#define INT_MASK_VGA_VSYNC BIT(0)
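With the fields expressed as BIT() masks, enabling an interrupt source becomes a
plain read-modify-write rather than bitfield arithmetic. A minimal sketch of
unmasking the panel vsync interrupt — peek32()/poke32() are hypothetical MMIO
accessors used for illustration, not part of this header:

	static void enable_panel_vsync_irq(void)
	{
		u32 mask = peek32(INT_MASK);	/* current enable bits */

		mask |= INT_MASK_PANEL_VSYNC;	/* unmask panel vsync only */
		poke32(INT_MASK, mask);
	}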
#define CURRENT_GATE 0x000040
#define CURRENT_GATE_MCLK_MASK (0x3 << 14)
@@ -451,49 +352,27 @@
#define MODE0_GATE_DMA BIT(0)
#define MODE1_GATE 0x000048
-#define MODE1_GATE_MCLK 15:14
-#define MODE1_GATE_MCLK_112MHZ 0
-#define MODE1_GATE_MCLK_84MHZ 1
-#define MODE1_GATE_MCLK_56MHZ 2
-#define MODE1_GATE_MCLK_42MHZ 3
-#define MODE1_GATE_M2XCLK 13:12
-#define MODE1_GATE_M2XCLK_336MHZ 0
-#define MODE1_GATE_M2XCLK_168MHZ 1
-#define MODE1_GATE_M2XCLK_112MHZ 2
-#define MODE1_GATE_M2XCLK_84MHZ 3
-#define MODE1_GATE_VGA 10:10
-#define MODE1_GATE_VGA_OFF 0
-#define MODE1_GATE_VGA_ON 1
-#define MODE1_GATE_PWM 9:9
-#define MODE1_GATE_PWM_OFF 0
-#define MODE1_GATE_PWM_ON 1
-#define MODE1_GATE_I2C 8:8
-#define MODE1_GATE_I2C_OFF 0
-#define MODE1_GATE_I2C_ON 1
-#define MODE1_GATE_SSP 7:7
-#define MODE1_GATE_SSP_OFF 0
-#define MODE1_GATE_SSP_ON 1
-#define MODE1_GATE_GPIO 6:6
-#define MODE1_GATE_GPIO_OFF 0
-#define MODE1_GATE_GPIO_ON 1
-#define MODE1_GATE_ZVPORT 5:5
-#define MODE1_GATE_ZVPORT_OFF 0
-#define MODE1_GATE_ZVPORT_ON 1
-#define MODE1_GATE_CSC 4:4
-#define MODE1_GATE_CSC_OFF 0
-#define MODE1_GATE_CSC_ON 1
-#define MODE1_GATE_DE 3:3
-#define MODE1_GATE_DE_OFF 0
-#define MODE1_GATE_DE_ON 1
-#define MODE1_GATE_DISPLAY 2:2
-#define MODE1_GATE_DISPLAY_OFF 0
-#define MODE1_GATE_DISPLAY_ON 1
-#define MODE1_GATE_LOCALMEM 1:1
-#define MODE1_GATE_LOCALMEM_OFF 0
-#define MODE1_GATE_LOCALMEM_ON 1
-#define MODE1_GATE_DMA 0:0
-#define MODE1_GATE_DMA_OFF 0
-#define MODE1_GATE_DMA_ON 1
+#define MODE1_GATE_MCLK_MASK (0x3 << 14)
+#define MODE1_GATE_MCLK_112MHZ (0x0 << 14)
+#define MODE1_GATE_MCLK_84MHZ (0x1 << 14)
+#define MODE1_GATE_MCLK_56MHZ (0x2 << 14)
+#define MODE1_GATE_MCLK_42MHZ (0x3 << 14)
+#define MODE1_GATE_M2XCLK_MASK (0x3 << 12)
+#define MODE1_GATE_M2XCLK_336MHZ (0x0 << 12)
+#define MODE1_GATE_M2XCLK_168MHZ (0x1 << 12)
+#define MODE1_GATE_M2XCLK_112MHZ (0x2 << 12)
+#define MODE1_GATE_M2XCLK_84MHZ (0x3 << 12)
+#define MODE1_GATE_VGA BIT(10)
+#define MODE1_GATE_PWM BIT(9)
+#define MODE1_GATE_I2C BIT(8)
+#define MODE1_GATE_SSP BIT(7)
+#define MODE1_GATE_GPIO BIT(6)
+#define MODE1_GATE_ZVPORT BIT(5)
+#define MODE1_GATE_CSC BIT(4)
+#define MODE1_GATE_DE BIT(3)
+#define MODE1_GATE_DISPLAY BIT(2)
+#define MODE1_GATE_LOCALMEM BIT(1)
+#define MODE1_GATE_DMA BIT(0)
#define POWER_MODE_CTRL 0x00004C
#ifdef VALIDATION_CHIP
@@ -507,14 +386,14 @@
#define POWER_MODE_CTRL_MODE_SLEEP (0x2 << 0)
#define PCI_MASTER_BASE 0x000050
-#define PCI_MASTER_BASE_ADDRESS 7:0
+#define PCI_MASTER_BASE_ADDRESS_MASK 0xff
#define DEVICE_ID 0x000054
-#define DEVICE_ID_DEVICE_ID 31:16
-#define DEVICE_ID_REVISION_ID 7:0
+#define DEVICE_ID_DEVICE_ID_MASK (0xffff << 16)
+#define DEVICE_ID_REVISION_ID_MASK 0xff
#define PLL_CLK_COUNT 0x000058
-#define PLL_CLK_COUNT_COUNTER 15:0
+#define PLL_CLK_COUNT_COUNTER_MASK 0xffff
#define PANEL_PLL_CTRL 0x00005C
#define PLL_CTRL_BYPASS BIT(18)
@@ -554,231 +433,104 @@
#endif
#define GPIO_DATA 0x010000
-#define GPIO_DATA_31 31:31
-#define GPIO_DATA_30 30:30
-#define GPIO_DATA_29 29:29
-#define GPIO_DATA_28 28:28
-#define GPIO_DATA_27 27:27
-#define GPIO_DATA_26 26:26
-#define GPIO_DATA_25 25:25
-#define GPIO_DATA_24 24:24
-#define GPIO_DATA_23 23:23
-#define GPIO_DATA_22 22:22
-#define GPIO_DATA_21 21:21
-#define GPIO_DATA_20 20:20
-#define GPIO_DATA_19 19:19
-#define GPIO_DATA_18 18:18
-#define GPIO_DATA_17 17:17
-#define GPIO_DATA_16 16:16
-#define GPIO_DATA_15 15:15
-#define GPIO_DATA_14 14:14
-#define GPIO_DATA_13 13:13
-#define GPIO_DATA_12 12:12
-#define GPIO_DATA_11 11:11
-#define GPIO_DATA_10 10:10
-#define GPIO_DATA_9 9:9
-#define GPIO_DATA_8 8:8
-#define GPIO_DATA_7 7:7
-#define GPIO_DATA_6 6:6
-#define GPIO_DATA_5 5:5
-#define GPIO_DATA_4 4:4
-#define GPIO_DATA_3 3:3
-#define GPIO_DATA_2 2:2
-#define GPIO_DATA_1 1:1
-#define GPIO_DATA_0 0:0
+#define GPIO_DATA_31 BIT(31)
+#define GPIO_DATA_30 BIT(30)
+#define GPIO_DATA_29 BIT(29)
+#define GPIO_DATA_28 BIT(28)
+#define GPIO_DATA_27 BIT(27)
+#define GPIO_DATA_26 BIT(26)
+#define GPIO_DATA_25 BIT(25)
+#define GPIO_DATA_24 BIT(24)
+#define GPIO_DATA_23 BIT(23)
+#define GPIO_DATA_22 BIT(22)
+#define GPIO_DATA_21 BIT(21)
+#define GPIO_DATA_20 BIT(20)
+#define GPIO_DATA_19 BIT(19)
+#define GPIO_DATA_18 BIT(18)
+#define GPIO_DATA_17 BIT(17)
+#define GPIO_DATA_16 BIT(16)
+#define GPIO_DATA_15 BIT(15)
+#define GPIO_DATA_14 BIT(14)
+#define GPIO_DATA_13 BIT(13)
+#define GPIO_DATA_12 BIT(12)
+#define GPIO_DATA_11 BIT(11)
+#define GPIO_DATA_10 BIT(10)
+#define GPIO_DATA_9 BIT(9)
+#define GPIO_DATA_8 BIT(8)
+#define GPIO_DATA_7 BIT(7)
+#define GPIO_DATA_6 BIT(6)
+#define GPIO_DATA_5 BIT(5)
+#define GPIO_DATA_4 BIT(4)
+#define GPIO_DATA_3 BIT(3)
+#define GPIO_DATA_2 BIT(2)
+#define GPIO_DATA_1 BIT(1)
+#define GPIO_DATA_0 BIT(0)
#define GPIO_DATA_DIRECTION 0x010004
-#define GPIO_DATA_DIRECTION_31 31:31
-#define GPIO_DATA_DIRECTION_31_INPUT 0
-#define GPIO_DATA_DIRECTION_31_OUTPUT 1
-#define GPIO_DATA_DIRECTION_30 30:30
-#define GPIO_DATA_DIRECTION_30_INPUT 0
-#define GPIO_DATA_DIRECTION_30_OUTPUT 1
-#define GPIO_DATA_DIRECTION_29 29:29
-#define GPIO_DATA_DIRECTION_29_INPUT 0
-#define GPIO_DATA_DIRECTION_29_OUTPUT 1
-#define GPIO_DATA_DIRECTION_28 28:28
-#define GPIO_DATA_DIRECTION_28_INPUT 0
-#define GPIO_DATA_DIRECTION_28_OUTPUT 1
-#define GPIO_DATA_DIRECTION_27 27:27
-#define GPIO_DATA_DIRECTION_27_INPUT 0
-#define GPIO_DATA_DIRECTION_27_OUTPUT 1
-#define GPIO_DATA_DIRECTION_26 26:26
-#define GPIO_DATA_DIRECTION_26_INPUT 0
-#define GPIO_DATA_DIRECTION_26_OUTPUT 1
-#define GPIO_DATA_DIRECTION_25 25:25
-#define GPIO_DATA_DIRECTION_25_INPUT 0
-#define GPIO_DATA_DIRECTION_25_OUTPUT 1
-#define GPIO_DATA_DIRECTION_24 24:24
-#define GPIO_DATA_DIRECTION_24_INPUT 0
-#define GPIO_DATA_DIRECTION_24_OUTPUT 1
-#define GPIO_DATA_DIRECTION_23 23:23
-#define GPIO_DATA_DIRECTION_23_INPUT 0
-#define GPIO_DATA_DIRECTION_23_OUTPUT 1
-#define GPIO_DATA_DIRECTION_22 22:22
-#define GPIO_DATA_DIRECTION_22_INPUT 0
-#define GPIO_DATA_DIRECTION_22_OUTPUT 1
-#define GPIO_DATA_DIRECTION_21 21:21
-#define GPIO_DATA_DIRECTION_21_INPUT 0
-#define GPIO_DATA_DIRECTION_21_OUTPUT 1
-#define GPIO_DATA_DIRECTION_20 20:20
-#define GPIO_DATA_DIRECTION_20_INPUT 0
-#define GPIO_DATA_DIRECTION_20_OUTPUT 1
-#define GPIO_DATA_DIRECTION_19 19:19
-#define GPIO_DATA_DIRECTION_19_INPUT 0
-#define GPIO_DATA_DIRECTION_19_OUTPUT 1
-#define GPIO_DATA_DIRECTION_18 18:18
-#define GPIO_DATA_DIRECTION_18_INPUT 0
-#define GPIO_DATA_DIRECTION_18_OUTPUT 1
-#define GPIO_DATA_DIRECTION_17 17:17
-#define GPIO_DATA_DIRECTION_17_INPUT 0
-#define GPIO_DATA_DIRECTION_17_OUTPUT 1
-#define GPIO_DATA_DIRECTION_16 16:16
-#define GPIO_DATA_DIRECTION_16_INPUT 0
-#define GPIO_DATA_DIRECTION_16_OUTPUT 1
-#define GPIO_DATA_DIRECTION_15 15:15
-#define GPIO_DATA_DIRECTION_15_INPUT 0
-#define GPIO_DATA_DIRECTION_15_OUTPUT 1
-#define GPIO_DATA_DIRECTION_14 14:14
-#define GPIO_DATA_DIRECTION_14_INPUT 0
-#define GPIO_DATA_DIRECTION_14_OUTPUT 1
-#define GPIO_DATA_DIRECTION_13 13:13
-#define GPIO_DATA_DIRECTION_13_INPUT 0
-#define GPIO_DATA_DIRECTION_13_OUTPUT 1
-#define GPIO_DATA_DIRECTION_12 12:12
-#define GPIO_DATA_DIRECTION_12_INPUT 0
-#define GPIO_DATA_DIRECTION_12_OUTPUT 1
-#define GPIO_DATA_DIRECTION_11 11:11
-#define GPIO_DATA_DIRECTION_11_INPUT 0
-#define GPIO_DATA_DIRECTION_11_OUTPUT 1
-#define GPIO_DATA_DIRECTION_10 10:10
-#define GPIO_DATA_DIRECTION_10_INPUT 0
-#define GPIO_DATA_DIRECTION_10_OUTPUT 1
-#define GPIO_DATA_DIRECTION_9 9:9
-#define GPIO_DATA_DIRECTION_9_INPUT 0
-#define GPIO_DATA_DIRECTION_9_OUTPUT 1
-#define GPIO_DATA_DIRECTION_8 8:8
-#define GPIO_DATA_DIRECTION_8_INPUT 0
-#define GPIO_DATA_DIRECTION_8_OUTPUT 1
-#define GPIO_DATA_DIRECTION_7 7:7
-#define GPIO_DATA_DIRECTION_7_INPUT 0
-#define GPIO_DATA_DIRECTION_7_OUTPUT 1
-#define GPIO_DATA_DIRECTION_6 6:6
-#define GPIO_DATA_DIRECTION_6_INPUT 0
-#define GPIO_DATA_DIRECTION_6_OUTPUT 1
-#define GPIO_DATA_DIRECTION_5 5:5
-#define GPIO_DATA_DIRECTION_5_INPUT 0
-#define GPIO_DATA_DIRECTION_5_OUTPUT 1
-#define GPIO_DATA_DIRECTION_4 4:4
-#define GPIO_DATA_DIRECTION_4_INPUT 0
-#define GPIO_DATA_DIRECTION_4_OUTPUT 1
-#define GPIO_DATA_DIRECTION_3 3:3
-#define GPIO_DATA_DIRECTION_3_INPUT 0
-#define GPIO_DATA_DIRECTION_3_OUTPUT 1
-#define GPIO_DATA_DIRECTION_2 2:2
-#define GPIO_DATA_DIRECTION_2_INPUT 0
-#define GPIO_DATA_DIRECTION_2_OUTPUT 1
-#define GPIO_DATA_DIRECTION_1 1:1
-#define GPIO_DATA_DIRECTION_1_INPUT 0
-#define GPIO_DATA_DIRECTION_1_OUTPUT 1
-#define GPIO_DATA_DIRECTION_0 0:0
-#define GPIO_DATA_DIRECTION_0_INPUT 0
-#define GPIO_DATA_DIRECTION_0_OUTPUT 1
+#define GPIO_DATA_DIRECTION_31 BIT(31)
+#define GPIO_DATA_DIRECTION_30 BIT(30)
+#define GPIO_DATA_DIRECTION_29 BIT(29)
+#define GPIO_DATA_DIRECTION_28 BIT(28)
+#define GPIO_DATA_DIRECTION_27 BIT(27)
+#define GPIO_DATA_DIRECTION_26 BIT(26)
+#define GPIO_DATA_DIRECTION_25 BIT(25)
+#define GPIO_DATA_DIRECTION_24 BIT(24)
+#define GPIO_DATA_DIRECTION_23 BIT(23)
+#define GPIO_DATA_DIRECTION_22 BIT(22)
+#define GPIO_DATA_DIRECTION_21 BIT(21)
+#define GPIO_DATA_DIRECTION_20 BIT(20)
+#define GPIO_DATA_DIRECTION_19 BIT(19)
+#define GPIO_DATA_DIRECTION_18 BIT(18)
+#define GPIO_DATA_DIRECTION_17 BIT(17)
+#define GPIO_DATA_DIRECTION_16 BIT(16)
+#define GPIO_DATA_DIRECTION_15 BIT(15)
+#define GPIO_DATA_DIRECTION_14 BIT(14)
+#define GPIO_DATA_DIRECTION_13 BIT(13)
+#define GPIO_DATA_DIRECTION_12 BIT(12)
+#define GPIO_DATA_DIRECTION_11 BIT(11)
+#define GPIO_DATA_DIRECTION_10 BIT(10)
+#define GPIO_DATA_DIRECTION_9 BIT(9)
+#define GPIO_DATA_DIRECTION_8 BIT(8)
+#define GPIO_DATA_DIRECTION_7 BIT(7)
+#define GPIO_DATA_DIRECTION_6 BIT(6)
+#define GPIO_DATA_DIRECTION_5 BIT(5)
+#define GPIO_DATA_DIRECTION_4 BIT(4)
+#define GPIO_DATA_DIRECTION_3 BIT(3)
+#define GPIO_DATA_DIRECTION_2 BIT(2)
+#define GPIO_DATA_DIRECTION_1 BIT(1)
+#define GPIO_DATA_DIRECTION_0 BIT(0)
#define GPIO_INTERRUPT_SETUP 0x010008
-#define GPIO_INTERRUPT_SETUP_TRIGGER_31 22:22
-#define GPIO_INTERRUPT_SETUP_TRIGGER_31_EDGE 0
-#define GPIO_INTERRUPT_SETUP_TRIGGER_31_LEVEL 1
-#define GPIO_INTERRUPT_SETUP_TRIGGER_30 21:21
-#define GPIO_INTERRUPT_SETUP_TRIGGER_30_EDGE 0
-#define GPIO_INTERRUPT_SETUP_TRIGGER_30_LEVEL 1
-#define GPIO_INTERRUPT_SETUP_TRIGGER_29 20:20
-#define GPIO_INTERRUPT_SETUP_TRIGGER_29_EDGE 0
-#define GPIO_INTERRUPT_SETUP_TRIGGER_29_LEVEL 1
-#define GPIO_INTERRUPT_SETUP_TRIGGER_28 19:19
-#define GPIO_INTERRUPT_SETUP_TRIGGER_28_EDGE 0
-#define GPIO_INTERRUPT_SETUP_TRIGGER_28_LEVEL 1
-#define GPIO_INTERRUPT_SETUP_TRIGGER_27 18:18
-#define GPIO_INTERRUPT_SETUP_TRIGGER_27_EDGE 0
-#define GPIO_INTERRUPT_SETUP_TRIGGER_27_LEVEL 1
-#define GPIO_INTERRUPT_SETUP_TRIGGER_26 17:17
-#define GPIO_INTERRUPT_SETUP_TRIGGER_26_EDGE 0
-#define GPIO_INTERRUPT_SETUP_TRIGGER_26_LEVEL 1
-#define GPIO_INTERRUPT_SETUP_TRIGGER_25 16:16
-#define GPIO_INTERRUPT_SETUP_TRIGGER_25_EDGE 0
-#define GPIO_INTERRUPT_SETUP_TRIGGER_25_LEVEL 1
-#define GPIO_INTERRUPT_SETUP_ACTIVE_31 14:14
-#define GPIO_INTERRUPT_SETUP_ACTIVE_31_LOW 0
-#define GPIO_INTERRUPT_SETUP_ACTIVE_31_HIGH 1
-#define GPIO_INTERRUPT_SETUP_ACTIVE_30 13:13
-#define GPIO_INTERRUPT_SETUP_ACTIVE_30_LOW 0
-#define GPIO_INTERRUPT_SETUP_ACTIVE_30_HIGH 1
-#define GPIO_INTERRUPT_SETUP_ACTIVE_29 12:12
-#define GPIO_INTERRUPT_SETUP_ACTIVE_29_LOW 0
-#define GPIO_INTERRUPT_SETUP_ACTIVE_29_HIGH 1
-#define GPIO_INTERRUPT_SETUP_ACTIVE_28 11:11
-#define GPIO_INTERRUPT_SETUP_ACTIVE_28_LOW 0
-#define GPIO_INTERRUPT_SETUP_ACTIVE_28_HIGH 1
-#define GPIO_INTERRUPT_SETUP_ACTIVE_27 10:10
-#define GPIO_INTERRUPT_SETUP_ACTIVE_27_LOW 0
-#define GPIO_INTERRUPT_SETUP_ACTIVE_27_HIGH 1
-#define GPIO_INTERRUPT_SETUP_ACTIVE_26 9:9
-#define GPIO_INTERRUPT_SETUP_ACTIVE_26_LOW 0
-#define GPIO_INTERRUPT_SETUP_ACTIVE_26_HIGH 1
-#define GPIO_INTERRUPT_SETUP_ACTIVE_25 8:8
-#define GPIO_INTERRUPT_SETUP_ACTIVE_25_LOW 0
-#define GPIO_INTERRUPT_SETUP_ACTIVE_25_HIGH 1
-#define GPIO_INTERRUPT_SETUP_ENABLE_31 6:6
-#define GPIO_INTERRUPT_SETUP_ENABLE_31_GPIO 0
-#define GPIO_INTERRUPT_SETUP_ENABLE_31_INTERRUPT 1
-#define GPIO_INTERRUPT_SETUP_ENABLE_30 5:5
-#define GPIO_INTERRUPT_SETUP_ENABLE_30_GPIO 0
-#define GPIO_INTERRUPT_SETUP_ENABLE_30_INTERRUPT 1
-#define GPIO_INTERRUPT_SETUP_ENABLE_29 4:4
-#define GPIO_INTERRUPT_SETUP_ENABLE_29_GPIO 0
-#define GPIO_INTERRUPT_SETUP_ENABLE_29_INTERRUPT 1
-#define GPIO_INTERRUPT_SETUP_ENABLE_28 3:3
-#define GPIO_INTERRUPT_SETUP_ENABLE_28_GPIO 0
-#define GPIO_INTERRUPT_SETUP_ENABLE_28_INTERRUPT 1
-#define GPIO_INTERRUPT_SETUP_ENABLE_27 2:2
-#define GPIO_INTERRUPT_SETUP_ENABLE_27_GPIO 0
-#define GPIO_INTERRUPT_SETUP_ENABLE_27_INTERRUPT 1
-#define GPIO_INTERRUPT_SETUP_ENABLE_26 1:1
-#define GPIO_INTERRUPT_SETUP_ENABLE_26_GPIO 0
-#define GPIO_INTERRUPT_SETUP_ENABLE_26_INTERRUPT 1
-#define GPIO_INTERRUPT_SETUP_ENABLE_25 0:0
-#define GPIO_INTERRUPT_SETUP_ENABLE_25_GPIO 0
-#define GPIO_INTERRUPT_SETUP_ENABLE_25_INTERRUPT 1
+#define GPIO_INTERRUPT_SETUP_TRIGGER_31 BIT(22)
+#define GPIO_INTERRUPT_SETUP_TRIGGER_30 BIT(21)
+#define GPIO_INTERRUPT_SETUP_TRIGGER_29 BIT(20)
+#define GPIO_INTERRUPT_SETUP_TRIGGER_28 BIT(19)
+#define GPIO_INTERRUPT_SETUP_TRIGGER_27 BIT(18)
+#define GPIO_INTERRUPT_SETUP_TRIGGER_26 BIT(17)
+#define GPIO_INTERRUPT_SETUP_TRIGGER_25 BIT(16)
+#define GPIO_INTERRUPT_SETUP_ACTIVE_31 BIT(14)
+#define GPIO_INTERRUPT_SETUP_ACTIVE_30 BIT(13)
+#define GPIO_INTERRUPT_SETUP_ACTIVE_29 BIT(12)
+#define GPIO_INTERRUPT_SETUP_ACTIVE_28 BIT(11)
+#define GPIO_INTERRUPT_SETUP_ACTIVE_27 BIT(10)
+#define GPIO_INTERRUPT_SETUP_ACTIVE_26 BIT(9)
+#define GPIO_INTERRUPT_SETUP_ACTIVE_25 BIT(8)
+#define GPIO_INTERRUPT_SETUP_ENABLE_31 BIT(6)
+#define GPIO_INTERRUPT_SETUP_ENABLE_30 BIT(5)
+#define GPIO_INTERRUPT_SETUP_ENABLE_29 BIT(4)
+#define GPIO_INTERRUPT_SETUP_ENABLE_28 BIT(3)
+#define GPIO_INTERRUPT_SETUP_ENABLE_27 BIT(2)
+#define GPIO_INTERRUPT_SETUP_ENABLE_26 BIT(1)
+#define GPIO_INTERRUPT_SETUP_ENABLE_25 BIT(0)
#define GPIO_INTERRUPT_STATUS 0x01000C
-#define GPIO_INTERRUPT_STATUS_31 22:22
-#define GPIO_INTERRUPT_STATUS_31_INACTIVE 0
-#define GPIO_INTERRUPT_STATUS_31_ACTIVE 1
-#define GPIO_INTERRUPT_STATUS_31_RESET 1
-#define GPIO_INTERRUPT_STATUS_30 21:21
-#define GPIO_INTERRUPT_STATUS_30_INACTIVE 0
-#define GPIO_INTERRUPT_STATUS_30_ACTIVE 1
-#define GPIO_INTERRUPT_STATUS_30_RESET 1
-#define GPIO_INTERRUPT_STATUS_29 20:20
-#define GPIO_INTERRUPT_STATUS_29_INACTIVE 0
-#define GPIO_INTERRUPT_STATUS_29_ACTIVE 1
-#define GPIO_INTERRUPT_STATUS_29_RESET 1
-#define GPIO_INTERRUPT_STATUS_28 19:19
-#define GPIO_INTERRUPT_STATUS_28_INACTIVE 0
-#define GPIO_INTERRUPT_STATUS_28_ACTIVE 1
-#define GPIO_INTERRUPT_STATUS_28_RESET 1
-#define GPIO_INTERRUPT_STATUS_27 18:18
-#define GPIO_INTERRUPT_STATUS_27_INACTIVE 0
-#define GPIO_INTERRUPT_STATUS_27_ACTIVE 1
-#define GPIO_INTERRUPT_STATUS_27_RESET 1
-#define GPIO_INTERRUPT_STATUS_26 17:17
-#define GPIO_INTERRUPT_STATUS_26_INACTIVE 0
-#define GPIO_INTERRUPT_STATUS_26_ACTIVE 1
-#define GPIO_INTERRUPT_STATUS_26_RESET 1
-#define GPIO_INTERRUPT_STATUS_25 16:16
-#define GPIO_INTERRUPT_STATUS_25_INACTIVE 0
-#define GPIO_INTERRUPT_STATUS_25_ACTIVE 1
-#define GPIO_INTERRUPT_STATUS_25_RESET 1
+#define GPIO_INTERRUPT_STATUS_31 BIT(22)
+#define GPIO_INTERRUPT_STATUS_30 BIT(21)
+#define GPIO_INTERRUPT_STATUS_29 BIT(20)
+#define GPIO_INTERRUPT_STATUS_28 BIT(19)
+#define GPIO_INTERRUPT_STATUS_27 BIT(18)
+#define GPIO_INTERRUPT_STATUS_26 BIT(17)
+#define GPIO_INTERRUPT_STATUS_25 BIT(16)
#define PANEL_DISPLAY_CTRL 0x080000
@@ -818,14 +570,14 @@
#define PANEL_DISPLAY_CTRL_FORMAT_32 (0x2 << 0)
#define PANEL_PAN_CTRL 0x080004
-#define PANEL_PAN_CTRL_VERTICAL_PAN 31:24
-#define PANEL_PAN_CTRL_VERTICAL_VSYNC 21:16
-#define PANEL_PAN_CTRL_HORIZONTAL_PAN 15:8
-#define PANEL_PAN_CTRL_HORIZONTAL_VSYNC 5:0
+#define PANEL_PAN_CTRL_VERTICAL_PAN_MASK (0xff << 24)
+#define PANEL_PAN_CTRL_VERTICAL_VSYNC_MASK (0x3f << 16)
+#define PANEL_PAN_CTRL_HORIZONTAL_PAN_MASK (0xff << 8)
+#define PANEL_PAN_CTRL_HORIZONTAL_VSYNC_MASK 0x3f
#define PANEL_COLOR_KEY 0x080008
-#define PANEL_COLOR_KEY_MASK 31:16
-#define PANEL_COLOR_KEY_VALUE 15:0
+#define PANEL_COLOR_KEY_MASK_MASK (0xffff << 16)
+#define PANEL_COLOR_KEY_VALUE_MASK 0xffff
#define PANEL_FB_ADDRESS 0x08000C
#define PANEL_FB_ADDRESS_STATUS BIT(31)
@@ -863,454 +615,383 @@
#define PANEL_HORIZONTAL_TOTAL_DISPLAY_END_MASK 0xfff
#define PANEL_HORIZONTAL_SYNC 0x080028
-#define PANEL_HORIZONTAL_SYNC_WIDTH 23:16
-#define PANEL_HORIZONTAL_SYNC_START 11:0
+#define PANEL_HORIZONTAL_SYNC_WIDTH_SHIFT 16
+#define PANEL_HORIZONTAL_SYNC_WIDTH_MASK (0xff << 16)
+#define PANEL_HORIZONTAL_SYNC_START_MASK 0xfff
#define PANEL_VERTICAL_TOTAL 0x08002C
-#define PANEL_VERTICAL_TOTAL_TOTAL 26:16
-#define PANEL_VERTICAL_TOTAL_DISPLAY_END 10:0
+#define PANEL_VERTICAL_TOTAL_TOTAL_SHIFT 16
+#define PANEL_VERTICAL_TOTAL_TOTAL_MASK (0x7ff << 16)
+#define PANEL_VERTICAL_TOTAL_DISPLAY_END_MASK 0x7ff
#define PANEL_VERTICAL_SYNC 0x080030
-#define PANEL_VERTICAL_SYNC_HEIGHT 21:16
-#define PANEL_VERTICAL_SYNC_START 10:0
+#define PANEL_VERTICAL_SYNC_HEIGHT_SHIFT 16
+#define PANEL_VERTICAL_SYNC_HEIGHT_MASK (0x3f << 16)
+#define PANEL_VERTICAL_SYNC_START_MASK 0x7ff
#define PANEL_CURRENT_LINE 0x080034
-#define PANEL_CURRENT_LINE_LINE 10:0
+#define PANEL_CURRENT_LINE_LINE_MASK 0x7ff
/* Video Control */
#define VIDEO_DISPLAY_CTRL 0x080040
-#define VIDEO_DISPLAY_CTRL_LINE_BUFFER 18:18
-#define VIDEO_DISPLAY_CTRL_LINE_BUFFER_DISABLE 0
-#define VIDEO_DISPLAY_CTRL_LINE_BUFFER_ENABLE 1
-#define VIDEO_DISPLAY_CTRL_FIFO 17:16
-#define VIDEO_DISPLAY_CTRL_FIFO_1 0
-#define VIDEO_DISPLAY_CTRL_FIFO_3 1
-#define VIDEO_DISPLAY_CTRL_FIFO_7 2
-#define VIDEO_DISPLAY_CTRL_FIFO_11 3
-#define VIDEO_DISPLAY_CTRL_BUFFER 15:15
-#define VIDEO_DISPLAY_CTRL_BUFFER_0 0
-#define VIDEO_DISPLAY_CTRL_BUFFER_1 1
-#define VIDEO_DISPLAY_CTRL_CAPTURE 14:14
-#define VIDEO_DISPLAY_CTRL_CAPTURE_DISABLE 0
-#define VIDEO_DISPLAY_CTRL_CAPTURE_ENABLE 1
-#define VIDEO_DISPLAY_CTRL_DOUBLE_BUFFER 13:13
-#define VIDEO_DISPLAY_CTRL_DOUBLE_BUFFER_DISABLE 0
-#define VIDEO_DISPLAY_CTRL_DOUBLE_BUFFER_ENABLE 1
-#define VIDEO_DISPLAY_CTRL_BYTE_SWAP 12:12
-#define VIDEO_DISPLAY_CTRL_BYTE_SWAP_DISABLE 0
-#define VIDEO_DISPLAY_CTRL_BYTE_SWAP_ENABLE 1
-#define VIDEO_DISPLAY_CTRL_VERTICAL_SCALE 11:11
-#define VIDEO_DISPLAY_CTRL_VERTICAL_SCALE_NORMAL 0
-#define VIDEO_DISPLAY_CTRL_VERTICAL_SCALE_HALF 1
-#define VIDEO_DISPLAY_CTRL_HORIZONTAL_SCALE 10:10
-#define VIDEO_DISPLAY_CTRL_HORIZONTAL_SCALE_NORMAL 0
-#define VIDEO_DISPLAY_CTRL_HORIZONTAL_SCALE_HALF 1
-#define VIDEO_DISPLAY_CTRL_VERTICAL_MODE 9:9
-#define VIDEO_DISPLAY_CTRL_VERTICAL_MODE_REPLICATE 0
-#define VIDEO_DISPLAY_CTRL_VERTICAL_MODE_INTERPOLATE 1
-#define VIDEO_DISPLAY_CTRL_HORIZONTAL_MODE 8:8
-#define VIDEO_DISPLAY_CTRL_HORIZONTAL_MODE_REPLICATE 0
-#define VIDEO_DISPLAY_CTRL_HORIZONTAL_MODE_INTERPOLATE 1
-#define VIDEO_DISPLAY_CTRL_PIXEL 7:4
-#define VIDEO_DISPLAY_CTRL_GAMMA 3:3
-#define VIDEO_DISPLAY_CTRL_GAMMA_DISABLE 0
-#define VIDEO_DISPLAY_CTRL_GAMMA_ENABLE 1
-#define VIDEO_DISPLAY_CTRL_FORMAT 1:0
-#define VIDEO_DISPLAY_CTRL_FORMAT_8 0
-#define VIDEO_DISPLAY_CTRL_FORMAT_16 1
-#define VIDEO_DISPLAY_CTRL_FORMAT_32 2
-#define VIDEO_DISPLAY_CTRL_FORMAT_YUV 3
+#define VIDEO_DISPLAY_CTRL_LINE_BUFFER BIT(18)
+#define VIDEO_DISPLAY_CTRL_FIFO_MASK (0x3 << 16)
+#define VIDEO_DISPLAY_CTRL_FIFO_1 (0x0 << 16)
+#define VIDEO_DISPLAY_CTRL_FIFO_3 (0x1 << 16)
+#define VIDEO_DISPLAY_CTRL_FIFO_7 (0x2 << 16)
+#define VIDEO_DISPLAY_CTRL_FIFO_11 (0x3 << 16)
+#define VIDEO_DISPLAY_CTRL_BUFFER BIT(15)
+#define VIDEO_DISPLAY_CTRL_CAPTURE BIT(14)
+#define VIDEO_DISPLAY_CTRL_DOUBLE_BUFFER BIT(13)
+#define VIDEO_DISPLAY_CTRL_BYTE_SWAP BIT(12)
+#define VIDEO_DISPLAY_CTRL_VERTICAL_SCALE BIT(11)
+#define VIDEO_DISPLAY_CTRL_HORIZONTAL_SCALE BIT(10)
+#define VIDEO_DISPLAY_CTRL_VERTICAL_MODE BIT(9)
+#define VIDEO_DISPLAY_CTRL_HORIZONTAL_MODE BIT(8)
+#define VIDEO_DISPLAY_CTRL_PIXEL_MASK (0xf << 4)
+#define VIDEO_DISPLAY_CTRL_GAMMA BIT(3)
+#define VIDEO_DISPLAY_CTRL_FORMAT_MASK 0x3
+#define VIDEO_DISPLAY_CTRL_FORMAT_8 0x0
+#define VIDEO_DISPLAY_CTRL_FORMAT_16 0x1
+#define VIDEO_DISPLAY_CTRL_FORMAT_32 0x2
+#define VIDEO_DISPLAY_CTRL_FORMAT_YUV 0x3
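Multi-bit fields keep a _MASK define alongside pre-shifted values, so changing
one field — here the FIFO request level — is a clear-then-set that leaves the
rest of the register untouched. A sketch under the same assumed
peek32()/poke32() helpers:

	u32 ctrl = peek32(VIDEO_DISPLAY_CTRL);

	ctrl &= ~VIDEO_DISPLAY_CTRL_FIFO_MASK;	/* drop the old level */
	ctrl |= VIDEO_DISPLAY_CTRL_FIFO_3;	/* value already shifted to 17:16 */
	poke32(VIDEO_DISPLAY_CTRL, ctrl);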
#define VIDEO_FB_0_ADDRESS 0x080044
-#define VIDEO_FB_0_ADDRESS_STATUS 31:31
-#define VIDEO_FB_0_ADDRESS_STATUS_CURRENT 0
-#define VIDEO_FB_0_ADDRESS_STATUS_PENDING 1
-#define VIDEO_FB_0_ADDRESS_EXT 27:27
-#define VIDEO_FB_0_ADDRESS_EXT_LOCAL 0
-#define VIDEO_FB_0_ADDRESS_EXT_EXTERNAL 1
-#define VIDEO_FB_0_ADDRESS_ADDRESS 25:0
+#define VIDEO_FB_0_ADDRESS_STATUS BIT(31)
+#define VIDEO_FB_0_ADDRESS_EXT BIT(27)
+#define VIDEO_FB_0_ADDRESS_ADDRESS_MASK 0x3ffffff
#define VIDEO_FB_WIDTH 0x080048
-#define VIDEO_FB_WIDTH_WIDTH 29:16
-#define VIDEO_FB_WIDTH_OFFSET 13:0
+#define VIDEO_FB_WIDTH_WIDTH_MASK (0x3fff << 16)
+#define VIDEO_FB_WIDTH_OFFSET_MASK 0x3fff
#define VIDEO_FB_0_LAST_ADDRESS 0x08004C
-#define VIDEO_FB_0_LAST_ADDRESS_EXT 27:27
-#define VIDEO_FB_0_LAST_ADDRESS_EXT_LOCAL 0
-#define VIDEO_FB_0_LAST_ADDRESS_EXT_EXTERNAL 1
-#define VIDEO_FB_0_LAST_ADDRESS_ADDRESS 25:0
+#define VIDEO_FB_0_LAST_ADDRESS_EXT BIT(27)
+#define VIDEO_FB_0_LAST_ADDRESS_ADDRESS_MASK 0x3ffffff
#define VIDEO_PLANE_TL 0x080050
-#define VIDEO_PLANE_TL_TOP 26:16
-#define VIDEO_PLANE_TL_LEFT 10:0
+#define VIDEO_PLANE_TL_TOP_MASK (0x7ff << 16)
+#define VIDEO_PLANE_TL_LEFT_MASK 0x7ff
#define VIDEO_PLANE_BR 0x080054
-#define VIDEO_PLANE_BR_BOTTOM 26:16
-#define VIDEO_PLANE_BR_RIGHT 10:0
+#define VIDEO_PLANE_BR_BOTTOM_MASK (0x7ff << 16)
+#define VIDEO_PLANE_BR_RIGHT_MASK 0x7ff
#define VIDEO_SCALE 0x080058
-#define VIDEO_SCALE_VERTICAL_MODE 31:31
-#define VIDEO_SCALE_VERTICAL_MODE_EXPAND 0
-#define VIDEO_SCALE_VERTICAL_MODE_SHRINK 1
-#define VIDEO_SCALE_VERTICAL_SCALE 27:16
-#define VIDEO_SCALE_HORIZONTAL_MODE 15:15
-#define VIDEO_SCALE_HORIZONTAL_MODE_EXPAND 0
-#define VIDEO_SCALE_HORIZONTAL_MODE_SHRINK 1
-#define VIDEO_SCALE_HORIZONTAL_SCALE 11:0
+#define VIDEO_SCALE_VERTICAL_MODE BIT(31)
+#define VIDEO_SCALE_VERTICAL_SCALE_MASK (0xfff << 16)
+#define VIDEO_SCALE_HORIZONTAL_MODE BIT(15)
+#define VIDEO_SCALE_HORIZONTAL_SCALE_MASK 0xfff
#define VIDEO_INITIAL_SCALE 0x08005C
-#define VIDEO_INITIAL_SCALE_FB_1 27:16
-#define VIDEO_INITIAL_SCALE_FB_0 11:0
+#define VIDEO_INITIAL_SCALE_FB_1_MASK (0xfff << 16)
+#define VIDEO_INITIAL_SCALE_FB_0_MASK 0xfff
#define VIDEO_YUV_CONSTANTS 0x080060
-#define VIDEO_YUV_CONSTANTS_Y 31:24
-#define VIDEO_YUV_CONSTANTS_R 23:16
-#define VIDEO_YUV_CONSTANTS_G 15:8
-#define VIDEO_YUV_CONSTANTS_B 7:0
+#define VIDEO_YUV_CONSTANTS_Y_MASK (0xff << 24)
+#define VIDEO_YUV_CONSTANTS_R_MASK (0xff << 16)
+#define VIDEO_YUV_CONSTANTS_G_MASK (0xff << 8)
+#define VIDEO_YUV_CONSTANTS_B_MASK 0xff
#define VIDEO_FB_1_ADDRESS 0x080064
-#define VIDEO_FB_1_ADDRESS_STATUS 31:31
-#define VIDEO_FB_1_ADDRESS_STATUS_CURRENT 0
-#define VIDEO_FB_1_ADDRESS_STATUS_PENDING 1
-#define VIDEO_FB_1_ADDRESS_EXT 27:27
-#define VIDEO_FB_1_ADDRESS_EXT_LOCAL 0
-#define VIDEO_FB_1_ADDRESS_EXT_EXTERNAL 1
-#define VIDEO_FB_1_ADDRESS_ADDRESS 25:0
+#define VIDEO_FB_1_ADDRESS_STATUS BIT(31)
+#define VIDEO_FB_1_ADDRESS_EXT BIT(27)
+#define VIDEO_FB_1_ADDRESS_ADDRESS_MASK 0x3ffffff
#define VIDEO_FB_1_LAST_ADDRESS 0x080068
-#define VIDEO_FB_1_LAST_ADDRESS_EXT 27:27
-#define VIDEO_FB_1_LAST_ADDRESS_EXT_LOCAL 0
-#define VIDEO_FB_1_LAST_ADDRESS_EXT_EXTERNAL 1
-#define VIDEO_FB_1_LAST_ADDRESS_ADDRESS 25:0
+#define VIDEO_FB_1_LAST_ADDRESS_EXT BIT(27)
+#define VIDEO_FB_1_LAST_ADDRESS_ADDRESS_MASK 0x3ffffff
/* Video Alpha Control */
#define VIDEO_ALPHA_DISPLAY_CTRL 0x080080
-#define VIDEO_ALPHA_DISPLAY_CTRL_SELECT 28:28
-#define VIDEO_ALPHA_DISPLAY_CTRL_SELECT_PER_PIXEL 0
-#define VIDEO_ALPHA_DISPLAY_CTRL_SELECT_ALPHA 1
-#define VIDEO_ALPHA_DISPLAY_CTRL_ALPHA 27:24
-#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO 17:16
-#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_1 0
-#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_3 1
-#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_7 2
-#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_11 3
-#define VIDEO_ALPHA_DISPLAY_CTRL_VERT_SCALE 11:11
-#define VIDEO_ALPHA_DISPLAY_CTRL_VERT_SCALE_NORMAL 0
-#define VIDEO_ALPHA_DISPLAY_CTRL_VERT_SCALE_HALF 1
-#define VIDEO_ALPHA_DISPLAY_CTRL_HORZ_SCALE 10:10
-#define VIDEO_ALPHA_DISPLAY_CTRL_HORZ_SCALE_NORMAL 0
-#define VIDEO_ALPHA_DISPLAY_CTRL_HORZ_SCALE_HALF 1
-#define VIDEO_ALPHA_DISPLAY_CTRL_VERT_MODE 9:9
-#define VIDEO_ALPHA_DISPLAY_CTRL_VERT_MODE_REPLICATE 0
-#define VIDEO_ALPHA_DISPLAY_CTRL_VERT_MODE_INTERPOLATE 1
-#define VIDEO_ALPHA_DISPLAY_CTRL_HORZ_MODE 8:8
-#define VIDEO_ALPHA_DISPLAY_CTRL_HORZ_MODE_REPLICATE 0
-#define VIDEO_ALPHA_DISPLAY_CTRL_HORZ_MODE_INTERPOLATE 1
-#define VIDEO_ALPHA_DISPLAY_CTRL_PIXEL 7:4
-#define VIDEO_ALPHA_DISPLAY_CTRL_CHROMA_KEY 3:3
-#define VIDEO_ALPHA_DISPLAY_CTRL_CHROMA_KEY_DISABLE 0
-#define VIDEO_ALPHA_DISPLAY_CTRL_CHROMA_KEY_ENABLE 1
-#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT 1:0
-#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_8 0
-#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_16 1
-#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_ALPHA_4_4 2
-#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_ALPHA_4_4_4_4 3
+#define VIDEO_ALPHA_DISPLAY_CTRL_SELECT BIT(28)
+#define VIDEO_ALPHA_DISPLAY_CTRL_ALPHA_MASK (0xf << 24)
+#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_MASK (0x3 << 16)
+#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_1 (0x0 << 16)
+#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_3 (0x1 << 16)
+#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_7 (0x2 << 16)
+#define VIDEO_ALPHA_DISPLAY_CTRL_FIFO_11 (0x3 << 16)
+#define VIDEO_ALPHA_DISPLAY_CTRL_VERT_SCALE BIT(11)
+#define VIDEO_ALPHA_DISPLAY_CTRL_HORZ_SCALE BIT(10)
+#define VIDEO_ALPHA_DISPLAY_CTRL_VERT_MODE BIT(9)
+#define VIDEO_ALPHA_DISPLAY_CTRL_HORZ_MODE BIT(8)
+#define VIDEO_ALPHA_DISPLAY_CTRL_PIXEL_MASK (0xf << 4)
+#define VIDEO_ALPHA_DISPLAY_CTRL_CHROMA_KEY BIT(3)
+#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_MASK 0x3
+#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_8 0x0
+#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_16 0x1
+#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_ALPHA_4_4 0x2
+#define VIDEO_ALPHA_DISPLAY_CTRL_FORMAT_ALPHA_4_4_4_4 0x3
#define VIDEO_ALPHA_FB_ADDRESS 0x080084
-#define VIDEO_ALPHA_FB_ADDRESS_STATUS 31:31
-#define VIDEO_ALPHA_FB_ADDRESS_STATUS_CURRENT 0
-#define VIDEO_ALPHA_FB_ADDRESS_STATUS_PENDING 1
-#define VIDEO_ALPHA_FB_ADDRESS_EXT 27:27
-#define VIDEO_ALPHA_FB_ADDRESS_EXT_LOCAL 0
-#define VIDEO_ALPHA_FB_ADDRESS_EXT_EXTERNAL 1
-#define VIDEO_ALPHA_FB_ADDRESS_ADDRESS 25:0
+#define VIDEO_ALPHA_FB_ADDRESS_STATUS BIT(31)
+#define VIDEO_ALPHA_FB_ADDRESS_EXT BIT(27)
+#define VIDEO_ALPHA_FB_ADDRESS_ADDRESS_MASK 0x3ffffff
#define VIDEO_ALPHA_FB_WIDTH 0x080088
-#define VIDEO_ALPHA_FB_WIDTH_WIDTH 29:16
-#define VIDEO_ALPHA_FB_WIDTH_OFFSET 13:0
+#define VIDEO_ALPHA_FB_WIDTH_WIDTH_MASK (0x3fff << 16)
+#define VIDEO_ALPHA_FB_WIDTH_OFFSET_MASK 0x3fff
#define VIDEO_ALPHA_FB_LAST_ADDRESS 0x08008C
-#define VIDEO_ALPHA_FB_LAST_ADDRESS_EXT 27:27
-#define VIDEO_ALPHA_FB_LAST_ADDRESS_EXT_LOCAL 0
-#define VIDEO_ALPHA_FB_LAST_ADDRESS_EXT_EXTERNAL 1
-#define VIDEO_ALPHA_FB_LAST_ADDRESS_ADDRESS 25:0
+#define VIDEO_ALPHA_FB_LAST_ADDRESS_EXT BIT(27)
+#define VIDEO_ALPHA_FB_LAST_ADDRESS_ADDRESS_MASK 0x3ffffff
#define VIDEO_ALPHA_PLANE_TL 0x080090
-#define VIDEO_ALPHA_PLANE_TL_TOP 26:16
-#define VIDEO_ALPHA_PLANE_TL_LEFT 10:0
+#define VIDEO_ALPHA_PLANE_TL_TOP_MASK (0x7ff << 16)
+#define VIDEO_ALPHA_PLANE_TL_LEFT_MASK 0x7ff
#define VIDEO_ALPHA_PLANE_BR 0x080094
-#define VIDEO_ALPHA_PLANE_BR_BOTTOM 26:16
-#define VIDEO_ALPHA_PLANE_BR_RIGHT 10:0
+#define VIDEO_ALPHA_PLANE_BR_BOTTOM_MASK (0x7ff << 16)
+#define VIDEO_ALPHA_PLANE_BR_RIGHT_MASK 0x7ff
#define VIDEO_ALPHA_SCALE 0x080098
-#define VIDEO_ALPHA_SCALE_VERTICAL_MODE 31:31
-#define VIDEO_ALPHA_SCALE_VERTICAL_MODE_EXPAND 0
-#define VIDEO_ALPHA_SCALE_VERTICAL_MODE_SHRINK 1
-#define VIDEO_ALPHA_SCALE_VERTICAL_SCALE 27:16
-#define VIDEO_ALPHA_SCALE_HORIZONTAL_MODE 15:15
-#define VIDEO_ALPHA_SCALE_HORIZONTAL_MODE_EXPAND 0
-#define VIDEO_ALPHA_SCALE_HORIZONTAL_MODE_SHRINK 1
-#define VIDEO_ALPHA_SCALE_HORIZONTAL_SCALE 11:0
+#define VIDEO_ALPHA_SCALE_VERTICAL_MODE BIT(31)
+#define VIDEO_ALPHA_SCALE_VERTICAL_SCALE_MASK (0xfff << 16)
+#define VIDEO_ALPHA_SCALE_HORIZONTAL_MODE BIT(15)
+#define VIDEO_ALPHA_SCALE_HORIZONTAL_SCALE_MASK 0xfff
#define VIDEO_ALPHA_INITIAL_SCALE 0x08009C
-#define VIDEO_ALPHA_INITIAL_SCALE_VERTICAL 27:16
-#define VIDEO_ALPHA_INITIAL_SCALE_HORIZONTAL 11:0
+#define VIDEO_ALPHA_INITIAL_SCALE_VERTICAL_MASK (0xfff << 16)
+#define VIDEO_ALPHA_INITIAL_SCALE_HORIZONTAL_MASK 0xfff
#define VIDEO_ALPHA_CHROMA_KEY 0x0800A0
-#define VIDEO_ALPHA_CHROMA_KEY_MASK 31:16
-#define VIDEO_ALPHA_CHROMA_KEY_VALUE 15:0
+#define VIDEO_ALPHA_CHROMA_KEY_MASK_MASK (0xffff << 16)
+#define VIDEO_ALPHA_CHROMA_KEY_VALUE_MASK 0xffff
#define VIDEO_ALPHA_COLOR_LOOKUP_01 0x0800A4
-#define VIDEO_ALPHA_COLOR_LOOKUP_01_1 31:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_01_1_RED 31:27
-#define VIDEO_ALPHA_COLOR_LOOKUP_01_1_GREEN 26:21
-#define VIDEO_ALPHA_COLOR_LOOKUP_01_1_BLUE 20:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_01_0 15:0
-#define VIDEO_ALPHA_COLOR_LOOKUP_01_0_RED 15:11
-#define VIDEO_ALPHA_COLOR_LOOKUP_01_0_GREEN 10:5
-#define VIDEO_ALPHA_COLOR_LOOKUP_01_0_BLUE 4:0
+#define VIDEO_ALPHA_COLOR_LOOKUP_01_1_MASK (0xffff << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_01_1_RED_MASK (0x1f << 27)
+#define VIDEO_ALPHA_COLOR_LOOKUP_01_1_GREEN_MASK (0x3f << 21)
+#define VIDEO_ALPHA_COLOR_LOOKUP_01_1_BLUE_MASK (0x1f << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_01_0_MASK 0xffff
+#define VIDEO_ALPHA_COLOR_LOOKUP_01_0_RED_MASK (0x1f << 11)
+#define VIDEO_ALPHA_COLOR_LOOKUP_01_0_GREEN_MASK (0x3f << 5)
+#define VIDEO_ALPHA_COLOR_LOOKUP_01_0_BLUE_MASK 0x1f
#define VIDEO_ALPHA_COLOR_LOOKUP_23 0x0800A8
-#define VIDEO_ALPHA_COLOR_LOOKUP_23_3 31:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_23_3_RED 31:27
-#define VIDEO_ALPHA_COLOR_LOOKUP_23_3_GREEN 26:21
-#define VIDEO_ALPHA_COLOR_LOOKUP_23_3_BLUE 20:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_23_2 15:0
-#define VIDEO_ALPHA_COLOR_LOOKUP_23_2_RED 15:11
-#define VIDEO_ALPHA_COLOR_LOOKUP_23_2_GREEN 10:5
-#define VIDEO_ALPHA_COLOR_LOOKUP_23_2_BLUE 4:0
+#define VIDEO_ALPHA_COLOR_LOOKUP_23_3_MASK (0xffff << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_23_3_RED_MASK (0x1f << 27)
+#define VIDEO_ALPHA_COLOR_LOOKUP_23_3_GREEN_MASK (0x3f << 21)
+#define VIDEO_ALPHA_COLOR_LOOKUP_23_3_BLUE_MASK (0x1f << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_23_2_MASK 0xffff
+#define VIDEO_ALPHA_COLOR_LOOKUP_23_2_RED_MASK (0x1f << 11)
+#define VIDEO_ALPHA_COLOR_LOOKUP_23_2_GREEN_MASK (0x3f << 5)
+#define VIDEO_ALPHA_COLOR_LOOKUP_23_2_BLUE_MASK 0x1f
#define VIDEO_ALPHA_COLOR_LOOKUP_45 0x0800AC
-#define VIDEO_ALPHA_COLOR_LOOKUP_45_5 31:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_45_5_RED 31:27
-#define VIDEO_ALPHA_COLOR_LOOKUP_45_5_GREEN 26:21
-#define VIDEO_ALPHA_COLOR_LOOKUP_45_5_BLUE 20:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_45_4 15:0
-#define VIDEO_ALPHA_COLOR_LOOKUP_45_4_RED 15:11
-#define VIDEO_ALPHA_COLOR_LOOKUP_45_4_GREEN 10:5
-#define VIDEO_ALPHA_COLOR_LOOKUP_45_4_BLUE 4:0
+#define VIDEO_ALPHA_COLOR_LOOKUP_45_5_MASK (0xffff << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_45_5_RED_MASK (0x1f << 27)
+#define VIDEO_ALPHA_COLOR_LOOKUP_45_5_GREEN_MASK (0x3f << 21)
+#define VIDEO_ALPHA_COLOR_LOOKUP_45_5_BLUE_MASK (0x1f << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_45_4_MASK 0xffff
+#define VIDEO_ALPHA_COLOR_LOOKUP_45_4_RED_MASK (0x1f << 11)
+#define VIDEO_ALPHA_COLOR_LOOKUP_45_4_GREEN_MASK (0x3f << 5)
+#define VIDEO_ALPHA_COLOR_LOOKUP_45_4_BLUE_MASK 0x1f
#define VIDEO_ALPHA_COLOR_LOOKUP_67 0x0800B0
-#define VIDEO_ALPHA_COLOR_LOOKUP_67_7 31:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_67_7_RED 31:27
-#define VIDEO_ALPHA_COLOR_LOOKUP_67_7_GREEN 26:21
-#define VIDEO_ALPHA_COLOR_LOOKUP_67_7_BLUE 20:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_67_6 15:0
-#define VIDEO_ALPHA_COLOR_LOOKUP_67_6_RED 15:11
-#define VIDEO_ALPHA_COLOR_LOOKUP_67_6_GREEN 10:5
-#define VIDEO_ALPHA_COLOR_LOOKUP_67_6_BLUE 4:0
+#define VIDEO_ALPHA_COLOR_LOOKUP_67_7_MASK (0xffff << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_67_7_RED_MASK (0x1f << 27)
+#define VIDEO_ALPHA_COLOR_LOOKUP_67_7_GREEN_MASK (0x3f << 21)
+#define VIDEO_ALPHA_COLOR_LOOKUP_67_7_BLUE_MASK (0x1f << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_67_6_MASK 0xffff
+#define VIDEO_ALPHA_COLOR_LOOKUP_67_6_RED_MASK (0x1f << 11)
+#define VIDEO_ALPHA_COLOR_LOOKUP_67_6_GREEN_MASK (0x3f << 5)
+#define VIDEO_ALPHA_COLOR_LOOKUP_67_6_BLUE_MASK 0x1f
#define VIDEO_ALPHA_COLOR_LOOKUP_89 0x0800B4
-#define VIDEO_ALPHA_COLOR_LOOKUP_89_9 31:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_89_9_RED 31:27
-#define VIDEO_ALPHA_COLOR_LOOKUP_89_9_GREEN 26:21
-#define VIDEO_ALPHA_COLOR_LOOKUP_89_9_BLUE 20:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_89_8 15:0
-#define VIDEO_ALPHA_COLOR_LOOKUP_89_8_RED 15:11
-#define VIDEO_ALPHA_COLOR_LOOKUP_89_8_GREEN 10:5
-#define VIDEO_ALPHA_COLOR_LOOKUP_89_8_BLUE 4:0
+#define VIDEO_ALPHA_COLOR_LOOKUP_89_9_MASK (0xffff << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_89_9_RED_MASK (0x1f << 27)
+#define VIDEO_ALPHA_COLOR_LOOKUP_89_9_GREEN_MASK (0x3f << 21)
+#define VIDEO_ALPHA_COLOR_LOOKUP_89_9_BLUE_MASK (0x1f << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_89_8_MASK 0xffff
+#define VIDEO_ALPHA_COLOR_LOOKUP_89_8_RED_MASK (0x1f << 11)
+#define VIDEO_ALPHA_COLOR_LOOKUP_89_8_GREEN_MASK (0x3f << 5)
+#define VIDEO_ALPHA_COLOR_LOOKUP_89_8_BLUE_MASK 0x1f
#define VIDEO_ALPHA_COLOR_LOOKUP_AB 0x0800B8
-#define VIDEO_ALPHA_COLOR_LOOKUP_AB_B 31:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_AB_B_RED 31:27
-#define VIDEO_ALPHA_COLOR_LOOKUP_AB_B_GREEN 26:21
-#define VIDEO_ALPHA_COLOR_LOOKUP_AB_B_BLUE 20:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_AB_A 15:0
-#define VIDEO_ALPHA_COLOR_LOOKUP_AB_A_RED 15:11
-#define VIDEO_ALPHA_COLOR_LOOKUP_AB_A_GREEN 10:5
-#define VIDEO_ALPHA_COLOR_LOOKUP_AB_A_BLUE 4:0
+#define VIDEO_ALPHA_COLOR_LOOKUP_AB_B_MASK (0xffff << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_AB_B_RED_MASK (0x1f << 27)
+#define VIDEO_ALPHA_COLOR_LOOKUP_AB_B_GREEN_MASK (0x3f << 21)
+#define VIDEO_ALPHA_COLOR_LOOKUP_AB_B_BLUE_MASK (0x1f << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_AB_A_MASK 0xffff
+#define VIDEO_ALPHA_COLOR_LOOKUP_AB_A_RED_MASK (0x1f << 11)
+#define VIDEO_ALPHA_COLOR_LOOKUP_AB_A_GREEN_MASK (0x3f << 5)
+#define VIDEO_ALPHA_COLOR_LOOKUP_AB_A_BLUE_MASK 0x1f
#define VIDEO_ALPHA_COLOR_LOOKUP_CD 0x0800BC
-#define VIDEO_ALPHA_COLOR_LOOKUP_CD_D 31:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_CD_D_RED 31:27
-#define VIDEO_ALPHA_COLOR_LOOKUP_CD_D_GREEN 26:21
-#define VIDEO_ALPHA_COLOR_LOOKUP_CD_D_BLUE 20:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_CD_C 15:0
-#define VIDEO_ALPHA_COLOR_LOOKUP_CD_C_RED 15:11
-#define VIDEO_ALPHA_COLOR_LOOKUP_CD_C_GREEN 10:5
-#define VIDEO_ALPHA_COLOR_LOOKUP_CD_C_BLUE 4:0
+#define VIDEO_ALPHA_COLOR_LOOKUP_CD_D_MASK (0xffff << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_CD_D_RED_MASK (0x1f << 27)
+#define VIDEO_ALPHA_COLOR_LOOKUP_CD_D_GREEN_MASK (0x3f << 21)
+#define VIDEO_ALPHA_COLOR_LOOKUP_CD_D_BLUE_MASK (0x1f << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_CD_C_MASK 0xffff
+#define VIDEO_ALPHA_COLOR_LOOKUP_CD_C_RED_MASK (0x1f << 11)
+#define VIDEO_ALPHA_COLOR_LOOKUP_CD_C_GREEN_MASK (0x3f << 5)
+#define VIDEO_ALPHA_COLOR_LOOKUP_CD_C_BLUE_MASK 0x1f
#define VIDEO_ALPHA_COLOR_LOOKUP_EF 0x0800C0
-#define VIDEO_ALPHA_COLOR_LOOKUP_EF_F 31:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_EF_F_RED 31:27
-#define VIDEO_ALPHA_COLOR_LOOKUP_EF_F_GREEN 26:21
-#define VIDEO_ALPHA_COLOR_LOOKUP_EF_F_BLUE 20:16
-#define VIDEO_ALPHA_COLOR_LOOKUP_EF_E 15:0
-#define VIDEO_ALPHA_COLOR_LOOKUP_EF_E_RED 15:11
-#define VIDEO_ALPHA_COLOR_LOOKUP_EF_E_GREEN 10:5
-#define VIDEO_ALPHA_COLOR_LOOKUP_EF_E_BLUE 4:0
+#define VIDEO_ALPHA_COLOR_LOOKUP_EF_F_MASK (0xffff << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_EF_F_RED_MASK (0x1f << 27)
+#define VIDEO_ALPHA_COLOR_LOOKUP_EF_F_GREEN_MASK (0x3f << 21)
+#define VIDEO_ALPHA_COLOR_LOOKUP_EF_F_BLUE_MASK (0x1f << 16)
+#define VIDEO_ALPHA_COLOR_LOOKUP_EF_E_MASK 0xffff
+#define VIDEO_ALPHA_COLOR_LOOKUP_EF_E_RED_MASK (0x1f << 11)
+#define VIDEO_ALPHA_COLOR_LOOKUP_EF_E_GREEN_MASK (0x3f << 5)
+#define VIDEO_ALPHA_COLOR_LOOKUP_EF_E_BLUE_MASK 0x1f
/* Panel Cursor Control */
#define PANEL_HWC_ADDRESS 0x0800F0
-#define PANEL_HWC_ADDRESS_ENABLE 31:31
-#define PANEL_HWC_ADDRESS_ENABLE_DISABLE 0
-#define PANEL_HWC_ADDRESS_ENABLE_ENABLE 1
-#define PANEL_HWC_ADDRESS_EXT 27:27
-#define PANEL_HWC_ADDRESS_EXT_LOCAL 0
-#define PANEL_HWC_ADDRESS_EXT_EXTERNAL 1
-#define PANEL_HWC_ADDRESS_ADDRESS 25:0
+#define PANEL_HWC_ADDRESS_ENABLE BIT(31)
+#define PANEL_HWC_ADDRESS_EXT BIT(27)
+#define PANEL_HWC_ADDRESS_ADDRESS_MASK 0x3ffffff
#define PANEL_HWC_LOCATION 0x0800F4
-#define PANEL_HWC_LOCATION_TOP 27:27
-#define PANEL_HWC_LOCATION_TOP_INSIDE 0
-#define PANEL_HWC_LOCATION_TOP_OUTSIDE 1
-#define PANEL_HWC_LOCATION_Y 26:16
-#define PANEL_HWC_LOCATION_LEFT 11:11
-#define PANEL_HWC_LOCATION_LEFT_INSIDE 0
-#define PANEL_HWC_LOCATION_LEFT_OUTSIDE 1
-#define PANEL_HWC_LOCATION_X 10:0
+#define PANEL_HWC_LOCATION_TOP BIT(27)
+#define PANEL_HWC_LOCATION_Y_MASK (0x7ff << 16)
+#define PANEL_HWC_LOCATION_LEFT BIT(11)
+#define PANEL_HWC_LOCATION_X_MASK 0x7ff
#define PANEL_HWC_COLOR_12 0x0800F8
-#define PANEL_HWC_COLOR_12_2_RGB565 31:16
-#define PANEL_HWC_COLOR_12_1_RGB565 15:0
+#define PANEL_HWC_COLOR_12_2_RGB565_MASK (0xffff << 16)
+#define PANEL_HWC_COLOR_12_1_RGB565_MASK 0xffff
#define PANEL_HWC_COLOR_3 0x0800FC
-#define PANEL_HWC_COLOR_3_RGB565 15:0
+#define PANEL_HWC_COLOR_3_RGB565_MASK 0xffff
/* Old Definitions +++ */
#define PANEL_HWC_COLOR_01 0x0800F8
-#define PANEL_HWC_COLOR_01_1_RED 31:27
-#define PANEL_HWC_COLOR_01_1_GREEN 26:21
-#define PANEL_HWC_COLOR_01_1_BLUE 20:16
-#define PANEL_HWC_COLOR_01_0_RED 15:11
-#define PANEL_HWC_COLOR_01_0_GREEN 10:5
-#define PANEL_HWC_COLOR_01_0_BLUE 4:0
+#define PANEL_HWC_COLOR_01_1_RED_MASK (0x1f << 27)
+#define PANEL_HWC_COLOR_01_1_GREEN_MASK (0x3f << 21)
+#define PANEL_HWC_COLOR_01_1_BLUE_MASK (0x1f << 16)
+#define PANEL_HWC_COLOR_01_0_RED_MASK (0x1f << 11)
+#define PANEL_HWC_COLOR_01_0_GREEN_MASK (0x3f << 5)
+#define PANEL_HWC_COLOR_01_0_BLUE_MASK 0x1f
#define PANEL_HWC_COLOR_2 0x0800FC
-#define PANEL_HWC_COLOR_2_RED 15:11
-#define PANEL_HWC_COLOR_2_GREEN 10:5
-#define PANEL_HWC_COLOR_2_BLUE 4:0
+#define PANEL_HWC_COLOR_2_RED_MASK (0x1f << 11)
+#define PANEL_HWC_COLOR_2_GREEN_MASK (0x3f << 5)
+#define PANEL_HWC_COLOR_2_BLUE_MASK 0x1f
/* Old Definitions --- */
/* Alpha Control */
#define ALPHA_DISPLAY_CTRL 0x080100
-#define ALPHA_DISPLAY_CTRL_SELECT 28:28
-#define ALPHA_DISPLAY_CTRL_SELECT_PER_PIXEL 0
-#define ALPHA_DISPLAY_CTRL_SELECT_ALPHA 1
-#define ALPHA_DISPLAY_CTRL_ALPHA 27:24
-#define ALPHA_DISPLAY_CTRL_FIFO 17:16
-#define ALPHA_DISPLAY_CTRL_FIFO_1 0
-#define ALPHA_DISPLAY_CTRL_FIFO_3 1
-#define ALPHA_DISPLAY_CTRL_FIFO_7 2
-#define ALPHA_DISPLAY_CTRL_FIFO_11 3
-#define ALPHA_DISPLAY_CTRL_PIXEL 7:4
-#define ALPHA_DISPLAY_CTRL_CHROMA_KEY 3:3
-#define ALPHA_DISPLAY_CTRL_CHROMA_KEY_DISABLE 0
-#define ALPHA_DISPLAY_CTRL_CHROMA_KEY_ENABLE 1
-#define ALPHA_DISPLAY_CTRL_FORMAT 1:0
-#define ALPHA_DISPLAY_CTRL_FORMAT_16 1
-#define ALPHA_DISPLAY_CTRL_FORMAT_ALPHA_4_4 2
-#define ALPHA_DISPLAY_CTRL_FORMAT_ALPHA_4_4_4_4 3
+#define ALPHA_DISPLAY_CTRL_SELECT BIT(28)
+#define ALPHA_DISPLAY_CTRL_ALPHA_MASK (0xf << 24)
+#define ALPHA_DISPLAY_CTRL_FIFO_MASK (0x3 << 16)
+#define ALPHA_DISPLAY_CTRL_FIFO_1 (0x0 << 16)
+#define ALPHA_DISPLAY_CTRL_FIFO_3 (0x1 << 16)
+#define ALPHA_DISPLAY_CTRL_FIFO_7 (0x2 << 16)
+#define ALPHA_DISPLAY_CTRL_FIFO_11 (0x3 << 16)
+#define ALPHA_DISPLAY_CTRL_PIXEL_MASK (0xf << 4)
+#define ALPHA_DISPLAY_CTRL_CHROMA_KEY BIT(3)
+#define ALPHA_DISPLAY_CTRL_FORMAT_MASK 0x3
+#define ALPHA_DISPLAY_CTRL_FORMAT_16 0x1
+#define ALPHA_DISPLAY_CTRL_FORMAT_ALPHA_4_4 0x2
+#define ALPHA_DISPLAY_CTRL_FORMAT_ALPHA_4_4_4_4 0x3
#define ALPHA_FB_ADDRESS 0x080104
-#define ALPHA_FB_ADDRESS_STATUS 31:31
-#define ALPHA_FB_ADDRESS_STATUS_CURRENT 0
-#define ALPHA_FB_ADDRESS_STATUS_PENDING 1
-#define ALPHA_FB_ADDRESS_EXT 27:27
-#define ALPHA_FB_ADDRESS_EXT_LOCAL 0
-#define ALPHA_FB_ADDRESS_EXT_EXTERNAL 1
-#define ALPHA_FB_ADDRESS_ADDRESS 25:0
+#define ALPHA_FB_ADDRESS_STATUS BIT(31)
+#define ALPHA_FB_ADDRESS_EXT BIT(27)
+#define ALPHA_FB_ADDRESS_ADDRESS_MASK 0x3ffffff
#define ALPHA_FB_WIDTH 0x080108
-#define ALPHA_FB_WIDTH_WIDTH 29:16
-#define ALPHA_FB_WIDTH_OFFSET 13:0
+#define ALPHA_FB_WIDTH_WIDTH_MASK (0x3fff << 16)
+#define ALPHA_FB_WIDTH_OFFSET_MASK 0x3fff
#define ALPHA_PLANE_TL 0x08010C
-#define ALPHA_PLANE_TL_TOP 26:16
-#define ALPHA_PLANE_TL_LEFT 10:0
+#define ALPHA_PLANE_TL_TOP_MASK (0x7ff << 16)
+#define ALPHA_PLANE_TL_LEFT_MASK 0x7ff
#define ALPHA_PLANE_BR 0x080110
-#define ALPHA_PLANE_BR_BOTTOM 26:16
-#define ALPHA_PLANE_BR_RIGHT 10:0
+#define ALPHA_PLANE_BR_BOTTOM_MASK (0x7ff << 16)
+#define ALPHA_PLANE_BR_RIGHT_MASK 0x7ff
#define ALPHA_CHROMA_KEY 0x080114
-#define ALPHA_CHROMA_KEY_MASK 31:16
-#define ALPHA_CHROMA_KEY_VALUE 15:0
+#define ALPHA_CHROMA_KEY_MASK_MASK (0xffff << 16)
+#define ALPHA_CHROMA_KEY_VALUE_MASK 0xffff
#define ALPHA_COLOR_LOOKUP_01 0x080118
-#define ALPHA_COLOR_LOOKUP_01_1 31:16
-#define ALPHA_COLOR_LOOKUP_01_1_RED 31:27
-#define ALPHA_COLOR_LOOKUP_01_1_GREEN 26:21
-#define ALPHA_COLOR_LOOKUP_01_1_BLUE 20:16
-#define ALPHA_COLOR_LOOKUP_01_0 15:0
-#define ALPHA_COLOR_LOOKUP_01_0_RED 15:11
-#define ALPHA_COLOR_LOOKUP_01_0_GREEN 10:5
-#define ALPHA_COLOR_LOOKUP_01_0_BLUE 4:0
+#define ALPHA_COLOR_LOOKUP_01_1_MASK (0xffff << 16)
+#define ALPHA_COLOR_LOOKUP_01_1_RED_MASK (0x1f << 27)
+#define ALPHA_COLOR_LOOKUP_01_1_GREEN_MASK (0x3f << 21)
+#define ALPHA_COLOR_LOOKUP_01_1_BLUE_MASK (0x1f << 16)
+#define ALPHA_COLOR_LOOKUP_01_0_MASK 0xffff
+#define ALPHA_COLOR_LOOKUP_01_0_RED_MASK (0x1f << 11)
+#define ALPHA_COLOR_LOOKUP_01_0_GREEN_MASK (0x3f << 5)
+#define ALPHA_COLOR_LOOKUP_01_0_BLUE_MASK 0x1f
#define ALPHA_COLOR_LOOKUP_23 0x08011C
-#define ALPHA_COLOR_LOOKUP_23_3 31:16
-#define ALPHA_COLOR_LOOKUP_23_3_RED 31:27
-#define ALPHA_COLOR_LOOKUP_23_3_GREEN 26:21
-#define ALPHA_COLOR_LOOKUP_23_3_BLUE 20:16
-#define ALPHA_COLOR_LOOKUP_23_2 15:0
-#define ALPHA_COLOR_LOOKUP_23_2_RED 15:11
-#define ALPHA_COLOR_LOOKUP_23_2_GREEN 10:5
-#define ALPHA_COLOR_LOOKUP_23_2_BLUE 4:0
+#define ALPHA_COLOR_LOOKUP_23_3_MASK (0xffff << 16)
+#define ALPHA_COLOR_LOOKUP_23_3_RED_MASK (0x1f << 27)
+#define ALPHA_COLOR_LOOKUP_23_3_GREEN_MASK (0x3f << 21)
+#define ALPHA_COLOR_LOOKUP_23_3_BLUE_MASK (0x1f << 16)
+#define ALPHA_COLOR_LOOKUP_23_2_MASK 0xffff
+#define ALPHA_COLOR_LOOKUP_23_2_RED_MASK (0x1f << 11)
+#define ALPHA_COLOR_LOOKUP_23_2_GREEN_MASK (0x3f << 5)
+#define ALPHA_COLOR_LOOKUP_23_2_BLUE_MASK 0x1f
#define ALPHA_COLOR_LOOKUP_45 0x080120
-#define ALPHA_COLOR_LOOKUP_45_5 31:16
-#define ALPHA_COLOR_LOOKUP_45_5_RED 31:27
-#define ALPHA_COLOR_LOOKUP_45_5_GREEN 26:21
-#define ALPHA_COLOR_LOOKUP_45_5_BLUE 20:16
-#define ALPHA_COLOR_LOOKUP_45_4 15:0
-#define ALPHA_COLOR_LOOKUP_45_4_RED 15:11
-#define ALPHA_COLOR_LOOKUP_45_4_GREEN 10:5
-#define ALPHA_COLOR_LOOKUP_45_4_BLUE 4:0
+#define ALPHA_COLOR_LOOKUP_45_5_MASK (0xffff << 16)
+#define ALPHA_COLOR_LOOKUP_45_5_RED_MASK (0x1f << 27)
+#define ALPHA_COLOR_LOOKUP_45_5_GREEN_MASK (0x3f << 21)
+#define ALPHA_COLOR_LOOKUP_45_5_BLUE_MASK (0x1f << 16)
+#define ALPHA_COLOR_LOOKUP_45_4_MASK 0xffff
+#define ALPHA_COLOR_LOOKUP_45_4_RED_MASK (0x1f << 11)
+#define ALPHA_COLOR_LOOKUP_45_4_GREEN_MASK (0x3f << 5)
+#define ALPHA_COLOR_LOOKUP_45_4_BLUE_MASK 0x1f
#define ALPHA_COLOR_LOOKUP_67 0x080124
-#define ALPHA_COLOR_LOOKUP_67_7 31:16
-#define ALPHA_COLOR_LOOKUP_67_7_RED 31:27
-#define ALPHA_COLOR_LOOKUP_67_7_GREEN 26:21
-#define ALPHA_COLOR_LOOKUP_67_7_BLUE 20:16
-#define ALPHA_COLOR_LOOKUP_67_6 15:0
-#define ALPHA_COLOR_LOOKUP_67_6_RED 15:11
-#define ALPHA_COLOR_LOOKUP_67_6_GREEN 10:5
-#define ALPHA_COLOR_LOOKUP_67_6_BLUE 4:0
+#define ALPHA_COLOR_LOOKUP_67_7_MASK (0xffff << 16)
+#define ALPHA_COLOR_LOOKUP_67_7_RED_MASK (0x1f << 27)
+#define ALPHA_COLOR_LOOKUP_67_7_GREEN_MASK (0x3f << 21)
+#define ALPHA_COLOR_LOOKUP_67_7_BLUE_MASK (0x1f << 16)
+#define ALPHA_COLOR_LOOKUP_67_6_MASK 0xffff
+#define ALPHA_COLOR_LOOKUP_67_6_RED_MASK (0x1f << 11)
+#define ALPHA_COLOR_LOOKUP_67_6_GREEN_MASK (0x3f << 5)
+#define ALPHA_COLOR_LOOKUP_67_6_BLUE_MASK 0x1f
#define ALPHA_COLOR_LOOKUP_89 0x080128
-#define ALPHA_COLOR_LOOKUP_89_9 31:16
-#define ALPHA_COLOR_LOOKUP_89_9_RED 31:27
-#define ALPHA_COLOR_LOOKUP_89_9_GREEN 26:21
-#define ALPHA_COLOR_LOOKUP_89_9_BLUE 20:16
-#define ALPHA_COLOR_LOOKUP_89_8 15:0
-#define ALPHA_COLOR_LOOKUP_89_8_RED 15:11
-#define ALPHA_COLOR_LOOKUP_89_8_GREEN 10:5
-#define ALPHA_COLOR_LOOKUP_89_8_BLUE 4:0
+#define ALPHA_COLOR_LOOKUP_89_9_MASK (0xffff << 16)
+#define ALPHA_COLOR_LOOKUP_89_9_RED_MASK (0x1f << 27)
+#define ALPHA_COLOR_LOOKUP_89_9_GREEN_MASK (0x3f << 21)
+#define ALPHA_COLOR_LOOKUP_89_9_BLUE_MASK (0x1f << 16)
+#define ALPHA_COLOR_LOOKUP_89_8_MASK 0xffff
+#define ALPHA_COLOR_LOOKUP_89_8_RED_MASK (0x1f << 11)
+#define ALPHA_COLOR_LOOKUP_89_8_GREEN_MASK (0x3f << 5)
+#define ALPHA_COLOR_LOOKUP_89_8_BLUE_MASK 0x1f
#define ALPHA_COLOR_LOOKUP_AB 0x08012C
-#define ALPHA_COLOR_LOOKUP_AB_B 31:16
-#define ALPHA_COLOR_LOOKUP_AB_B_RED 31:27
-#define ALPHA_COLOR_LOOKUP_AB_B_GREEN 26:21
-#define ALPHA_COLOR_LOOKUP_AB_B_BLUE 20:16
-#define ALPHA_COLOR_LOOKUP_AB_A 15:0
-#define ALPHA_COLOR_LOOKUP_AB_A_RED 15:11
-#define ALPHA_COLOR_LOOKUP_AB_A_GREEN 10:5
-#define ALPHA_COLOR_LOOKUP_AB_A_BLUE 4:0
+#define ALPHA_COLOR_LOOKUP_AB_B_MASK (0xffff << 16)
+#define ALPHA_COLOR_LOOKUP_AB_B_RED_MASK (0x1f << 27)
+#define ALPHA_COLOR_LOOKUP_AB_B_GREEN_MASK (0x3f << 21)
+#define ALPHA_COLOR_LOOKUP_AB_B_BLUE_MASK (0x1f << 16)
+#define ALPHA_COLOR_LOOKUP_AB_A_MASK 0xffff
+#define ALPHA_COLOR_LOOKUP_AB_A_RED_MASK (0x1f << 11)
+#define ALPHA_COLOR_LOOKUP_AB_A_GREEN_MASK (0x3f << 5)
+#define ALPHA_COLOR_LOOKUP_AB_A_BLUE_MASK 0x1f
#define ALPHA_COLOR_LOOKUP_CD 0x080130
-#define ALPHA_COLOR_LOOKUP_CD_D 31:16
-#define ALPHA_COLOR_LOOKUP_CD_D_RED 31:27
-#define ALPHA_COLOR_LOOKUP_CD_D_GREEN 26:21
-#define ALPHA_COLOR_LOOKUP_CD_D_BLUE 20:16
-#define ALPHA_COLOR_LOOKUP_CD_C 15:0
-#define ALPHA_COLOR_LOOKUP_CD_C_RED 15:11
-#define ALPHA_COLOR_LOOKUP_CD_C_GREEN 10:5
-#define ALPHA_COLOR_LOOKUP_CD_C_BLUE 4:0
+#define ALPHA_COLOR_LOOKUP_CD_D_MASK (0xffff << 16)
+#define ALPHA_COLOR_LOOKUP_CD_D_RED_MASK (0x1f << 27)
+#define ALPHA_COLOR_LOOKUP_CD_D_GREEN_MASK (0x3f << 21)
+#define ALPHA_COLOR_LOOKUP_CD_D_BLUE_MASK (0x1f << 16)
+#define ALPHA_COLOR_LOOKUP_CD_C_MASK 0xffff
+#define ALPHA_COLOR_LOOKUP_CD_C_RED_MASK (0x1f << 11)
+#define ALPHA_COLOR_LOOKUP_CD_C_GREEN_MASK (0x3f << 5)
+#define ALPHA_COLOR_LOOKUP_CD_C_BLUE_MASK 0x1f
#define ALPHA_COLOR_LOOKUP_EF 0x080134
-#define ALPHA_COLOR_LOOKUP_EF_F 31:16
-#define ALPHA_COLOR_LOOKUP_EF_F_RED 31:27
-#define ALPHA_COLOR_LOOKUP_EF_F_GREEN 26:21
-#define ALPHA_COLOR_LOOKUP_EF_F_BLUE 20:16
-#define ALPHA_COLOR_LOOKUP_EF_E 15:0
-#define ALPHA_COLOR_LOOKUP_EF_E_RED 15:11
-#define ALPHA_COLOR_LOOKUP_EF_E_GREEN 10:5
-#define ALPHA_COLOR_LOOKUP_EF_E_BLUE 4:0
+#define ALPHA_COLOR_LOOKUP_EF_F_MASK (0xffff << 16)
+#define ALPHA_COLOR_LOOKUP_EF_F_RED_MASK (0x1f << 27)
+#define ALPHA_COLOR_LOOKUP_EF_F_GREEN_MASK (0x3f << 21)
+#define ALPHA_COLOR_LOOKUP_EF_F_BLUE_MASK (0x1f << 16)
+#define ALPHA_COLOR_LOOKUP_EF_E_MASK 0xffff
+#define ALPHA_COLOR_LOOKUP_EF_E_RED_MASK (0x1f << 11)
+#define ALPHA_COLOR_LOOKUP_EF_E_GREEN_MASK (0x3f << 5)
+#define ALPHA_COLOR_LOOKUP_EF_E_BLUE_MASK 0x1f
/* CRT Graphics Control */
@@ -1364,125 +1045,107 @@
#define CRT_DISPLAY_CTRL_FORMAT_32 (0x2 << 0)
#define CRT_FB_ADDRESS 0x080204
-#define CRT_FB_ADDRESS_STATUS 31:31
-#define CRT_FB_ADDRESS_STATUS_CURRENT 0
-#define CRT_FB_ADDRESS_STATUS_PENDING 1
-#define CRT_FB_ADDRESS_EXT 27:27
-#define CRT_FB_ADDRESS_EXT_LOCAL 0
-#define CRT_FB_ADDRESS_EXT_EXTERNAL 1
-#define CRT_FB_ADDRESS_ADDRESS 25:0
+#define CRT_FB_ADDRESS_STATUS BIT(31)
+#define CRT_FB_ADDRESS_EXT BIT(27)
+#define CRT_FB_ADDRESS_ADDRESS_MASK 0x3ffffff
#define CRT_FB_WIDTH 0x080208
-#define CRT_FB_WIDTH_WIDTH 29:16
-#define CRT_FB_WIDTH_OFFSET 13:0
+#define CRT_FB_WIDTH_WIDTH_SHIFT 16
+#define CRT_FB_WIDTH_WIDTH_MASK (0x3fff << 16)
+#define CRT_FB_WIDTH_OFFSET_MASK 0x3fff
#define CRT_HORIZONTAL_TOTAL 0x08020C
-#define CRT_HORIZONTAL_TOTAL_TOTAL 27:16
-#define CRT_HORIZONTAL_TOTAL_DISPLAY_END 11:0
+#define CRT_HORIZONTAL_TOTAL_TOTAL_SHIFT 16
+#define CRT_HORIZONTAL_TOTAL_TOTAL_MASK (0xfff << 16)
+#define CRT_HORIZONTAL_TOTAL_DISPLAY_END_MASK 0xfff
#define CRT_HORIZONTAL_SYNC 0x080210
-#define CRT_HORIZONTAL_SYNC_WIDTH 23:16
-#define CRT_HORIZONTAL_SYNC_START 11:0
+#define CRT_HORIZONTAL_SYNC_WIDTH_SHIFT 16
+#define CRT_HORIZONTAL_SYNC_WIDTH_MASK (0xff << 16)
+#define CRT_HORIZONTAL_SYNC_START_MASK 0xfff
#define CRT_VERTICAL_TOTAL 0x080214
-#define CRT_VERTICAL_TOTAL_TOTAL 26:16
-#define CRT_VERTICAL_TOTAL_DISPLAY_END 10:0
+#define CRT_VERTICAL_TOTAL_TOTAL_SHIFT 16
+#define CRT_VERTICAL_TOTAL_TOTAL_MASK (0x7ff << 16)
+#define CRT_VERTICAL_TOTAL_DISPLAY_END_MASK 0x7ff
#define CRT_VERTICAL_SYNC 0x080218
-#define CRT_VERTICAL_SYNC_HEIGHT 21:16
-#define CRT_VERTICAL_SYNC_START 10:0
+#define CRT_VERTICAL_SYNC_HEIGHT_SHIFT 16
+#define CRT_VERTICAL_SYNC_HEIGHT_MASK (0x3f << 16)
+#define CRT_VERTICAL_SYNC_START_MASK 0x7ff
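Where a field value comes from a variable, a matching _SHIFT constant sits next
to the _MASK. Composing the CRT vertical sync register from mode-line numbers
might look like this sketch (vsync_height and vsync_start are illustrative
locals, and poke32() is again an assumed MMIO write helper):

	u32 val = ((vsync_height << CRT_VERTICAL_SYNC_HEIGHT_SHIFT) &
		   CRT_VERTICAL_SYNC_HEIGHT_MASK) |
		  (vsync_start & CRT_VERTICAL_SYNC_START_MASK);

	poke32(CRT_VERTICAL_SYNC, val);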
#define CRT_SIGNATURE_ANALYZER 0x08021C
-#define CRT_SIGNATURE_ANALYZER_STATUS 31:16
-#define CRT_SIGNATURE_ANALYZER_ENABLE 3:3
-#define CRT_SIGNATURE_ANALYZER_ENABLE_DISABLE 0
-#define CRT_SIGNATURE_ANALYZER_ENABLE_ENABLE 1
-#define CRT_SIGNATURE_ANALYZER_RESET 2:2
-#define CRT_SIGNATURE_ANALYZER_RESET_NORMAL 0
-#define CRT_SIGNATURE_ANALYZER_RESET_RESET 1
-#define CRT_SIGNATURE_ANALYZER_SOURCE 1:0
+#define CRT_SIGNATURE_ANALYZER_STATUS_MASK (0xffff << 16)
+#define CRT_SIGNATURE_ANALYZER_ENABLE BIT(3)
+#define CRT_SIGNATURE_ANALYZER_RESET BIT(2)
+#define CRT_SIGNATURE_ANALYZER_SOURCE_MASK 0x3
#define CRT_SIGNATURE_ANALYZER_SOURCE_RED 0
#define CRT_SIGNATURE_ANALYZER_SOURCE_GREEN 1
#define CRT_SIGNATURE_ANALYZER_SOURCE_BLUE 2
#define CRT_CURRENT_LINE 0x080220
-#define CRT_CURRENT_LINE_LINE 10:0
+#define CRT_CURRENT_LINE_LINE_MASK 0x7ff
#define CRT_MONITOR_DETECT 0x080224
-#define CRT_MONITOR_DETECT_VALUE 25:25
-#define CRT_MONITOR_DETECT_VALUE_DISABLE 0
-#define CRT_MONITOR_DETECT_VALUE_ENABLE 1
-#define CRT_MONITOR_DETECT_ENABLE 24:24
-#define CRT_MONITOR_DETECT_ENABLE_DISABLE 0
-#define CRT_MONITOR_DETECT_ENABLE_ENABLE 1
-#define CRT_MONITOR_DETECT_RED 23:16
-#define CRT_MONITOR_DETECT_GREEN 15:8
-#define CRT_MONITOR_DETECT_BLUE 7:0
+#define CRT_MONITOR_DETECT_VALUE BIT(25)
+#define CRT_MONITOR_DETECT_ENABLE BIT(24)
+#define CRT_MONITOR_DETECT_RED_MASK (0xff << 16)
+#define CRT_MONITOR_DETECT_GREEN_MASK (0xff << 8)
+#define CRT_MONITOR_DETECT_BLUE_MASK 0xff
#define CRT_SCALE 0x080228
-#define CRT_SCALE_VERTICAL_MODE 31:31
-#define CRT_SCALE_VERTICAL_MODE_EXPAND 0
-#define CRT_SCALE_VERTICAL_MODE_SHRINK 1
-#define CRT_SCALE_VERTICAL_SCALE 27:16
-#define CRT_SCALE_HORIZONTAL_MODE 15:15
-#define CRT_SCALE_HORIZONTAL_MODE_EXPAND 0
-#define CRT_SCALE_HORIZONTAL_MODE_SHRINK 1
-#define CRT_SCALE_HORIZONTAL_SCALE 11:0
+#define CRT_SCALE_VERTICAL_MODE BIT(31)
+#define CRT_SCALE_VERTICAL_SCALE_MASK (0xfff << 16)
+#define CRT_SCALE_HORIZONTAL_MODE BIT(15)
+#define CRT_SCALE_HORIZONTAL_SCALE_MASK 0xfff
/* CRT Cursor Control */
#define CRT_HWC_ADDRESS 0x080230
-#define CRT_HWC_ADDRESS_ENABLE 31:31
-#define CRT_HWC_ADDRESS_ENABLE_DISABLE 0
-#define CRT_HWC_ADDRESS_ENABLE_ENABLE 1
-#define CRT_HWC_ADDRESS_EXT 27:27
-#define CRT_HWC_ADDRESS_EXT_LOCAL 0
-#define CRT_HWC_ADDRESS_EXT_EXTERNAL 1
-#define CRT_HWC_ADDRESS_ADDRESS 25:0
+#define CRT_HWC_ADDRESS_ENABLE BIT(31)
+#define CRT_HWC_ADDRESS_EXT BIT(27)
+#define CRT_HWC_ADDRESS_ADDRESS_MASK 0x3ffffff
#define CRT_HWC_LOCATION 0x080234
-#define CRT_HWC_LOCATION_TOP 27:27
-#define CRT_HWC_LOCATION_TOP_INSIDE 0
-#define CRT_HWC_LOCATION_TOP_OUTSIDE 1
-#define CRT_HWC_LOCATION_Y 26:16
-#define CRT_HWC_LOCATION_LEFT 11:11
-#define CRT_HWC_LOCATION_LEFT_INSIDE 0
-#define CRT_HWC_LOCATION_LEFT_OUTSIDE 1
-#define CRT_HWC_LOCATION_X 10:0
+#define CRT_HWC_LOCATION_TOP BIT(27)
+#define CRT_HWC_LOCATION_Y_MASK (0x7ff << 16)
+#define CRT_HWC_LOCATION_LEFT BIT(11)
+#define CRT_HWC_LOCATION_X_MASK 0x7ff
#define CRT_HWC_COLOR_12 0x080238
-#define CRT_HWC_COLOR_12_2_RGB565 31:16
-#define CRT_HWC_COLOR_12_1_RGB565 15:0
+#define CRT_HWC_COLOR_12_2_RGB565_MASK (0xffff << 16)
+#define CRT_HWC_COLOR_12_1_RGB565_MASK 0xffff
#define CRT_HWC_COLOR_3 0x08023C
-#define CRT_HWC_COLOR_3_RGB565 15:0
+#define CRT_HWC_COLOR_3_RGB565_MASK 0xffff
/* The vertical expansion registers below start at 0x080240 ~ 0x080264 */
#define CRT_VERTICAL_EXPANSION 0x080240
#ifndef VALIDATION_CHIP
- #define CRT_VERTICAL_CENTERING_VALUE 31:24
+ #define CRT_VERTICAL_CENTERING_VALUE_MASK (0xff << 24)
#endif
-#define CRT_VERTICAL_EXPANSION_COMPARE_VALUE 23:16
-#define CRT_VERTICAL_EXPANSION_LINE_BUFFER 15:12
-#define CRT_VERTICAL_EXPANSION_SCALE_FACTOR 11:0
+#define CRT_VERTICAL_EXPANSION_COMPARE_VALUE_MASK (0xff << 16)
+#define CRT_VERTICAL_EXPANSION_LINE_BUFFER_MASK (0xf << 12)
+#define CRT_VERTICAL_EXPANSION_SCALE_FACTOR_MASK 0xfff
/* This horizontal expansion below start at 0x080268 ~ 0x08027C */
#define CRT_HORIZONTAL_EXPANSION 0x080268
#ifndef VALIDATION_CHIP
- #define CRT_HORIZONTAL_CENTERING_VALUE 31:24
+ #define CRT_HORIZONTAL_CENTERING_VALUE_MASK (0xff << 24)
#endif
-#define CRT_HORIZONTAL_EXPANSION_COMPARE_VALUE 23:16
-#define CRT_HORIZONTAL_EXPANSION_SCALE_FACTOR 11:0
+#define CRT_HORIZONTAL_EXPANSION_COMPARE_VALUE_MASK (0xff << 16)
+#define CRT_HORIZONTAL_EXPANSION_SCALE_FACTOR_MASK 0xfff
#ifndef VALIDATION_CHIP
/* Auto Centering */
#define CRT_AUTO_CENTERING_TL 0x080280
- #define CRT_AUTO_CENTERING_TL_TOP 26:16
- #define CRT_AUTO_CENTERING_TL_LEFT 10:0
+ #define CRT_AUTO_CENTERING_TL_TOP_MASK (0x7ff << 16)
+ #define CRT_AUTO_CENTERING_TL_LEFT_MASK 0x7ff
#define CRT_AUTO_CENTERING_BR 0x080284
- #define CRT_AUTO_CENTERING_BR_BOTTOM 26:16
- #define CRT_AUTO_CENTERING_BR_RIGHT 10:0
+ #define CRT_AUTO_CENTERING_BR_BOTTOM_MASK (0x7ff << 16)
+ #define CRT_AUTO_CENTERING_BR_BOTTOM_SHIFT 16
+ #define CRT_AUTO_CENTERING_BR_RIGHT_MASK 0x7ff
#endif
/* sm750le new register to control panel output */
@@ -1498,112 +1161,86 @@
/* Color Space Conversion registers. */
#define CSC_Y_SOURCE_BASE 0x1000C8
-#define CSC_Y_SOURCE_BASE_EXT 27:27
-#define CSC_Y_SOURCE_BASE_EXT_LOCAL 0
-#define CSC_Y_SOURCE_BASE_EXT_EXTERNAL 1
-#define CSC_Y_SOURCE_BASE_CS 26:26
-#define CSC_Y_SOURCE_BASE_CS_0 0
-#define CSC_Y_SOURCE_BASE_CS_1 1
-#define CSC_Y_SOURCE_BASE_ADDRESS 25:0
+#define CSC_Y_SOURCE_BASE_EXT BIT(27)
+#define CSC_Y_SOURCE_BASE_CS BIT(26)
+#define CSC_Y_SOURCE_BASE_ADDRESS_MASK 0x3ffffff
#define CSC_CONSTANTS 0x1000CC
-#define CSC_CONSTANTS_Y 31:24
-#define CSC_CONSTANTS_R 23:16
-#define CSC_CONSTANTS_G 15:8
-#define CSC_CONSTANTS_B 7:0
+#define CSC_CONSTANTS_Y_MASK (0xff << 24)
+#define CSC_CONSTANTS_R_MASK (0xff << 16)
+#define CSC_CONSTANTS_G_MASK (0xff << 8)
+#define CSC_CONSTANTS_B_MASK 0xff
#define CSC_Y_SOURCE_X 0x1000D0
-#define CSC_Y_SOURCE_X_INTEGER 26:16
-#define CSC_Y_SOURCE_X_FRACTION 15:3
+#define CSC_Y_SOURCE_X_INTEGER_MASK (0x7ff << 16)
+#define CSC_Y_SOURCE_X_FRACTION_MASK (0x1fff << 3)
#define CSC_Y_SOURCE_Y 0x1000D4
-#define CSC_Y_SOURCE_Y_INTEGER 27:16
-#define CSC_Y_SOURCE_Y_FRACTION 15:3
+#define CSC_Y_SOURCE_Y_INTEGER_MASK (0xfff << 16)
+#define CSC_Y_SOURCE_Y_FRACTION_MASK (0x1fff << 3)
#define CSC_U_SOURCE_BASE 0x1000D8
-#define CSC_U_SOURCE_BASE_EXT 27:27
-#define CSC_U_SOURCE_BASE_EXT_LOCAL 0
-#define CSC_U_SOURCE_BASE_EXT_EXTERNAL 1
-#define CSC_U_SOURCE_BASE_CS 26:26
-#define CSC_U_SOURCE_BASE_CS_0 0
-#define CSC_U_SOURCE_BASE_CS_1 1
-#define CSC_U_SOURCE_BASE_ADDRESS 25:0
+#define CSC_U_SOURCE_BASE_EXT BIT(27)
+#define CSC_U_SOURCE_BASE_CS BIT(26)
+#define CSC_U_SOURCE_BASE_ADDRESS_MASK 0x3ffffff
#define CSC_V_SOURCE_BASE 0x1000DC
-#define CSC_V_SOURCE_BASE_EXT 27:27
-#define CSC_V_SOURCE_BASE_EXT_LOCAL 0
-#define CSC_V_SOURCE_BASE_EXT_EXTERNAL 1
-#define CSC_V_SOURCE_BASE_CS 26:26
-#define CSC_V_SOURCE_BASE_CS_0 0
-#define CSC_V_SOURCE_BASE_CS_1 1
-#define CSC_V_SOURCE_BASE_ADDRESS 25:0
+#define CSC_V_SOURCE_BASE_EXT BIT(27)
+#define CSC_V_SOURCE_BASE_CS BIT(26)
+#define CSC_V_SOURCE_BASE_ADDRESS_MASK 0x3ffffff
#define CSC_SOURCE_DIMENSION 0x1000E0
-#define CSC_SOURCE_DIMENSION_X 31:16
-#define CSC_SOURCE_DIMENSION_Y 15:0
+#define CSC_SOURCE_DIMENSION_X_MASK (0xffff << 16)
+#define CSC_SOURCE_DIMENSION_Y_MASK 0xffff
#define CSC_SOURCE_PITCH 0x1000E4
-#define CSC_SOURCE_PITCH_Y 31:16
-#define CSC_SOURCE_PITCH_UV 15:0
+#define CSC_SOURCE_PITCH_Y_MASK (0xffff << 16)
+#define CSC_SOURCE_PITCH_UV_MASK 0xffff
#define CSC_DESTINATION 0x1000E8
-#define CSC_DESTINATION_WRAP 31:31
-#define CSC_DESTINATION_WRAP_DISABLE 0
-#define CSC_DESTINATION_WRAP_ENABLE 1
-#define CSC_DESTINATION_X 27:16
-#define CSC_DESTINATION_Y 11:0
+#define CSC_DESTINATION_WRAP BIT(31)
+#define CSC_DESTINATION_X_MASK (0xfff << 16)
+#define CSC_DESTINATION_Y_MASK 0xfff
#define CSC_DESTINATION_DIMENSION 0x1000EC
-#define CSC_DESTINATION_DIMENSION_X 31:16
-#define CSC_DESTINATION_DIMENSION_Y 15:0
+#define CSC_DESTINATION_DIMENSION_X_MASK (0xffff << 16)
+#define CSC_DESTINATION_DIMENSION_Y_MASK 0xffff
#define CSC_DESTINATION_PITCH 0x1000F0
-#define CSC_DESTINATION_PITCH_X 31:16
-#define CSC_DESTINATION_PITCH_Y 15:0
+#define CSC_DESTINATION_PITCH_X_MASK (0xffff << 16)
+#define CSC_DESTINATION_PITCH_Y_MASK 0xffff
#define CSC_SCALE_FACTOR 0x1000F4
-#define CSC_SCALE_FACTOR_HORIZONTAL 31:16
-#define CSC_SCALE_FACTOR_VERTICAL 15:0
+#define CSC_SCALE_FACTOR_HORIZONTAL_MASK (0xffff << 16)
+#define CSC_SCALE_FACTOR_VERTICAL_MASK 0xffff
#define CSC_DESTINATION_BASE 0x1000F8
-#define CSC_DESTINATION_BASE_EXT 27:27
-#define CSC_DESTINATION_BASE_EXT_LOCAL 0
-#define CSC_DESTINATION_BASE_EXT_EXTERNAL 1
-#define CSC_DESTINATION_BASE_CS 26:26
-#define CSC_DESTINATION_BASE_CS_0 0
-#define CSC_DESTINATION_BASE_CS_1 1
-#define CSC_DESTINATION_BASE_ADDRESS 25:0
+#define CSC_DESTINATION_BASE_EXT BIT(27)
+#define CSC_DESTINATION_BASE_CS BIT(26)
+#define CSC_DESTINATION_BASE_ADDRESS_MASK 0x3ffffff
#define CSC_CONTROL 0x1000FC
-#define CSC_CONTROL_STATUS 31:31
-#define CSC_CONTROL_STATUS_STOP 0
-#define CSC_CONTROL_STATUS_START 1
-#define CSC_CONTROL_SOURCE_FORMAT 30:28
-#define CSC_CONTROL_SOURCE_FORMAT_YUV422 0
-#define CSC_CONTROL_SOURCE_FORMAT_YUV420I 1
-#define CSC_CONTROL_SOURCE_FORMAT_YUV420 2
-#define CSC_CONTROL_SOURCE_FORMAT_YVU9 3
-#define CSC_CONTROL_SOURCE_FORMAT_IYU1 4
-#define CSC_CONTROL_SOURCE_FORMAT_IYU2 5
-#define CSC_CONTROL_SOURCE_FORMAT_RGB565 6
-#define CSC_CONTROL_SOURCE_FORMAT_RGB8888 7
-#define CSC_CONTROL_DESTINATION_FORMAT 27:26
-#define CSC_CONTROL_DESTINATION_FORMAT_RGB565 0
-#define CSC_CONTROL_DESTINATION_FORMAT_RGB8888 1
-#define CSC_CONTROL_HORIZONTAL_FILTER 25:25
-#define CSC_CONTROL_HORIZONTAL_FILTER_DISABLE 0
-#define CSC_CONTROL_HORIZONTAL_FILTER_ENABLE 1
-#define CSC_CONTROL_VERTICAL_FILTER 24:24
-#define CSC_CONTROL_VERTICAL_FILTER_DISABLE 0
-#define CSC_CONTROL_VERTICAL_FILTER_ENABLE 1
-#define CSC_CONTROL_BYTE_ORDER 23:23
-#define CSC_CONTROL_BYTE_ORDER_YUYV 0
-#define CSC_CONTROL_BYTE_ORDER_UYVY 1
+#define CSC_CONTROL_STATUS BIT(31)
+#define CSC_CONTROL_SOURCE_FORMAT_MASK (0x7 << 28)
+#define CSC_CONTROL_SOURCE_FORMAT_YUV422 (0x0 << 28)
+#define CSC_CONTROL_SOURCE_FORMAT_YUV420I (0x1 << 28)
+#define CSC_CONTROL_SOURCE_FORMAT_YUV420 (0x2 << 28)
+#define CSC_CONTROL_SOURCE_FORMAT_YVU9 (0x3 << 28)
+#define CSC_CONTROL_SOURCE_FORMAT_IYU1 (0x4 << 28)
+#define CSC_CONTROL_SOURCE_FORMAT_IYU2 (0x5 << 28)
+#define CSC_CONTROL_SOURCE_FORMAT_RGB565 (0x6 << 28)
+#define CSC_CONTROL_SOURCE_FORMAT_RGB8888 (0x7 << 28)
+#define CSC_CONTROL_DESTINATION_FORMAT_MASK (0x3 << 26)
+#define CSC_CONTROL_DESTINATION_FORMAT_RGB565 (0x0 << 26)
+#define CSC_CONTROL_DESTINATION_FORMAT_RGB8888 (0x1 << 26)
+#define CSC_CONTROL_HORIZONTAL_FILTER BIT(25)
+#define CSC_CONTROL_VERTICAL_FILTER BIT(24)
+#define CSC_CONTROL_BYTE_ORDER BIT(23)
#define DE_DATA_PORT 0x110000
#define I2C_BYTE_COUNT 0x010040
-#define I2C_BYTE_COUNT_COUNT 3:0
+#define I2C_BYTE_COUNT_COUNT_MASK 0xf
#define I2C_CTRL 0x010041
#define I2C_CTRL_INT BIT(4)
@@ -1619,14 +1256,11 @@
#define I2C_STATUS_BSY BIT(0)
#define I2C_RESET 0x010042
-#define I2C_RESET_BUS_ERROR 2:2
-#define I2C_RESET_BUS_ERROR_CLEAR 0
+#define I2C_RESET_BUS_ERROR BIT(2)
#define I2C_SLAVE_ADDRESS 0x010043
-#define I2C_SLAVE_ADDRESS_ADDRESS 7:1
-#define I2C_SLAVE_ADDRESS_RW 0:0
-#define I2C_SLAVE_ADDRESS_RW_W 0
-#define I2C_SLAVE_ADDRESS_RW_R 1
+#define I2C_SLAVE_ADDRESS_ADDRESS_MASK (0x7f << 1)
+#define I2C_SLAVE_ADDRESS_RW BIT(0)
#define I2C_DATA0 0x010044
#define I2C_DATA1 0x010045
@@ -1647,120 +1281,59 @@
#define ZV0_CAPTURE_CTRL 0x090000
-#define ZV0_CAPTURE_CTRL_FIELD_INPUT 27:27
-#define ZV0_CAPTURE_CTRL_FIELD_INPUT_EVEN_FIELD 0
-#define ZV0_CAPTURE_CTRL_FIELD_INPUT_ODD_FIELD 1
-#define ZV0_CAPTURE_CTRL_SCAN 26:26
-#define ZV0_CAPTURE_CTRL_SCAN_PROGRESSIVE 0
-#define ZV0_CAPTURE_CTRL_SCAN_INTERLACE 1
-#define ZV0_CAPTURE_CTRL_CURRENT_BUFFER 25:25
-#define ZV0_CAPTURE_CTRL_CURRENT_BUFFER_0 0
-#define ZV0_CAPTURE_CTRL_CURRENT_BUFFER_1 1
-#define ZV0_CAPTURE_CTRL_VERTICAL_SYNC 24:24
-#define ZV0_CAPTURE_CTRL_VERTICAL_SYNC_INACTIVE 0
-#define ZV0_CAPTURE_CTRL_VERTICAL_SYNC_ACTIVE 1
-#define ZV0_CAPTURE_CTRL_ADJ 19:19
-#define ZV0_CAPTURE_CTRL_ADJ_NORMAL 0
-#define ZV0_CAPTURE_CTRL_ADJ_DELAY 1
-#define ZV0_CAPTURE_CTRL_HA 18:18
-#define ZV0_CAPTURE_CTRL_HA_DISABLE 0
-#define ZV0_CAPTURE_CTRL_HA_ENABLE 1
-#define ZV0_CAPTURE_CTRL_VSK 17:17
-#define ZV0_CAPTURE_CTRL_VSK_DISABLE 0
-#define ZV0_CAPTURE_CTRL_VSK_ENABLE 1
-#define ZV0_CAPTURE_CTRL_HSK 16:16
-#define ZV0_CAPTURE_CTRL_HSK_DISABLE 0
-#define ZV0_CAPTURE_CTRL_HSK_ENABLE 1
-#define ZV0_CAPTURE_CTRL_FD 15:15
-#define ZV0_CAPTURE_CTRL_FD_RISING 0
-#define ZV0_CAPTURE_CTRL_FD_FALLING 1
-#define ZV0_CAPTURE_CTRL_VP 14:14
-#define ZV0_CAPTURE_CTRL_VP_HIGH 0
-#define ZV0_CAPTURE_CTRL_VP_LOW 1
-#define ZV0_CAPTURE_CTRL_HP 13:13
-#define ZV0_CAPTURE_CTRL_HP_HIGH 0
-#define ZV0_CAPTURE_CTRL_HP_LOW 1
-#define ZV0_CAPTURE_CTRL_CP 12:12
-#define ZV0_CAPTURE_CTRL_CP_HIGH 0
-#define ZV0_CAPTURE_CTRL_CP_LOW 1
-#define ZV0_CAPTURE_CTRL_UVS 11:11
-#define ZV0_CAPTURE_CTRL_UVS_DISABLE 0
-#define ZV0_CAPTURE_CTRL_UVS_ENABLE 1
-#define ZV0_CAPTURE_CTRL_BS 10:10
-#define ZV0_CAPTURE_CTRL_BS_DISABLE 0
-#define ZV0_CAPTURE_CTRL_BS_ENABLE 1
-#define ZV0_CAPTURE_CTRL_CS 9:9
-#define ZV0_CAPTURE_CTRL_CS_16 0
-#define ZV0_CAPTURE_CTRL_CS_8 1
-#define ZV0_CAPTURE_CTRL_CF 8:8
-#define ZV0_CAPTURE_CTRL_CF_YUV 0
-#define ZV0_CAPTURE_CTRL_CF_RGB 1
-#define ZV0_CAPTURE_CTRL_FS 7:7
-#define ZV0_CAPTURE_CTRL_FS_DISABLE 0
-#define ZV0_CAPTURE_CTRL_FS_ENABLE 1
-#define ZV0_CAPTURE_CTRL_WEAVE 6:6
-#define ZV0_CAPTURE_CTRL_WEAVE_DISABLE 0
-#define ZV0_CAPTURE_CTRL_WEAVE_ENABLE 1
-#define ZV0_CAPTURE_CTRL_BOB 5:5
-#define ZV0_CAPTURE_CTRL_BOB_DISABLE 0
-#define ZV0_CAPTURE_CTRL_BOB_ENABLE 1
-#define ZV0_CAPTURE_CTRL_DB 4:4
-#define ZV0_CAPTURE_CTRL_DB_DISABLE 0
-#define ZV0_CAPTURE_CTRL_DB_ENABLE 1
-#define ZV0_CAPTURE_CTRL_CC 3:3
-#define ZV0_CAPTURE_CTRL_CC_CONTINUE 0
-#define ZV0_CAPTURE_CTRL_CC_CONDITION 1
-#define ZV0_CAPTURE_CTRL_RGB 2:2
-#define ZV0_CAPTURE_CTRL_RGB_DISABLE 0
-#define ZV0_CAPTURE_CTRL_RGB_ENABLE 1
-#define ZV0_CAPTURE_CTRL_656 1:1
-#define ZV0_CAPTURE_CTRL_656_DISABLE 0
-#define ZV0_CAPTURE_CTRL_656_ENABLE 1
-#define ZV0_CAPTURE_CTRL_CAP 0:0
-#define ZV0_CAPTURE_CTRL_CAP_DISABLE 0
-#define ZV0_CAPTURE_CTRL_CAP_ENABLE 1
+#define ZV0_CAPTURE_CTRL_FIELD_INPUT BIT(27)
+#define ZV0_CAPTURE_CTRL_SCAN BIT(26)
+#define ZV0_CAPTURE_CTRL_CURRENT_BUFFER BIT(25)
+#define ZV0_CAPTURE_CTRL_VERTICAL_SYNC BIT(24)
+#define ZV0_CAPTURE_CTRL_ADJ BIT(19)
+#define ZV0_CAPTURE_CTRL_HA BIT(18)
+#define ZV0_CAPTURE_CTRL_VSK BIT(17)
+#define ZV0_CAPTURE_CTRL_HSK BIT(16)
+#define ZV0_CAPTURE_CTRL_FD BIT(15)
+#define ZV0_CAPTURE_CTRL_VP BIT(14)
+#define ZV0_CAPTURE_CTRL_HP BIT(13)
+#define ZV0_CAPTURE_CTRL_CP BIT(12)
+#define ZV0_CAPTURE_CTRL_UVS BIT(11)
+#define ZV0_CAPTURE_CTRL_BS BIT(10)
+#define ZV0_CAPTURE_CTRL_CS BIT(9)
+#define ZV0_CAPTURE_CTRL_CF BIT(8)
+#define ZV0_CAPTURE_CTRL_FS BIT(7)
+#define ZV0_CAPTURE_CTRL_WEAVE BIT(6)
+#define ZV0_CAPTURE_CTRL_BOB BIT(5)
+#define ZV0_CAPTURE_CTRL_DB BIT(4)
+#define ZV0_CAPTURE_CTRL_CC BIT(3)
+#define ZV0_CAPTURE_CTRL_RGB BIT(2)
+#define ZV0_CAPTURE_CTRL_656 BIT(1)
+#define ZV0_CAPTURE_CTRL_CAP BIT(0)
#define ZV0_CAPTURE_CLIP 0x090004
-#define ZV0_CAPTURE_CLIP_YCLIP_EVEN_FIELD 25:16
-#define ZV0_CAPTURE_CLIP_YCLIP 25:16
-#define ZV0_CAPTURE_CLIP_XCLIP 9:0
+#define ZV0_CAPTURE_CLIP_EYCLIP_MASK (0x3ff << 16)
+#define ZV0_CAPTURE_CLIP_XCLIP_MASK 0x3ff
#define ZV0_CAPTURE_SIZE 0x090008
-#define ZV0_CAPTURE_SIZE_HEIGHT 26:16
-#define ZV0_CAPTURE_SIZE_WIDTH 10:0
+#define ZV0_CAPTURE_SIZE_HEIGHT_MASK (0x7ff << 16)
+#define ZV0_CAPTURE_SIZE_WIDTH_MASK 0x7ff
#define ZV0_CAPTURE_BUF0_ADDRESS 0x09000C
-#define ZV0_CAPTURE_BUF0_ADDRESS_STATUS 31:31
-#define ZV0_CAPTURE_BUF0_ADDRESS_STATUS_CURRENT 0
-#define ZV0_CAPTURE_BUF0_ADDRESS_STATUS_PENDING 1
-#define ZV0_CAPTURE_BUF0_ADDRESS_EXT 27:27
-#define ZV0_CAPTURE_BUF0_ADDRESS_EXT_LOCAL 0
-#define ZV0_CAPTURE_BUF0_ADDRESS_EXT_EXTERNAL 1
-#define ZV0_CAPTURE_BUF0_ADDRESS_CS 26:26
-#define ZV0_CAPTURE_BUF0_ADDRESS_CS_0 0
-#define ZV0_CAPTURE_BUF0_ADDRESS_CS_1 1
-#define ZV0_CAPTURE_BUF0_ADDRESS_ADDRESS 25:0
+#define ZV0_CAPTURE_BUF0_ADDRESS_STATUS BIT(31)
+#define ZV0_CAPTURE_BUF0_ADDRESS_EXT BIT(27)
+#define ZV0_CAPTURE_BUF0_ADDRESS_CS BIT(26)
+#define ZV0_CAPTURE_BUF0_ADDRESS_ADDRESS_MASK 0x3ffffff
#define ZV0_CAPTURE_BUF1_ADDRESS 0x090010
-#define ZV0_CAPTURE_BUF1_ADDRESS_STATUS 31:31
-#define ZV0_CAPTURE_BUF1_ADDRESS_STATUS_CURRENT 0
-#define ZV0_CAPTURE_BUF1_ADDRESS_STATUS_PENDING 1
-#define ZV0_CAPTURE_BUF1_ADDRESS_EXT 27:27
-#define ZV0_CAPTURE_BUF1_ADDRESS_EXT_LOCAL 0
-#define ZV0_CAPTURE_BUF1_ADDRESS_EXT_EXTERNAL 1
-#define ZV0_CAPTURE_BUF1_ADDRESS_CS 26:26
-#define ZV0_CAPTURE_BUF1_ADDRESS_CS_0 0
-#define ZV0_CAPTURE_BUF1_ADDRESS_CS_1 1
-#define ZV0_CAPTURE_BUF1_ADDRESS_ADDRESS 25:0
+#define ZV0_CAPTURE_BUF1_ADDRESS_STATUS BIT(31)
+#define ZV0_CAPTURE_BUF1_ADDRESS_EXT BIT(27)
+#define ZV0_CAPTURE_BUF1_ADDRESS_CS BIT(26)
+#define ZV0_CAPTURE_BUF1_ADDRESS_ADDRESS_MASK 0x3ffffff
#define ZV0_CAPTURE_BUF_OFFSET 0x090014
#ifndef VALIDATION_CHIP
- #define ZV0_CAPTURE_BUF_OFFSET_YCLIP_ODD_FIELD 25:16
+ #define ZV0_CAPTURE_BUF_OFFSET_YCLIP_ODD_FIELD (0x3ff << 16)
#endif
-#define ZV0_CAPTURE_BUF_OFFSET_OFFSET 15:0
+#define ZV0_CAPTURE_BUF_OFFSET_OFFSET_MASK 0xffff
#define ZV0_CAPTURE_FIFO_CTRL 0x090018
-#define ZV0_CAPTURE_FIFO_CTRL_FIFO 2:0
+#define ZV0_CAPTURE_FIFO_CTRL_FIFO_MASK 0x7
#define ZV0_CAPTURE_FIFO_CTRL_FIFO_0 0
#define ZV0_CAPTURE_FIFO_CTRL_FIFO_1 1
#define ZV0_CAPTURE_FIFO_CTRL_FIFO_2 2
@@ -1771,130 +1344,68 @@
#define ZV0_CAPTURE_FIFO_CTRL_FIFO_7 7
#define ZV0_CAPTURE_YRGB_CONST 0x09001C
-#define ZV0_CAPTURE_YRGB_CONST_Y 31:24
-#define ZV0_CAPTURE_YRGB_CONST_R 23:16
-#define ZV0_CAPTURE_YRGB_CONST_G 15:8
-#define ZV0_CAPTURE_YRGB_CONST_B 7:0
+#define ZV0_CAPTURE_YRGB_CONST_Y_MASK (0xff << 24)
+#define ZV0_CAPTURE_YRGB_CONST_R_MASK (0xff << 16)
+#define ZV0_CAPTURE_YRGB_CONST_G_MASK (0xff << 8)
+#define ZV0_CAPTURE_YRGB_CONST_B_MASK 0xff
#define ZV0_CAPTURE_LINE_COMP 0x090020
-#define ZV0_CAPTURE_LINE_COMP_LC 10:0
+#define ZV0_CAPTURE_LINE_COMP_LC_MASK 0x7ff
/* ZV1 */
#define ZV1_CAPTURE_CTRL 0x098000
-#define ZV1_CAPTURE_CTRL_FIELD_INPUT 27:27
-#define ZV1_CAPTURE_CTRL_FIELD_INPUT_EVEN_FIELD 0
-#define ZV1_CAPTURE_CTRL_FIELD_INPUT_ODD_FIELD 0
-#define ZV1_CAPTURE_CTRL_SCAN 26:26
-#define ZV1_CAPTURE_CTRL_SCAN_PROGRESSIVE 0
-#define ZV1_CAPTURE_CTRL_SCAN_INTERLACE 1
-#define ZV1_CAPTURE_CTRL_CURRENT_BUFFER 25:25
-#define ZV1_CAPTURE_CTRL_CURRENT_BUFFER_0 0
-#define ZV1_CAPTURE_CTRL_CURRENT_BUFFER_1 1
-#define ZV1_CAPTURE_CTRL_VERTICAL_SYNC 24:24
-#define ZV1_CAPTURE_CTRL_VERTICAL_SYNC_INACTIVE 0
-#define ZV1_CAPTURE_CTRL_VERTICAL_SYNC_ACTIVE 1
-#define ZV1_CAPTURE_CTRL_PANEL 20:20
-#define ZV1_CAPTURE_CTRL_PANEL_DISABLE 0
-#define ZV1_CAPTURE_CTRL_PANEL_ENABLE 1
-#define ZV1_CAPTURE_CTRL_ADJ 19:19
-#define ZV1_CAPTURE_CTRL_ADJ_NORMAL 0
-#define ZV1_CAPTURE_CTRL_ADJ_DELAY 1
-#define ZV1_CAPTURE_CTRL_HA 18:18
-#define ZV1_CAPTURE_CTRL_HA_DISABLE 0
-#define ZV1_CAPTURE_CTRL_HA_ENABLE 1
-#define ZV1_CAPTURE_CTRL_VSK 17:17
-#define ZV1_CAPTURE_CTRL_VSK_DISABLE 0
-#define ZV1_CAPTURE_CTRL_VSK_ENABLE 1
-#define ZV1_CAPTURE_CTRL_HSK 16:16
-#define ZV1_CAPTURE_CTRL_HSK_DISABLE 0
-#define ZV1_CAPTURE_CTRL_HSK_ENABLE 1
-#define ZV1_CAPTURE_CTRL_FD 15:15
-#define ZV1_CAPTURE_CTRL_FD_RISING 0
-#define ZV1_CAPTURE_CTRL_FD_FALLING 1
-#define ZV1_CAPTURE_CTRL_VP 14:14
-#define ZV1_CAPTURE_CTRL_VP_HIGH 0
-#define ZV1_CAPTURE_CTRL_VP_LOW 1
-#define ZV1_CAPTURE_CTRL_HP 13:13
-#define ZV1_CAPTURE_CTRL_HP_HIGH 0
-#define ZV1_CAPTURE_CTRL_HP_LOW 1
-#define ZV1_CAPTURE_CTRL_CP 12:12
-#define ZV1_CAPTURE_CTRL_CP_HIGH 0
-#define ZV1_CAPTURE_CTRL_CP_LOW 1
-#define ZV1_CAPTURE_CTRL_UVS 11:11
-#define ZV1_CAPTURE_CTRL_UVS_DISABLE 0
-#define ZV1_CAPTURE_CTRL_UVS_ENABLE 1
-#define ZV1_CAPTURE_CTRL_BS 10:10
-#define ZV1_CAPTURE_CTRL_BS_DISABLE 0
-#define ZV1_CAPTURE_CTRL_BS_ENABLE 1
-#define ZV1_CAPTURE_CTRL_CS 9:9
-#define ZV1_CAPTURE_CTRL_CS_16 0
-#define ZV1_CAPTURE_CTRL_CS_8 1
-#define ZV1_CAPTURE_CTRL_CF 8:8
-#define ZV1_CAPTURE_CTRL_CF_YUV 0
-#define ZV1_CAPTURE_CTRL_CF_RGB 1
-#define ZV1_CAPTURE_CTRL_FS 7:7
-#define ZV1_CAPTURE_CTRL_FS_DISABLE 0
-#define ZV1_CAPTURE_CTRL_FS_ENABLE 1
-#define ZV1_CAPTURE_CTRL_WEAVE 6:6
-#define ZV1_CAPTURE_CTRL_WEAVE_DISABLE 0
-#define ZV1_CAPTURE_CTRL_WEAVE_ENABLE 1
-#define ZV1_CAPTURE_CTRL_BOB 5:5
-#define ZV1_CAPTURE_CTRL_BOB_DISABLE 0
-#define ZV1_CAPTURE_CTRL_BOB_ENABLE 1
-#define ZV1_CAPTURE_CTRL_DB 4:4
-#define ZV1_CAPTURE_CTRL_DB_DISABLE 0
-#define ZV1_CAPTURE_CTRL_DB_ENABLE 1
-#define ZV1_CAPTURE_CTRL_CC 3:3
-#define ZV1_CAPTURE_CTRL_CC_CONTINUE 0
-#define ZV1_CAPTURE_CTRL_CC_CONDITION 1
-#define ZV1_CAPTURE_CTRL_RGB 2:2
-#define ZV1_CAPTURE_CTRL_RGB_DISABLE 0
-#define ZV1_CAPTURE_CTRL_RGB_ENABLE 1
-#define ZV1_CAPTURE_CTRL_656 1:1
-#define ZV1_CAPTURE_CTRL_656_DISABLE 0
-#define ZV1_CAPTURE_CTRL_656_ENABLE 1
-#define ZV1_CAPTURE_CTRL_CAP 0:0
-#define ZV1_CAPTURE_CTRL_CAP_DISABLE 0
-#define ZV1_CAPTURE_CTRL_CAP_ENABLE 1
+#define ZV1_CAPTURE_CTRL_FIELD_INPUT BIT(27)
+#define ZV1_CAPTURE_CTRL_SCAN BIT(26)
+#define ZV1_CAPTURE_CTRL_CURRENT_BUFFER BIT(25)
+#define ZV1_CAPTURE_CTRL_VERTICAL_SYNC BIT(24)
+#define ZV1_CAPTURE_CTRL_PANEL BIT(20)
+#define ZV1_CAPTURE_CTRL_ADJ BIT(19)
+#define ZV1_CAPTURE_CTRL_HA BIT(18)
+#define ZV1_CAPTURE_CTRL_VSK BIT(17)
+#define ZV1_CAPTURE_CTRL_HSK BIT(16)
+#define ZV1_CAPTURE_CTRL_FD BIT(15)
+#define ZV1_CAPTURE_CTRL_VP BIT(14)
+#define ZV1_CAPTURE_CTRL_HP BIT(13)
+#define ZV1_CAPTURE_CTRL_CP BIT(12)
+#define ZV1_CAPTURE_CTRL_UVS BIT(11)
+#define ZV1_CAPTURE_CTRL_BS BIT(10)
+#define ZV1_CAPTURE_CTRL_CS BIT(9)
+#define ZV1_CAPTURE_CTRL_CF BIT(8)
+#define ZV1_CAPTURE_CTRL_FS BIT(7)
+#define ZV1_CAPTURE_CTRL_WEAVE BIT(6)
+#define ZV1_CAPTURE_CTRL_BOB BIT(5)
+#define ZV1_CAPTURE_CTRL_DB BIT(4)
+#define ZV1_CAPTURE_CTRL_CC BIT(3)
+#define ZV1_CAPTURE_CTRL_RGB BIT(2)
+#define ZV1_CAPTURE_CTRL_656 BIT(1)
+#define ZV1_CAPTURE_CTRL_CAP BIT(0)
#define ZV1_CAPTURE_CLIP 0x098004
-#define ZV1_CAPTURE_CLIP_YCLIP 25:16
-#define ZV1_CAPTURE_CLIP_XCLIP 9:0
+#define ZV1_CAPTURE_CLIP_YCLIP_MASK (0x3ff << 16)
+#define ZV1_CAPTURE_CLIP_XCLIP_MASK 0x3ff
#define ZV1_CAPTURE_SIZE 0x098008
-#define ZV1_CAPTURE_SIZE_HEIGHT 26:16
-#define ZV1_CAPTURE_SIZE_WIDTH 10:0
+#define ZV1_CAPTURE_SIZE_HEIGHT_MASK (0x7ff << 16)
+#define ZV1_CAPTURE_SIZE_WIDTH_MASK 0x7ff
#define ZV1_CAPTURE_BUF0_ADDRESS 0x09800C
-#define ZV1_CAPTURE_BUF0_ADDRESS_STATUS 31:31
-#define ZV1_CAPTURE_BUF0_ADDRESS_STATUS_CURRENT 0
-#define ZV1_CAPTURE_BUF0_ADDRESS_STATUS_PENDING 1
-#define ZV1_CAPTURE_BUF0_ADDRESS_EXT 27:27
-#define ZV1_CAPTURE_BUF0_ADDRESS_EXT_LOCAL 0
-#define ZV1_CAPTURE_BUF0_ADDRESS_EXT_EXTERNAL 1
-#define ZV1_CAPTURE_BUF0_ADDRESS_CS 26:26
-#define ZV1_CAPTURE_BUF0_ADDRESS_CS_0 0
-#define ZV1_CAPTURE_BUF0_ADDRESS_CS_1 1
-#define ZV1_CAPTURE_BUF0_ADDRESS_ADDRESS 25:0
+#define ZV1_CAPTURE_BUF0_ADDRESS_STATUS BIT(31)
+#define ZV1_CAPTURE_BUF0_ADDRESS_EXT BIT(27)
+#define ZV1_CAPTURE_BUF0_ADDRESS_CS BIT(26)
+#define ZV1_CAPTURE_BUF0_ADDRESS_ADDRESS_MASK 0x3ffffff
#define ZV1_CAPTURE_BUF1_ADDRESS 0x098010
-#define ZV1_CAPTURE_BUF1_ADDRESS_STATUS 31:31
-#define ZV1_CAPTURE_BUF1_ADDRESS_STATUS_CURRENT 0
-#define ZV1_CAPTURE_BUF1_ADDRESS_STATUS_PENDING 1
-#define ZV1_CAPTURE_BUF1_ADDRESS_EXT 27:27
-#define ZV1_CAPTURE_BUF1_ADDRESS_EXT_LOCAL 0
-#define ZV1_CAPTURE_BUF1_ADDRESS_EXT_EXTERNAL 1
-#define ZV1_CAPTURE_BUF1_ADDRESS_CS 26:26
-#define ZV1_CAPTURE_BUF1_ADDRESS_CS_0 0
-#define ZV1_CAPTURE_BUF1_ADDRESS_CS_1 1
-#define ZV1_CAPTURE_BUF1_ADDRESS_ADDRESS 25:0
+#define ZV1_CAPTURE_BUF1_ADDRESS_STATUS BIT(31)
+#define ZV1_CAPTURE_BUF1_ADDRESS_EXT BIT(27)
+#define ZV1_CAPTURE_BUF1_ADDRESS_CS BIT(26)
+#define ZV1_CAPTURE_BUF1_ADDRESS_ADDRESS_MASK 0x3ffffff
#define ZV1_CAPTURE_BUF_OFFSET 0x098014
-#define ZV1_CAPTURE_BUF_OFFSET_OFFSET 15:0
+#define ZV1_CAPTURE_BUF_OFFSET_OFFSET_MASK 0xffff
#define ZV1_CAPTURE_FIFO_CTRL 0x098018
-#define ZV1_CAPTURE_FIFO_CTRL_FIFO 2:0
+#define ZV1_CAPTURE_FIFO_CTRL_FIFO_MASK 0x7
#define ZV1_CAPTURE_FIFO_CTRL_FIFO_0 0
#define ZV1_CAPTURE_FIFO_CTRL_FIFO_1 1
#define ZV1_CAPTURE_FIFO_CTRL_FIFO_2 2
@@ -1905,34 +1416,24 @@
#define ZV1_CAPTURE_FIFO_CTRL_FIFO_7 7
#define ZV1_CAPTURE_YRGB_CONST 0x09801C
-#define ZV1_CAPTURE_YRGB_CONST_Y 31:24
-#define ZV1_CAPTURE_YRGB_CONST_R 23:16
-#define ZV1_CAPTURE_YRGB_CONST_G 15:8
-#define ZV1_CAPTURE_YRGB_CONST_B 7:0
+#define ZV1_CAPTURE_YRGB_CONST_Y_MASK (0xff << 24)
+#define ZV1_CAPTURE_YRGB_CONST_R_MASK (0xff << 16)
+#define ZV1_CAPTURE_YRGB_CONST_G_MASK (0xff << 8)
+#define ZV1_CAPTURE_YRGB_CONST_B_MASK 0xff
#define DMA_1_SOURCE 0x0D0010
-#define DMA_1_SOURCE_ADDRESS_EXT 27:27
-#define DMA_1_SOURCE_ADDRESS_EXT_LOCAL 0
-#define DMA_1_SOURCE_ADDRESS_EXT_EXTERNAL 1
-#define DMA_1_SOURCE_ADDRESS_CS 26:26
-#define DMA_1_SOURCE_ADDRESS_CS_0 0
-#define DMA_1_SOURCE_ADDRESS_CS_1 1
-#define DMA_1_SOURCE_ADDRESS 25:0
+#define DMA_1_SOURCE_ADDRESS_EXT BIT(27)
+#define DMA_1_SOURCE_ADDRESS_CS BIT(26)
+#define DMA_1_SOURCE_ADDRESS_MASK 0x3ffffff
#define DMA_1_DESTINATION 0x0D0014
-#define DMA_1_DESTINATION_ADDRESS_EXT 27:27
-#define DMA_1_DESTINATION_ADDRESS_EXT_LOCAL 0
-#define DMA_1_DESTINATION_ADDRESS_EXT_EXTERNAL 1
-#define DMA_1_DESTINATION_ADDRESS_CS 26:26
-#define DMA_1_DESTINATION_ADDRESS_CS_0 0
-#define DMA_1_DESTINATION_ADDRESS_CS_1 1
-#define DMA_1_DESTINATION_ADDRESS 25:0
+#define DMA_1_DESTINATION_ADDRESS_EXT BIT(27)
+#define DMA_1_DESTINATION_ADDRESS_CS BIT(26)
+#define DMA_1_DESTINATION_ADDRESS_MASK 0x3ffffff
#define DMA_1_SIZE_CONTROL 0x0D0018
-#define DMA_1_SIZE_CONTROL_STATUS 31:31
-#define DMA_1_SIZE_CONTROL_STATUS_IDLE 0
-#define DMA_1_SIZE_CONTROL_STATUS_ACTIVE 1
-#define DMA_1_SIZE_CONTROL_SIZE 23:0
+#define DMA_1_SIZE_CONTROL_STATUS BIT(31)
+#define DMA_1_SIZE_CONTROL_SIZE_MASK 0xffffff
#define DMA_ABORT_INTERRUPT 0x0D0020
#define DMA_ABORT_INTERRUPT_ABORT_1 BIT(5)
@@ -1946,16 +1447,12 @@
#define GPIO_DATA_SM750LE 0x020018
-#define GPIO_DATA_SM750LE_1 1:1
-#define GPIO_DATA_SM750LE_0 0:0
+#define GPIO_DATA_SM750LE_1 BIT(1)
+#define GPIO_DATA_SM750LE_0 BIT(0)
#define GPIO_DATA_DIRECTION_SM750LE 0x02001C
-#define GPIO_DATA_DIRECTION_SM750LE_1 1:1
-#define GPIO_DATA_DIRECTION_SM750LE_1_INPUT 0
-#define GPIO_DATA_DIRECTION_SM750LE_1_OUTPUT 1
-#define GPIO_DATA_DIRECTION_SM750LE_0 0:0
-#define GPIO_DATA_DIRECTION_SM750LE_0_INPUT 0
-#define GPIO_DATA_DIRECTION_SM750LE_0_OUTPUT 1
+#define GPIO_DATA_DIRECTION_SM750LE_1 BIT(1)
+#define GPIO_DATA_DIRECTION_SM750LE_0 BIT(0)
#endif
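
A note on the conversion running through this header: the old `31:16`-style definitions encoded a field as a high:low bit pair that only the FIELD_*() helpers (deleted below with sm750_help.h) could interpret; the replacements use the kernel's BIT() macro for single-bit flags and plain pre-shifted masks for wider fields. A minimal sketch of a read-modify-write under the new scheme — the helper is illustrative only, reusing the CRT_HWC_LOCATION constants from above (this header defines no *_Y_SHIFT, so the 16 is written out):

	/* illustrative only: update the cursor Y field of a CRT_HWC_LOCATION value */
	static u32 crt_cursor_set_y(u32 reg, u32 y)
	{
		reg &= ~CRT_HWC_LOCATION_Y_MASK;            /* clear bits 26:16 */
		reg |= (y << 16) & CRT_HWC_LOCATION_Y_MASK; /* insert new Y     */
		return reg;
	}
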
diff --git a/drivers/staging/sm750fb/sm750.c b/drivers/staging/sm750fb/sm750.c
index c9d4871..59c7455 100644
--- a/drivers/staging/sm750fb/sm750.c
+++ b/drivers/staging/sm750fb/sm750.c
@@ -13,8 +13,6 @@
#include <linux/vmalloc.h>
#include <linux/pagemap.h>
#include <linux/screen_info.h>
-#include <linux/vmalloc.h>
-#include <linux/pagemap.h>
#include <linux/console.h>
#include <asm/fb.h>
#include "sm750.h"
diff --git a/drivers/staging/sm750fb/sm750_accel.c b/drivers/staging/sm750fb/sm750_accel.c
index e150680..9aa4066 100644
--- a/drivers/staging/sm750fb/sm750_accel.c
+++ b/drivers/staging/sm750fb/sm750_accel.c
@@ -17,7 +17,6 @@
#include "sm750.h"
#include "sm750_accel.h"
-#include "sm750_help.h"
static inline void write_dpr(struct lynx_accel *accel, int offset, u32 regValue)
{
writel(regValue, accel->dprBase + offset);
@@ -41,20 +40,16 @@
write_dpr(accel, DE_MASKS, 0xFFFFFFFF);
/* dpr1c */
- reg = FIELD_SET(0, DE_STRETCH_FORMAT, PATTERN_XY, NORMAL)|
- FIELD_VALUE(0, DE_STRETCH_FORMAT, PATTERN_Y, 0)|
- FIELD_VALUE(0, DE_STRETCH_FORMAT, PATTERN_X, 0)|
- FIELD_SET(0, DE_STRETCH_FORMAT, ADDRESSING, XY)|
- FIELD_VALUE(0, DE_STRETCH_FORMAT, SOURCE_HEIGHT, 3);
+ reg = 0x3;
- clr = FIELD_CLEAR(DE_STRETCH_FORMAT, PATTERN_XY)&
- FIELD_CLEAR(DE_STRETCH_FORMAT, PATTERN_Y)&
- FIELD_CLEAR(DE_STRETCH_FORMAT, PATTERN_X)&
- FIELD_CLEAR(DE_STRETCH_FORMAT, ADDRESSING)&
- FIELD_CLEAR(DE_STRETCH_FORMAT, SOURCE_HEIGHT);
+ clr = DE_STRETCH_FORMAT_PATTERN_XY | DE_STRETCH_FORMAT_PATTERN_Y_MASK |
+ DE_STRETCH_FORMAT_PATTERN_X_MASK |
+ DE_STRETCH_FORMAT_ADDRESSING_MASK |
+ DE_STRETCH_FORMAT_SOURCE_HEIGHT_MASK;
/* DE_STRETCH bpp format need be initialized in setMode routine */
- write_dpr(accel, DE_STRETCH_FORMAT, (read_dpr(accel, DE_STRETCH_FORMAT) & clr) | reg);
+ write_dpr(accel, DE_STRETCH_FORMAT,
+ (read_dpr(accel, DE_STRETCH_FORMAT) & ~clr) | reg);
/* disable clipping and transparent */
write_dpr(accel, DE_CLIP_TL, 0); /* dpr2c */
@@ -63,16 +58,11 @@
write_dpr(accel, DE_COLOR_COMPARE_MASK, 0); /* dpr24 */
write_dpr(accel, DE_COLOR_COMPARE, 0);
- reg = FIELD_SET(0, DE_CONTROL, TRANSPARENCY, DISABLE)|
- FIELD_SET(0, DE_CONTROL, TRANSPARENCY_MATCH, OPAQUE)|
- FIELD_SET(0, DE_CONTROL, TRANSPARENCY_SELECT, SOURCE);
-
- clr = FIELD_CLEAR(DE_CONTROL, TRANSPARENCY)&
- FIELD_CLEAR(DE_CONTROL, TRANSPARENCY_MATCH)&
- FIELD_CLEAR(DE_CONTROL, TRANSPARENCY_SELECT);
+ clr = DE_CONTROL_TRANSPARENCY | DE_CONTROL_TRANSPARENCY_MATCH |
+ DE_CONTROL_TRANSPARENCY_SELECT;
/* dpr0c */
- write_dpr(accel, DE_CONTROL, (read_dpr(accel, DE_CONTROL)&clr)|reg);
+ write_dpr(accel, DE_CONTROL, read_dpr(accel, DE_CONTROL) & ~clr);
}
/* set2dformat only be called from setmode functions
@@ -85,7 +75,9 @@
/* fmt=0,1,2 for 8,16,32,bpp on sm718/750/502 */
reg = read_dpr(accel, DE_STRETCH_FORMAT);
- reg = FIELD_VALUE(reg, DE_STRETCH_FORMAT, PIXEL_FORMAT, fmt);
+ reg &= ~DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK;
+ reg |= ((fmt << DE_STRETCH_FORMAT_PIXEL_FORMAT_SHIFT) &
+ DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK);
write_dpr(accel, DE_STRETCH_FORMAT, reg);
}
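
The two added lines above are the open-coded replacement for FIELD_VALUE(): clear the field, then OR in the shifted value re-masked so an out-of-range `fmt` cannot leak into neighbouring bits. The general shape, with FIELD_SHIFT/FIELD_MASK as hypothetical stand-ins for any such pair from sm750_accel.h:

	reg &= ~FIELD_MASK;                        /* drop the old field value    */
	reg |= (val << FIELD_SHIFT) & FIELD_MASK;  /* insert the new one, clamped */
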
@@ -105,31 +97,28 @@
write_dpr(accel, DE_WINDOW_DESTINATION_BASE, base); /* dpr40 */
write_dpr(accel, DE_PITCH,
- FIELD_VALUE(0, DE_PITCH, DESTINATION, pitch/Bpp)|
- FIELD_VALUE(0, DE_PITCH, SOURCE, pitch/Bpp)); /* dpr10 */
+ ((pitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ DE_PITCH_DESTINATION_MASK) |
+ (pitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
write_dpr(accel, DE_WINDOW_WIDTH,
- FIELD_VALUE(0, DE_WINDOW_WIDTH, DESTINATION, pitch/Bpp)|
- FIELD_VALUE(0, DE_WINDOW_WIDTH, SOURCE, pitch/Bpp)); /* dpr44 */
+ ((pitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ DE_WINDOW_WIDTH_DST_MASK) |
+ (pitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr44 */
write_dpr(accel, DE_FOREGROUND, color); /* DPR14 */
write_dpr(accel, DE_DESTINATION,
- FIELD_SET(0, DE_DESTINATION, WRAP, DISABLE)|
- FIELD_VALUE(0, DE_DESTINATION, X, x)|
- FIELD_VALUE(0, DE_DESTINATION, Y, y)); /* dpr4 */
+ ((x << DE_DESTINATION_X_SHIFT) & DE_DESTINATION_X_MASK) |
+ (y & DE_DESTINATION_Y_MASK)); /* dpr4 */
write_dpr(accel, DE_DIMENSION,
- FIELD_VALUE(0, DE_DIMENSION, X, width)|
- FIELD_VALUE(0, DE_DIMENSION, Y_ET, height)); /* dpr8 */
+ ((width << DE_DIMENSION_X_SHIFT) & DE_DIMENSION_X_MASK) |
+ (height & DE_DIMENSION_Y_ET_MASK)); /* dpr8 */
- deCtrl =
- FIELD_SET(0, DE_CONTROL, STATUS, START)|
- FIELD_SET(0, DE_CONTROL, DIRECTION, LEFT_TO_RIGHT)|
- FIELD_SET(0, DE_CONTROL, LAST_PIXEL, ON)|
- FIELD_SET(0, DE_CONTROL, COMMAND, RECTANGLE_FILL)|
- FIELD_SET(0, DE_CONTROL, ROP_SELECT, ROP2)|
- FIELD_VALUE(0, DE_CONTROL, ROP, rop); /* dpr0xc */
+ deCtrl = DE_CONTROL_STATUS | DE_CONTROL_LAST_PIXEL |
+ DE_CONTROL_COMMAND_RECTANGLE_FILL | DE_CONTROL_ROP_SELECT |
+ (rop & DE_CONTROL_ROP_MASK); /* dpr0xc */
write_dpr(accel, DE_CONTROL, deCtrl);
return 0;
@@ -237,18 +226,18 @@
Note that input pitch is BYTE value, but the 2D Pitch register uses
pixel values. Need Byte to pixel conversion.
*/
- {
- write_dpr(accel, DE_PITCH,
- FIELD_VALUE(0, DE_PITCH, DESTINATION, (dPitch/Bpp)) |
- FIELD_VALUE(0, DE_PITCH, SOURCE, (sPitch/Bpp))); /* dpr10 */
- }
+ write_dpr(accel, DE_PITCH,
+ ((dPitch / Bpp << DE_PITCH_DESTINATION_SHIFT) &
+ DE_PITCH_DESTINATION_MASK) |
+ (sPitch / Bpp & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/* Screen Window width in Pixels.
2D engine uses this value to calculate the linear address in frame buffer for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- FIELD_VALUE(0, DE_WINDOW_WIDTH, DESTINATION, (dPitch/Bpp)) |
- FIELD_VALUE(0, DE_WINDOW_WIDTH, SOURCE, (sPitch/Bpp))); /* dpr3c */
+ ((dPitch / Bpp << DE_WINDOW_WIDTH_DST_SHIFT) &
+ DE_WINDOW_WIDTH_DST_MASK) |
+ (sPitch / Bpp & DE_WINDOW_WIDTH_SRC_MASK)); /* dpr3c */
if (accel->de_wait() != 0)
return -1;
@@ -256,24 +245,18 @@
{
write_dpr(accel, DE_SOURCE,
- FIELD_SET(0, DE_SOURCE, WRAP, DISABLE) |
- FIELD_VALUE(0, DE_SOURCE, X_K1, sx) |
- FIELD_VALUE(0, DE_SOURCE, Y_K2, sy)); /* dpr0 */
+ ((sx << DE_SOURCE_X_K1_SHIFT) & DE_SOURCE_X_K1_MASK) |
+ (sy & DE_SOURCE_Y_K2_MASK)); /* dpr0 */
write_dpr(accel, DE_DESTINATION,
- FIELD_SET(0, DE_DESTINATION, WRAP, DISABLE) |
- FIELD_VALUE(0, DE_DESTINATION, X, dx) |
- FIELD_VALUE(0, DE_DESTINATION, Y, dy)); /* dpr04 */
+ ((dx << DE_DESTINATION_X_SHIFT) & DE_DESTINATION_X_MASK) |
+ (dy & DE_DESTINATION_Y_MASK)); /* dpr04 */
write_dpr(accel, DE_DIMENSION,
- FIELD_VALUE(0, DE_DIMENSION, X, width) |
- FIELD_VALUE(0, DE_DIMENSION, Y_ET, height)); /* dpr08 */
+ ((width << DE_DIMENSION_X_SHIFT) & DE_DIMENSION_X_MASK) |
+ (height & DE_DIMENSION_Y_ET_MASK)); /* dpr08 */
- de_ctrl = FIELD_VALUE(0, DE_CONTROL, ROP, rop2) |
- FIELD_SET(0, DE_CONTROL, ROP_SELECT, ROP2) |
- FIELD_SET(0, DE_CONTROL, COMMAND, BITBLT) |
- ((nDirection == RIGHT_TO_LEFT) ?
- FIELD_SET(0, DE_CONTROL, DIRECTION, RIGHT_TO_LEFT)
- : FIELD_SET(0, DE_CONTROL, DIRECTION, LEFT_TO_RIGHT)) |
- FIELD_SET(0, DE_CONTROL, STATUS, START);
+ de_ctrl = (rop2 & DE_CONTROL_ROP_MASK) | DE_CONTROL_ROP_SELECT |
+ ((nDirection == RIGHT_TO_LEFT) ? DE_CONTROL_DIRECTION : 0) |
+ DE_CONTROL_COMMAND_BITBLT | DE_CONTROL_STATUS;
write_dpr(accel, DE_CONTROL, de_ctrl); /* dpr0c */
}
@@ -287,10 +270,8 @@
de_ctrl = read_dpr(accel, DE_CONTROL);
- de_ctrl &=
- FIELD_MASK(DE_CONTROL_TRANSPARENCY_MATCH) |
- FIELD_MASK(DE_CONTROL_TRANSPARENCY_SELECT)|
- FIELD_MASK(DE_CONTROL_TRANSPARENCY);
+ de_ctrl &= (DE_CONTROL_TRANSPARENCY_MATCH |
+ DE_CONTROL_TRANSPARENCY_SELECT | DE_CONTROL_TRANSPARENCY);
return de_ctrl;
}
@@ -338,42 +319,39 @@
Note that input pitch is BYTE value, but the 2D Pitch register uses
pixel values. Need Byte to pixel conversion.
*/
- {
- write_dpr(accel, DE_PITCH,
- FIELD_VALUE(0, DE_PITCH, DESTINATION, dPitch/bytePerPixel) |
- FIELD_VALUE(0, DE_PITCH, SOURCE, dPitch/bytePerPixel)); /* dpr10 */
- }
+ write_dpr(accel, DE_PITCH,
+ ((dPitch / bytePerPixel << DE_PITCH_DESTINATION_SHIFT) &
+ DE_PITCH_DESTINATION_MASK) |
+ (dPitch / bytePerPixel & DE_PITCH_SOURCE_MASK)); /* dpr10 */
/* Screen Window width in Pixels.
2D engine uses this value to calculate the linear address in frame buffer for a given point.
*/
write_dpr(accel, DE_WINDOW_WIDTH,
- FIELD_VALUE(0, DE_WINDOW_WIDTH, DESTINATION, (dPitch/bytePerPixel)) |
- FIELD_VALUE(0, DE_WINDOW_WIDTH, SOURCE, (dPitch/bytePerPixel)));
+ ((dPitch / bytePerPixel << DE_WINDOW_WIDTH_DST_SHIFT) &
+ DE_WINDOW_WIDTH_DST_MASK) |
+ (dPitch / bytePerPixel & DE_WINDOW_WIDTH_SRC_MASK));
/* Note: For 2D Source in Host Write, only X_K1_MONO field is needed, and Y_K2 field is not used.
For mono bitmap, use startBit for X_K1. */
write_dpr(accel, DE_SOURCE,
- FIELD_SET(0, DE_SOURCE, WRAP, DISABLE) |
- FIELD_VALUE(0, DE_SOURCE, X_K1_MONO, startBit)); /* dpr00 */
+ (startBit << DE_SOURCE_X_K1_SHIFT) &
+ DE_SOURCE_X_K1_MONO_MASK); /* dpr00 */
write_dpr(accel, DE_DESTINATION,
- FIELD_SET(0, DE_DESTINATION, WRAP, DISABLE) |
- FIELD_VALUE(0, DE_DESTINATION, X, dx) |
- FIELD_VALUE(0, DE_DESTINATION, Y, dy)); /* dpr04 */
+ ((dx << DE_DESTINATION_X_SHIFT) & DE_DESTINATION_X_MASK) |
+ (dy & DE_DESTINATION_Y_MASK)); /* dpr04 */
write_dpr(accel, DE_DIMENSION,
- FIELD_VALUE(0, DE_DIMENSION, X, width) |
- FIELD_VALUE(0, DE_DIMENSION, Y_ET, height)); /* dpr08 */
+ ((width << DE_DIMENSION_X_SHIFT) & DE_DIMENSION_X_MASK) |
+ (height & DE_DIMENSION_Y_ET_MASK)); /* dpr08 */
write_dpr(accel, DE_FOREGROUND, fColor);
write_dpr(accel, DE_BACKGROUND, bColor);
- de_ctrl = FIELD_VALUE(0, DE_CONTROL, ROP, rop2) |
- FIELD_SET(0, DE_CONTROL, ROP_SELECT, ROP2) |
- FIELD_SET(0, DE_CONTROL, COMMAND, HOST_WRITE) |
- FIELD_SET(0, DE_CONTROL, HOST, MONO) |
- FIELD_SET(0, DE_CONTROL, STATUS, START);
+ de_ctrl = (rop2 & DE_CONTROL_ROP_MASK) |
+ DE_CONTROL_ROP_SELECT | DE_CONTROL_COMMAND_HOST_WRITE |
+ DE_CONTROL_HOST | DE_CONTROL_STATUS;
write_dpr(accel, DE_CONTROL, de_ctrl | deGetTransparency(accel));
diff --git a/drivers/staging/sm750fb/sm750_accel.h b/drivers/staging/sm750fb/sm750_accel.h
index 1ec66d2..d59d005 100644
--- a/drivers/staging/sm750fb/sm750_accel.h
+++ b/drivers/staging/sm750fb/sm750_accel.h
@@ -21,212 +21,162 @@
#define DE_PORT_ADDR_TYPE3 0x100000
#define DE_SOURCE 0x0
-#define DE_SOURCE_WRAP 31:31
-#define DE_SOURCE_WRAP_DISABLE 0
-#define DE_SOURCE_WRAP_ENABLE 1
-#define DE_SOURCE_X_K1 29:16
-#define DE_SOURCE_Y_K2 15:0
-#define DE_SOURCE_X_K1_MONO 20:16
+#define DE_SOURCE_WRAP BIT(31)
+#define DE_SOURCE_X_K1_SHIFT 16
+#define DE_SOURCE_X_K1_MASK (0x3fff << 16)
+#define DE_SOURCE_X_K1_MONO_MASK (0x1f << 16)
+#define DE_SOURCE_Y_K2_MASK 0xffff
#define DE_DESTINATION 0x4
-#define DE_DESTINATION_WRAP 31:31
-#define DE_DESTINATION_WRAP_DISABLE 0
-#define DE_DESTINATION_WRAP_ENABLE 1
-#define DE_DESTINATION_X 28:16
-#define DE_DESTINATION_Y 15:0
+#define DE_DESTINATION_WRAP BIT(31)
+#define DE_DESTINATION_X_SHIFT 16
+#define DE_DESTINATION_X_MASK (0x1fff << 16)
+#define DE_DESTINATION_Y_MASK 0xffff
#define DE_DIMENSION 0x8
-#define DE_DIMENSION_X 28:16
-#define DE_DIMENSION_Y_ET 15:0
+#define DE_DIMENSION_X_SHIFT 16
+#define DE_DIMENSION_X_MASK (0x1fff << 16)
+#define DE_DIMENSION_Y_ET_MASK 0x1fff
#define DE_CONTROL 0xC
-#define DE_CONTROL_STATUS 31:31
-#define DE_CONTROL_STATUS_STOP 0
-#define DE_CONTROL_STATUS_START 1
-#define DE_CONTROL_PATTERN 30:30
-#define DE_CONTROL_PATTERN_MONO 0
-#define DE_CONTROL_PATTERN_COLOR 1
-#define DE_CONTROL_UPDATE_DESTINATION_X 29:29
-#define DE_CONTROL_UPDATE_DESTINATION_X_DISABLE 0
-#define DE_CONTROL_UPDATE_DESTINATION_X_ENABLE 1
-#define DE_CONTROL_QUICK_START 28:28
-#define DE_CONTROL_QUICK_START_DISABLE 0
-#define DE_CONTROL_QUICK_START_ENABLE 1
-#define DE_CONTROL_DIRECTION 27:27
-#define DE_CONTROL_DIRECTION_LEFT_TO_RIGHT 0
-#define DE_CONTROL_DIRECTION_RIGHT_TO_LEFT 1
-#define DE_CONTROL_MAJOR 26:26
-#define DE_CONTROL_MAJOR_X 0
-#define DE_CONTROL_MAJOR_Y 1
-#define DE_CONTROL_STEP_X 25:25
-#define DE_CONTROL_STEP_X_POSITIVE 1
-#define DE_CONTROL_STEP_X_NEGATIVE 0
-#define DE_CONTROL_STEP_Y 24:24
-#define DE_CONTROL_STEP_Y_POSITIVE 1
-#define DE_CONTROL_STEP_Y_NEGATIVE 0
-#define DE_CONTROL_STRETCH 23:23
-#define DE_CONTROL_STRETCH_DISABLE 0
-#define DE_CONTROL_STRETCH_ENABLE 1
-#define DE_CONTROL_HOST 22:22
-#define DE_CONTROL_HOST_COLOR 0
-#define DE_CONTROL_HOST_MONO 1
-#define DE_CONTROL_LAST_PIXEL 21:21
-#define DE_CONTROL_LAST_PIXEL_OFF 0
-#define DE_CONTROL_LAST_PIXEL_ON 1
-#define DE_CONTROL_COMMAND 20:16
-#define DE_CONTROL_COMMAND_BITBLT 0
-#define DE_CONTROL_COMMAND_RECTANGLE_FILL 1
-#define DE_CONTROL_COMMAND_DE_TILE 2
-#define DE_CONTROL_COMMAND_TRAPEZOID_FILL 3
-#define DE_CONTROL_COMMAND_ALPHA_BLEND 4
-#define DE_CONTROL_COMMAND_RLE_STRIP 5
-#define DE_CONTROL_COMMAND_SHORT_STROKE 6
-#define DE_CONTROL_COMMAND_LINE_DRAW 7
-#define DE_CONTROL_COMMAND_HOST_WRITE 8
-#define DE_CONTROL_COMMAND_HOST_READ 9
-#define DE_CONTROL_COMMAND_HOST_WRITE_BOTTOM_UP 10
-#define DE_CONTROL_COMMAND_ROTATE 11
-#define DE_CONTROL_COMMAND_FONT 12
-#define DE_CONTROL_COMMAND_TEXTURE_LOAD 15
-#define DE_CONTROL_ROP_SELECT 15:15
-#define DE_CONTROL_ROP_SELECT_ROP3 0
-#define DE_CONTROL_ROP_SELECT_ROP2 1
-#define DE_CONTROL_ROP2_SOURCE 14:14
-#define DE_CONTROL_ROP2_SOURCE_BITMAP 0
-#define DE_CONTROL_ROP2_SOURCE_PATTERN 1
-#define DE_CONTROL_MONO_DATA 13:12
-#define DE_CONTROL_MONO_DATA_NOT_PACKED 0
-#define DE_CONTROL_MONO_DATA_8_PACKED 1
-#define DE_CONTROL_MONO_DATA_16_PACKED 2
-#define DE_CONTROL_MONO_DATA_32_PACKED 3
-#define DE_CONTROL_REPEAT_ROTATE 11:11
-#define DE_CONTROL_REPEAT_ROTATE_DISABLE 0
-#define DE_CONTROL_REPEAT_ROTATE_ENABLE 1
-#define DE_CONTROL_TRANSPARENCY_MATCH 10:10
-#define DE_CONTROL_TRANSPARENCY_MATCH_OPAQUE 0
-#define DE_CONTROL_TRANSPARENCY_MATCH_TRANSPARENT 1
-#define DE_CONTROL_TRANSPARENCY_SELECT 9:9
-#define DE_CONTROL_TRANSPARENCY_SELECT_SOURCE 0
-#define DE_CONTROL_TRANSPARENCY_SELECT_DESTINATION 1
-#define DE_CONTROL_TRANSPARENCY 8:8
-#define DE_CONTROL_TRANSPARENCY_DISABLE 0
-#define DE_CONTROL_TRANSPARENCY_ENABLE 1
-#define DE_CONTROL_ROP 7:0
+#define DE_CONTROL_STATUS BIT(31)
+#define DE_CONTROL_PATTERN BIT(30)
+#define DE_CONTROL_UPDATE_DESTINATION_X BIT(29)
+#define DE_CONTROL_QUICK_START BIT(28)
+#define DE_CONTROL_DIRECTION BIT(27)
+#define DE_CONTROL_MAJOR BIT(26)
+#define DE_CONTROL_STEP_X BIT(25)
+#define DE_CONTROL_STEP_Y BIT(24)
+#define DE_CONTROL_STRETCH BIT(23)
+#define DE_CONTROL_HOST BIT(22)
+#define DE_CONTROL_LAST_PIXEL BIT(21)
+#define DE_CONTROL_COMMAND_SHIFT 16
+#define DE_CONTROL_COMMAND_MASK (0x1f << 16)
+#define DE_CONTROL_COMMAND_BITBLT (0x0 << 16)
+#define DE_CONTROL_COMMAND_RECTANGLE_FILL (0x1 << 16)
+#define DE_CONTROL_COMMAND_DE_TILE (0x2 << 16)
+#define DE_CONTROL_COMMAND_TRAPEZOID_FILL (0x3 << 16)
+#define DE_CONTROL_COMMAND_ALPHA_BLEND (0x4 << 16)
+#define DE_CONTROL_COMMAND_RLE_STRIP (0x5 << 16)
+#define DE_CONTROL_COMMAND_SHORT_STROKE (0x6 << 16)
+#define DE_CONTROL_COMMAND_LINE_DRAW (0x7 << 16)
+#define DE_CONTROL_COMMAND_HOST_WRITE (0x8 << 16)
+#define DE_CONTROL_COMMAND_HOST_READ (0x9 << 16)
+#define DE_CONTROL_COMMAND_HOST_WRITE_BOTTOM_UP (0xa << 16)
+#define DE_CONTROL_COMMAND_ROTATE (0xb << 16)
+#define DE_CONTROL_COMMAND_FONT (0xc << 16)
+#define DE_CONTROL_COMMAND_TEXTURE_LOAD (0xe << 16)
+#define DE_CONTROL_ROP_SELECT BIT(15)
+#define DE_CONTROL_ROP2_SOURCE BIT(14)
+#define DE_CONTROL_MONO_DATA_SHIFT 12
+#define DE_CONTROL_MONO_DATA_MASK (0x3 << 12)
+#define DE_CONTROL_MONO_DATA_NOT_PACKED (0x0 << 12)
+#define DE_CONTROL_MONO_DATA_8_PACKED (0x1 << 12)
+#define DE_CONTROL_MONO_DATA_16_PACKED (0x2 << 12)
+#define DE_CONTROL_MONO_DATA_32_PACKED (0x3 << 12)
+#define DE_CONTROL_REPEAT_ROTATE BIT(11)
+#define DE_CONTROL_TRANSPARENCY_MATCH BIT(10)
+#define DE_CONTROL_TRANSPARENCY_SELECT BIT(9)
+#define DE_CONTROL_TRANSPARENCY BIT(8)
+#define DE_CONTROL_ROP_MASK 0xff
/* Pseudo fields. */
-#define DE_CONTROL_SHORT_STROKE_DIR 27:24
-#define DE_CONTROL_SHORT_STROKE_DIR_225 0
-#define DE_CONTROL_SHORT_STROKE_DIR_135 1
-#define DE_CONTROL_SHORT_STROKE_DIR_315 2
-#define DE_CONTROL_SHORT_STROKE_DIR_45 3
-#define DE_CONTROL_SHORT_STROKE_DIR_270 4
-#define DE_CONTROL_SHORT_STROKE_DIR_90 5
-#define DE_CONTROL_SHORT_STROKE_DIR_180 8
-#define DE_CONTROL_SHORT_STROKE_DIR_0 10
-#define DE_CONTROL_ROTATION 25:24
-#define DE_CONTROL_ROTATION_0 0
-#define DE_CONTROL_ROTATION_270 1
-#define DE_CONTROL_ROTATION_90 2
-#define DE_CONTROL_ROTATION_180 3
+#define DE_CONTROL_SHORT_STROKE_DIR_MASK (0xf << 24)
+#define DE_CONTROL_SHORT_STROKE_DIR_225 (0x0 << 24)
+#define DE_CONTROL_SHORT_STROKE_DIR_135 (0x1 << 24)
+#define DE_CONTROL_SHORT_STROKE_DIR_315 (0x2 << 24)
+#define DE_CONTROL_SHORT_STROKE_DIR_45 (0x3 << 24)
+#define DE_CONTROL_SHORT_STROKE_DIR_270 (0x4 << 24)
+#define DE_CONTROL_SHORT_STROKE_DIR_90 (0x5 << 24)
+#define DE_CONTROL_SHORT_STROKE_DIR_180 (0x8 << 24)
+#define DE_CONTROL_SHORT_STROKE_DIR_0 (0xa << 24)
+#define DE_CONTROL_ROTATION_MASK (0x3 << 24)
+#define DE_CONTROL_ROTATION_0 (0x0 << 24)
+#define DE_CONTROL_ROTATION_270 (0x1 << 24)
+#define DE_CONTROL_ROTATION_90 (0x2 << 24)
+#define DE_CONTROL_ROTATION_180 (0x3 << 24)
#define DE_PITCH 0x000010
-#define DE_PITCH_DESTINATION 28:16
-#define DE_PITCH_SOURCE 12:0
+#define DE_PITCH_DESTINATION_SHIFT 16
+#define DE_PITCH_DESTINATION_MASK (0x1fff << 16)
+#define DE_PITCH_SOURCE_MASK 0x1fff
#define DE_FOREGROUND 0x000014
-#define DE_FOREGROUND_COLOR 31:0
+#define DE_FOREGROUND_COLOR_MASK 0xffffffff
#define DE_BACKGROUND 0x000018
-#define DE_BACKGROUND_COLOR 31:0
+#define DE_BACKGROUND_COLOR_MASK 0xffffffff
#define DE_STRETCH_FORMAT 0x00001C
-#define DE_STRETCH_FORMAT_PATTERN_XY 30:30
-#define DE_STRETCH_FORMAT_PATTERN_XY_NORMAL 0
-#define DE_STRETCH_FORMAT_PATTERN_XY_OVERWRITE 1
-#define DE_STRETCH_FORMAT_PATTERN_Y 29:27
-#define DE_STRETCH_FORMAT_PATTERN_X 25:23
-#define DE_STRETCH_FORMAT_PIXEL_FORMAT 21:20
-#define DE_STRETCH_FORMAT_PIXEL_FORMAT_8 0
-#define DE_STRETCH_FORMAT_PIXEL_FORMAT_16 1
-#define DE_STRETCH_FORMAT_PIXEL_FORMAT_32 2
-#define DE_STRETCH_FORMAT_PIXEL_FORMAT_24 3
-
-#define DE_STRETCH_FORMAT_ADDRESSING 19:16
-#define DE_STRETCH_FORMAT_ADDRESSING_XY 0
-#define DE_STRETCH_FORMAT_ADDRESSING_LINEAR 15
-#define DE_STRETCH_FORMAT_SOURCE_HEIGHT 11:0
+#define DE_STRETCH_FORMAT_PATTERN_XY BIT(30)
+#define DE_STRETCH_FORMAT_PATTERN_Y_SHIFT 27
+#define DE_STRETCH_FORMAT_PATTERN_Y_MASK (0x7 << 27)
+#define DE_STRETCH_FORMAT_PATTERN_X_SHIFT 23
+#define DE_STRETCH_FORMAT_PATTERN_X_MASK (0x7 << 23)
+#define DE_STRETCH_FORMAT_PIXEL_FORMAT_SHIFT 20
+#define DE_STRETCH_FORMAT_PIXEL_FORMAT_MASK (0x3 << 20)
+#define DE_STRETCH_FORMAT_PIXEL_FORMAT_8 (0x0 << 20)
+#define DE_STRETCH_FORMAT_PIXEL_FORMAT_16 (0x1 << 20)
+#define DE_STRETCH_FORMAT_PIXEL_FORMAT_32 (0x2 << 20)
+#define DE_STRETCH_FORMAT_PIXEL_FORMAT_24 (0x3 << 20)
+#define DE_STRETCH_FORMAT_ADDRESSING_SHIFT 16
+#define DE_STRETCH_FORMAT_ADDRESSING_MASK (0xf << 16)
+#define DE_STRETCH_FORMAT_ADDRESSING_XY (0x0 << 16)
+#define DE_STRETCH_FORMAT_ADDRESSING_LINEAR (0xf << 16)
+#define DE_STRETCH_FORMAT_SOURCE_HEIGHT_MASK 0xfff
#define DE_COLOR_COMPARE 0x000020
-#define DE_COLOR_COMPARE_COLOR 23:0
+#define DE_COLOR_COMPARE_COLOR_MASK 0xffffff
#define DE_COLOR_COMPARE_MASK 0x000024
-#define DE_COLOR_COMPARE_MASK_MASKS 23:0
+#define DE_COLOR_COMPARE_MASK_MASK 0xffffff
#define DE_MASKS 0x000028
-#define DE_MASKS_BYTE_MASK 31:16
-#define DE_MASKS_BIT_MASK 15:0
+#define DE_MASKS_BYTE_MASK (0xffff << 16)
+#define DE_MASKS_BIT_MASK 0xffff
#define DE_CLIP_TL 0x00002C
-#define DE_CLIP_TL_TOP 31:16
-#define DE_CLIP_TL_STATUS 13:13
-#define DE_CLIP_TL_STATUS_DISABLE 0
-#define DE_CLIP_TL_STATUS_ENABLE 1
-#define DE_CLIP_TL_INHIBIT 12:12
-#define DE_CLIP_TL_INHIBIT_OUTSIDE 0
-#define DE_CLIP_TL_INHIBIT_INSIDE 1
-#define DE_CLIP_TL_LEFT 11:0
+#define DE_CLIP_TL_TOP_MASK (0xffff << 16)
+#define DE_CLIP_TL_STATUS BIT(13)
+#define DE_CLIP_TL_INHIBIT BIT(12)
+#define DE_CLIP_TL_LEFT_MASK 0xfff
#define DE_CLIP_BR 0x000030
-#define DE_CLIP_BR_BOTTOM 31:16
-#define DE_CLIP_BR_RIGHT 12:0
+#define DE_CLIP_BR_BOTTOM_MASK (0xffff << 16)
+#define DE_CLIP_BR_RIGHT_MASK 0x1fff
#define DE_MONO_PATTERN_LOW 0x000034
-#define DE_MONO_PATTERN_LOW_PATTERN 31:0
+#define DE_MONO_PATTERN_LOW_PATTERN_MASK 0xffffffff
#define DE_MONO_PATTERN_HIGH 0x000038
-#define DE_MONO_PATTERN_HIGH_PATTERN 31:0
+#define DE_MONO_PATTERN_HIGH_PATTERN_MASK 0xffffffff
#define DE_WINDOW_WIDTH 0x00003C
-#define DE_WINDOW_WIDTH_DESTINATION 28:16
-#define DE_WINDOW_WIDTH_SOURCE 12:0
+#define DE_WINDOW_WIDTH_DST_SHIFT 16
+#define DE_WINDOW_WIDTH_DST_MASK (0x1fff << 16)
+#define DE_WINDOW_WIDTH_SRC_MASK 0x1fff
#define DE_WINDOW_SOURCE_BASE 0x000040
-#define DE_WINDOW_SOURCE_BASE_EXT 27:27
-#define DE_WINDOW_SOURCE_BASE_EXT_LOCAL 0
-#define DE_WINDOW_SOURCE_BASE_EXT_EXTERNAL 1
-#define DE_WINDOW_SOURCE_BASE_CS 26:26
-#define DE_WINDOW_SOURCE_BASE_CS_0 0
-#define DE_WINDOW_SOURCE_BASE_CS_1 1
-#define DE_WINDOW_SOURCE_BASE_ADDRESS 25:0
+#define DE_WINDOW_SOURCE_BASE_EXT BIT(27)
+#define DE_WINDOW_SOURCE_BASE_CS BIT(26)
+#define DE_WINDOW_SOURCE_BASE_ADDRESS_MASK 0x3ffffff
#define DE_WINDOW_DESTINATION_BASE 0x000044
-#define DE_WINDOW_DESTINATION_BASE_EXT 27:27
-#define DE_WINDOW_DESTINATION_BASE_EXT_LOCAL 0
-#define DE_WINDOW_DESTINATION_BASE_EXT_EXTERNAL 1
-#define DE_WINDOW_DESTINATION_BASE_CS 26:26
-#define DE_WINDOW_DESTINATION_BASE_CS_0 0
-#define DE_WINDOW_DESTINATION_BASE_CS_1 1
-#define DE_WINDOW_DESTINATION_BASE_ADDRESS 25:0
+#define DE_WINDOW_DESTINATION_BASE_EXT BIT(27)
+#define DE_WINDOW_DESTINATION_BASE_CS BIT(26)
+#define DE_WINDOW_DESTINATION_BASE_ADDRESS_MASK 0x3ffffff
#define DE_ALPHA 0x000048
-#define DE_ALPHA_VALUE 7:0
+#define DE_ALPHA_VALUE_MASK 0xff
#define DE_WRAP 0x00004C
-#define DE_WRAP_X 31:16
-#define DE_WRAP_Y 15:0
+#define DE_WRAP_X_MASK (0xffff << 16)
+#define DE_WRAP_Y_MASK 0xffff
#define DE_STATUS 0x000050
-#define DE_STATUS_CSC 1:1
-#define DE_STATUS_CSC_CLEAR 0
-#define DE_STATUS_CSC_NOT_ACTIVE 0
-#define DE_STATUS_CSC_ACTIVE 1
-#define DE_STATUS_2D 0:0
-#define DE_STATUS_2D_CLEAR 0
-#define DE_STATUS_2D_NOT_ACTIVE 0
-#define DE_STATUS_2D_ACTIVE 1
-
-
+#define DE_STATUS_CSC BIT(1)
+#define DE_STATUS_2D BIT(0)
/* blt direction */
#define TOP_TO_BOTTOM 0
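
Worth noting in the header above: multi-bit enumerated values such as DE_CONTROL_COMMAND_* are now defined pre-shifted into their field position — (0x8 << 16) rather than a bare 8 — so callers compose a control word with plain ORs and no FIELD_SET() indirection. An illustrative composition using only constants defined above (rop2 stands in for a caller-supplied ROP code):

	u32 de_ctrl = DE_CONTROL_STATUS |           /* start the engine       */
		      DE_CONTROL_COMMAND_BITBLT |   /* command, pre-shifted   */
		      DE_CONTROL_ROP_SELECT |       /* ROP2 mode              */
		      (rop2 & DE_CONTROL_ROP_MASK); /* 8-bit ROP in bits 7:0  */
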
diff --git a/drivers/staging/sm750fb/sm750_cursor.c b/drivers/staging/sm750fb/sm750_cursor.c
index e887101..2d348c6 100644
--- a/drivers/staging/sm750fb/sm750_cursor.c
+++ b/drivers/staging/sm750fb/sm750_cursor.c
@@ -16,7 +16,6 @@
#include <linux/screen_info.h>
#include "sm750.h"
-#include "sm750_help.h"
#include "sm750_cursor.h"
@@ -28,33 +27,25 @@
/* cursor control for voyager and 718/750*/
#define HWC_ADDRESS 0x0
-#define HWC_ADDRESS_ENABLE 31:31
-#define HWC_ADDRESS_ENABLE_DISABLE 0
-#define HWC_ADDRESS_ENABLE_ENABLE 1
-#define HWC_ADDRESS_EXT 27:27
-#define HWC_ADDRESS_EXT_LOCAL 0
-#define HWC_ADDRESS_EXT_EXTERNAL 1
-#define HWC_ADDRESS_CS 26:26
-#define HWC_ADDRESS_CS_0 0
-#define HWC_ADDRESS_CS_1 1
-#define HWC_ADDRESS_ADDRESS 25:0
+#define HWC_ADDRESS_ENABLE BIT(31)
+#define HWC_ADDRESS_EXT BIT(27)
+#define HWC_ADDRESS_CS BIT(26)
+#define HWC_ADDRESS_ADDRESS_MASK 0x3ffffff
#define HWC_LOCATION 0x4
-#define HWC_LOCATION_TOP 27:27
-#define HWC_LOCATION_TOP_INSIDE 0
-#define HWC_LOCATION_TOP_OUTSIDE 1
-#define HWC_LOCATION_Y 26:16
-#define HWC_LOCATION_LEFT 11:11
-#define HWC_LOCATION_LEFT_INSIDE 0
-#define HWC_LOCATION_LEFT_OUTSIDE 1
-#define HWC_LOCATION_X 10:0
+#define HWC_LOCATION_TOP BIT(27)
+#define HWC_LOCATION_Y_SHIFT 16
+#define HWC_LOCATION_Y_MASK (0x7ff << 16)
+#define HWC_LOCATION_LEFT BIT(11)
+#define HWC_LOCATION_X_MASK 0x7ff
#define HWC_COLOR_12 0x8
-#define HWC_COLOR_12_2_RGB565 31:16
-#define HWC_COLOR_12_1_RGB565 15:0
+#define HWC_COLOR_12_2_RGB565_SHIFT 16
+#define HWC_COLOR_12_2_RGB565_MASK (0xffff << 16)
+#define HWC_COLOR_12_1_RGB565_MASK 0xffff
#define HWC_COLOR_3 0xC
-#define HWC_COLOR_3_RGB565 15:0
+#define HWC_COLOR_3_RGB565_MASK 0xffff
/* hw_cursor_xxx works for voyager,718 and 750 */
@@ -62,9 +53,7 @@
{
u32 reg;
- reg = FIELD_VALUE(0, HWC_ADDRESS, ADDRESS, cursor->offset)|
- FIELD_SET(0, HWC_ADDRESS, EXT, LOCAL)|
- FIELD_SET(0, HWC_ADDRESS, ENABLE, ENABLE);
+ reg = (cursor->offset & HWC_ADDRESS_ADDRESS_MASK) | HWC_ADDRESS_ENABLE;
POKE32(HWC_ADDRESS, reg);
}
void hw_cursor_disable(struct lynx_cursor *cursor)
@@ -83,14 +72,17 @@
{
u32 reg;
- reg = FIELD_VALUE(0, HWC_LOCATION, Y, y)|
- FIELD_VALUE(0, HWC_LOCATION, X, x);
+ reg = (((y << HWC_LOCATION_Y_SHIFT) & HWC_LOCATION_Y_MASK) |
+ (x & HWC_LOCATION_X_MASK));
POKE32(HWC_LOCATION, reg);
}
void hw_cursor_setColor(struct lynx_cursor *cursor,
u32 fg, u32 bg)
{
- POKE32(HWC_COLOR_12, (fg<<16)|(bg&0xffff));
+ u32 reg = (fg << HWC_COLOR_12_2_RGB565_SHIFT) &
+ HWC_COLOR_12_2_RGB565_MASK;
+
+ POKE32(HWC_COLOR_12, reg | (bg & HWC_COLOR_12_1_RGB565_MASK));
POKE32(HWC_COLOR_3, 0xffe0);
}
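
hw_cursor_setColor() above packs two RGB565 colours into the single HWC_COLOR_12 register: foreground in bits 31:16, background in bits 15:0, each masked so it cannot clobber the other half. A worked example with hypothetical colour values:

	/* fg = 0xf800 (pure red in RGB565), bg = 0x07e0 (pure green) */
	u32 reg = ((0xf800 << HWC_COLOR_12_2_RGB565_SHIFT) &
		   HWC_COLOR_12_2_RGB565_MASK) |
		  (0x07e0 & HWC_COLOR_12_1_RGB565_MASK);
	/* reg == 0xf80007e0 */
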
diff --git a/drivers/staging/sm750fb/sm750_help.h b/drivers/staging/sm750fb/sm750_help.h
deleted file mode 100644
index c070cf2..0000000
--- a/drivers/staging/sm750fb/sm750_help.h
+++ /dev/null
@@ -1,56 +0,0 @@
-#ifndef LYNX_HELP_H__
-#define LYNX_HELP_H__
-
-/* Internal macros */
-#define _F_START(f) (0 ? f)
-#define _F_END(f) (1 ? f)
-#define _F_SIZE(f) (1 + _F_END(f) - _F_START(f))
-#define _F_MASK(f) (((1 << _F_SIZE(f)) - 1) << _F_START(f))
-#define _F_NORMALIZE(v, f) (((v) & _F_MASK(f)) >> _F_START(f))
-#define _F_DENORMALIZE(v, f) (((v) << _F_START(f)) & _F_MASK(f))
-
-/* Global macros */
-#define FIELD_GET(x, reg, field) \
-( \
- _F_NORMALIZE((x), reg ## _ ## field) \
-)
-
-#define FIELD_SET(x, reg, field, value) \
-( \
- (x & ~_F_MASK(reg ## _ ## field)) \
- | _F_DENORMALIZE(reg ## _ ## field ## _ ## value, reg ## _ ## field) \
-)
-
-#define FIELD_VALUE(x, reg, field, value) \
-( \
- (x & ~_F_MASK(reg ## _ ## field)) \
- | _F_DENORMALIZE(value, reg ## _ ## field) \
-)
-
-#define FIELD_CLEAR(reg, field) \
-( \
- ~_F_MASK(reg ## _ ## field) \
-)
-
-/* Field Macros */
-#define FIELD_START(field) (0 ? field)
-#define FIELD_END(field) (1 ? field)
-#define FIELD_SIZE(field) (1 + FIELD_END(field) - FIELD_START(field))
-#define FIELD_MASK(field) (((1 << (FIELD_SIZE(field)-1)) | ((1 << (FIELD_SIZE(field)-1)) - 1)) << FIELD_START(field))
-
-static inline unsigned int absDiff(unsigned int a, unsigned int b)
-{
- if (a < b)
- return b-a;
- else
- return a-b;
-}
-
-/* n / d + 1 / 2 = (2n + d) / 2d */
-#define roundedDiv(num, denom) ((2 * (num) + (denom)) / (2 * (denom)))
-#define MHz(x) ((x) * 1000000)
-
-
-
-
-#endif
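
For anyone puzzling over how the deleted helpers ever worked: a field such as DE_PITCH_DESTINATION was defined as the token sequence `28:16`, and _F_START()/_F_END() dropped it into a conditional expression so that `(0 ? 28:16)` yields the low bit position and `(1 ? 28:16)` the high one. A self-contained sketch of the trick (FIELD is a made-up example):

	#define FIELD     28:16       /* old-style "high:low" definition */
	#define START(f)  (0 ? f)     /* expands to (0 ? 28 : 16) == 16  */
	#define END(f)    (1 ? f)     /* expands to (1 ? 28 : 16) == 28  */
	#define MASK(f)   (((1 << (1 + END(f) - START(f))) - 1) << START(f))
	/* MASK(FIELD) == 0x1fff0000, i.e. the new DE_PITCH_DESTINATION_MASK */

Clever, but opaque to readers and to tooling, which is why the explicit BIT()/mask definitions replace it throughout this series.
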
diff --git a/drivers/staging/sm750fb/sm750_hw.c b/drivers/staging/sm750fb/sm750_hw.c
index 01850bf..2daeedd 100644
--- a/drivers/staging/sm750fb/sm750_hw.c
+++ b/drivers/staging/sm750fb/sm750_hw.c
@@ -1,4 +1,3 @@
-#include <linux/version.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/errno.h>
@@ -343,11 +342,10 @@
POKE32(CRT_FB_ADDRESS, crtc->oScreen);
reg = var->xres * (var->bits_per_pixel >> 3);
/* crtc->channel is not equal to par->index on numeric,be aware of that */
- reg = ALIGN(reg, crtc->line_pad);
-
- POKE32(CRT_FB_WIDTH,
- FIELD_VALUE(0, CRT_FB_WIDTH, WIDTH, reg)|
- FIELD_VALUE(0, CRT_FB_WIDTH, OFFSET, fix->line_length));
+ reg = ALIGN(reg, crtc->line_pad) << CRT_FB_WIDTH_WIDTH_SHIFT;
+ reg &= CRT_FB_WIDTH_WIDTH_MASK;
+ reg |= (fix->line_length & CRT_FB_WIDTH_OFFSET_MASK);
+ POKE32(CRT_FB_WIDTH, reg);
/* SET PIXEL FORMAT */
reg = PEEK32(CRT_DISPLAY_CTRL);
@@ -550,8 +548,8 @@
(total & PANEL_FB_ADDRESS_ADDRESS_MASK));
} else {
POKE32(CRT_FB_ADDRESS,
- FIELD_VALUE(PEEK32(CRT_FB_ADDRESS),
- CRT_FB_ADDRESS, ADDRESS, total));
+ PEEK32(CRT_FB_ADDRESS) |
+ (total & CRT_FB_ADDRESS_ADDRESS_MASK));
}
return 0;
}
diff --git a/drivers/staging/unisys/include/iochannel.h b/drivers/staging/unisys/include/iochannel.h
index 162ca18..880d9f0 100644
--- a/drivers/staging/unisys/include/iochannel.h
+++ b/drivers/staging/unisys/include/iochannel.h
@@ -575,7 +575,7 @@
* room)
*/
static inline u16
-add_physinfo_entries(u32 inp_pfn, u16 inp_off, u32 inp_len, u16 index,
+add_physinfo_entries(u64 inp_pfn, u16 inp_off, u32 inp_len, u16 index,
u16 max_pi_arr_entries, struct phys_info pi_arr[])
{
u32 len;
@@ -589,21 +589,19 @@
pi_arr[index].pi_pfn = inp_pfn;
pi_arr[index].pi_off = (u16)inp_off;
pi_arr[index].pi_len = (u16)inp_len;
- return index + 1;
+ return index + 1;
}
- /* this entry spans multiple pages */
- for (len = inp_len, i = 0; len;
- len -= pi_arr[index + i].pi_len, i++) {
+ /* this entry spans multiple pages */
+ for (len = inp_len, i = 0; len;
+ len -= pi_arr[index + i].pi_len, i++) {
if (index + i >= max_pi_arr_entries)
return 0;
pi_arr[index + i].pi_pfn = inp_pfn + i;
if (i == 0) {
pi_arr[index].pi_off = inp_off;
pi_arr[index].pi_len = firstlen;
- }
-
- else {
+ } else {
pi_arr[index + i].pi_off = 0;
pi_arr[index + i].pi_len =
(u16)MINNUM(len, (u32)PI_PAGE_SIZE);
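
The widening of inp_pfn from u32 to u64 above is not cosmetic: with 4 KiB pages a 32-bit page-frame number addresses at most 2^32 * 4 KiB = 16 TiB of guest-physical memory, so frames above that boundary were silently truncated. Roughly, assuming the usual 4 KiB page size:

	u64 phys = 0x123456789000ULL; /* a guest-physical address above 16 TiB */
	u64 pfn  = phys >> 12;        /* == 0x123456789, does not fit in a u32 */
	/* a u32 pfn is only safe while phys < (1ULL << 44), i.e. 16 TiB */
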
diff --git a/drivers/staging/unisys/visorbus/visorbus_main.c b/drivers/staging/unisys/visorbus/visorbus_main.c
index a301385..533bb5b 100644
--- a/drivers/staging/unisys/visorbus/visorbus_main.c
+++ b/drivers/staging/unisys/visorbus/visorbus_main.c
@@ -221,7 +221,6 @@
{
struct visor_device *dev = dev_get_drvdata(xdev);
- dev_set_drvdata(xdev, NULL);
kfree(dev);
}
@@ -701,12 +700,10 @@
static int
register_driver_attributes(struct visor_driver *drv)
{
- int rc;
struct driver_attribute version =
__ATTR(version, S_IRUGO, DRIVER_ATTR_version, NULL);
drv->version_attr = version;
- rc = driver_create_file(&drv->driver, &drv->version_attr);
- return rc;
+ return driver_create_file(&drv->driver, &drv->version_attr);
}
static void
@@ -771,7 +768,7 @@
get_device(&dev->device);
if (!drv->probe) {
up(&dev->visordriver_callback_lock);
- rc = -1;
+ rc = -ENODEV;
goto away;
}
rc = drv->probe(dev);
@@ -973,7 +970,7 @@
static int
create_visor_device(struct visor_device *dev)
{
- int rc = -1;
+ int rc;
u32 chipset_bus_no = dev->chipset_bus_no;
u32 chipset_dev_no = dev->chipset_dev_no;
@@ -995,6 +992,7 @@
if (!dev->periodic_work) {
POSTCODE_LINUX_3(DEVICE_CREATE_FAILURE_PC, chipset_dev_no,
DIAG_SEVERITY_ERR);
+ rc = -EINVAL;
goto away;
}
@@ -1032,14 +1030,15 @@
if (rc < 0) {
POSTCODE_LINUX_3(DEVICE_REGISTER_FAILURE_PC, chipset_dev_no,
DIAG_SEVERITY_ERR);
- goto away_register;
+ goto away_unregister;
}
list_add_tail(&dev->list_all, &list_all_device_instances);
return 0;
-away_register:
+away_unregister:
device_unregister(&dev->device);
+
away:
put_device(&dev->device);
return rc;
@@ -1058,23 +1057,21 @@
get_vbus_header_info(struct visorchannel *chan,
struct spar_vbus_headerinfo *hdr_info)
{
- int rc = -1;
-
if (!SPAR_VBUS_CHANNEL_OK_CLIENT(visorchannel_get_header(chan)))
- goto away;
+ return -EINVAL;
+
if (visorchannel_read(chan, sizeof(struct channel_header), hdr_info,
sizeof(*hdr_info)) < 0) {
- goto away;
+ return -EIO;
}
if (hdr_info->struct_bytes < sizeof(struct spar_vbus_headerinfo))
- goto away;
+ return -EINVAL;
+
if (hdr_info->device_info_struct_bytes <
sizeof(struct ultra_vbus_deviceinfo)) {
- goto away;
+ return -EINVAL;
}
- rc = 0;
-away:
- return rc;
+ return 0;
}
/* Write the contents of <info> to the struct
@@ -1197,17 +1194,14 @@
static int
create_bus_instance(struct visor_device *dev)
{
- int rc;
int id = dev->chipset_bus_no;
struct spar_vbus_headerinfo *hdr_info;
POSTCODE_LINUX_2(BUS_CREATE_ENTRY_PC, POSTCODE_SEVERITY_INFO);
hdr_info = kzalloc(sizeof(*hdr_info), GFP_KERNEL);
- if (!hdr_info) {
- rc = -1;
- goto away;
- }
+ if (!hdr_info)
+ return -ENOMEM;
dev_set_name(&dev->device, "visorbus%d", id);
dev->device.bus = &visorbus_type;
@@ -1217,8 +1211,8 @@
if (device_register(&dev->device) < 0) {
POSTCODE_LINUX_3(DEVICE_CREATE_FAILURE_PC, id,
POSTCODE_SEVERITY_ERR);
- rc = -1;
- goto away_mem;
+ kfree(hdr_info);
+ return -ENODEV;
}
if (get_vbus_header_info(dev->visorchannel, hdr_info) >= 0) {
@@ -1234,11 +1228,6 @@
list_add_tail(&dev->list_all, &list_all_bus_instances);
dev_set_drvdata(&dev->device, dev);
return 0;
-
-away_mem:
- kfree(hdr_info);
-away:
- return rc;
}
/** Remove a device instance for the visor bus itself.
@@ -1328,7 +1317,7 @@
static void
chipset_device_create(struct visor_device *dev_info)
{
- int rc = -1;
+ int rc;
u32 bus_no = dev_info->chipset_bus_no;
u32 dev_no = dev_info->chipset_dev_no;
@@ -1405,7 +1394,7 @@
static void
initiate_chipset_device_pause_resume(struct visor_device *dev, bool is_pause)
{
- int rc = -1, x;
+ int rc;
struct visor_driver *drv = NULL;
void (*notify_func)(struct visor_device *dev, int response) = NULL;
@@ -1414,14 +1403,18 @@
else
notify_func = chipset_responders.device_resume;
if (!notify_func)
- goto away;
+ return;
drv = to_visor_driver(dev->device.driver);
- if (!drv)
- goto away;
+ if (!drv) {
+ (*notify_func)(dev, -ENODEV);
+ return;
+ }
- if (dev->pausing || dev->resuming)
- goto away;
+ if (dev->pausing || dev->resuming) {
+ (*notify_func)(dev, -EBUSY);
+ return;
+ }
/* Note that even though both drv->pause() and drv->resume
* specify a callback function, it is NOT necessary for us to
@@ -1431,11 +1424,13 @@
* visorbus while child function drivers are still running.
*/
if (is_pause) {
- if (!drv->pause)
- goto away;
+ if (!drv->pause) {
+ (*notify_func)(dev, -EINVAL);
+ return;
+ }
dev->pausing = true;
- x = drv->pause(dev, pause_state_change_complete);
+ rc = drv->pause(dev, pause_state_change_complete);
} else {
/* This should be done at BUS resume time, but an
* existing problem prevents us from ever getting a bus
@@ -1444,24 +1439,20 @@
* would never even get here in that case.
*/
fix_vbus_dev_info(dev);
- if (!drv->resume)
- goto away;
+ if (!drv->resume) {
+ (*notify_func)(dev, -EINVAL);
+ return;
+ }
dev->resuming = true;
- x = drv->resume(dev, resume_state_change_complete);
+ rc = drv->resume(dev, resume_state_change_complete);
}
- if (x < 0) {
+ if (rc < 0) {
if (is_pause)
dev->pausing = false;
else
dev->resuming = false;
- goto away;
- }
- rc = 0;
-away:
- if (rc < 0) {
- if (notify_func)
- (*notify_func)(dev, rc);
+ (*notify_func)(dev, -EINVAL);
}
}
diff --git a/drivers/staging/unisys/visorbus/visorchipset.c b/drivers/staging/unisys/visorbus/visorchipset.c
index a79aa2d..b75b063 100644
--- a/drivers/staging/unisys/visorbus/visorchipset.c
+++ b/drivers/staging/unisys/visorbus/visorchipset.c
@@ -2316,7 +2316,7 @@
visorchipset_platform_device.dev.devt = major_dev;
if (platform_device_register(&visorchipset_platform_device) < 0) {
POSTCODE_LINUX_2(DEVICE_REGISTER_FAILURE_PC, DIAG_SEVERITY_ERR);
- rc = -1;
+ rc = -ENODEV;
goto cleanup;
}
POSTCODE_LINUX_2(CHIPSET_INIT_SUCCESS_PC, POSTCODE_SEVERITY_INFO);
diff --git a/drivers/staging/vt6656/usbpipe.c b/drivers/staging/vt6656/usbpipe.c
index f27eb9a..f546553 100644
--- a/drivers/staging/vt6656/usbpipe.c
+++ b/drivers/staging/vt6656/usbpipe.c
@@ -116,7 +116,7 @@
break;
}
- if (status != STATUS_SUCCESS) {
+ if (status) {
priv->int_buf.in_use = false;
dev_dbg(&priv->usb->dev, "%s status = %d\n", __func__, status);
@@ -221,7 +221,7 @@
rcb);
status = usb_submit_urb(urb, GFP_ATOMIC);
- if (status != 0) {
+ if (status) {
dev_dbg(&priv->usb->dev, "Submit Rx URB failed %d\n", status);
return STATUS_FAILURE;
}
@@ -282,7 +282,7 @@
context);
status = usb_submit_urb(urb, GFP_ATOMIC);
- if (status != 0) {
+ if (status) {
dev_dbg(&priv->usb->dev, "Submit Tx URB failed %d\n", status);
context->in_use = false;
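The vt6656 hunks are purely idiom: usb_submit_urb() returns 0 on success or a negative errno, so `if (status)` is the canonical check and comparing against 0 (or a driver-private STATUS_SUCCESS) adds nothing. Sketch (urb setup omitted):

	int status = usb_submit_urb(urb, GFP_ATOMIC);

	if (status)	/* non-zero means a negative errno */
		dev_dbg(&priv->usb->dev, "submit failed %d\n", status);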
diff --git a/drivers/staging/wilc1000/coreconfigurator.c b/drivers/staging/wilc1000/coreconfigurator.c
index 3a76586..25dc108 100644
--- a/drivers/staging/wilc1000/coreconfigurator.c
+++ b/drivers/staging/wilc1000/coreconfigurator.c
@@ -284,10 +284,8 @@
msg_type = msg_buffer[0];
- if ('N' != msg_type) {
- PRINT_ER("Received Message format incorrect.\n");
+ if ('N' != msg_type)
return -EFAULT;
- }
msg_id = msg_buffer[1];
msg_len = MAKE_WORD16(msg_buffer[2], msg_buffer[3]);
diff --git a/drivers/staging/wilc1000/coreconfigurator.h b/drivers/staging/wilc1000/coreconfigurator.h
index 748199d..63eca24 100644
--- a/drivers/staging/wilc1000/coreconfigurator.h
+++ b/drivers/staging/wilc1000/coreconfigurator.h
@@ -105,20 +105,20 @@
u16 ies_len;
};
-typedef struct {
- u8 au8bssid[6];
- u8 *pu8ReqIEs;
- size_t ReqIEsLen;
- u8 *pu8RespIEs;
- u16 u16RespIEsLen;
- u16 u16ConnectStatus;
-} tstrConnectInfo;
+struct connect_info {
+ u8 bssid[6];
+ u8 *req_ies;
+ size_t req_ies_len;
+ u8 *resp_ies;
+ u16 resp_ies_len;
+ u16 status;
+};
-typedef struct {
- u16 u16reason;
+struct disconnect_info {
+ u16 reason;
u8 *ie;
size_t ie_len;
-} tstrDisconnectNotifInfo;
+};
s32 wilc_parse_network_info(u8 *msg_buffer,
struct network_info **ret_network_info);
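The header change replaces camel-case typedefs with plainly named structs, in line with the kernel coding style's dislike of typedef'd structs and Hungarian prefixes. Call sites later in this diff change mechanically; illustrative usage:

	struct connect_info info;

	memset(&info, 0, sizeof(struct connect_info));	/* was sizeof(tstrConnectInfo) */
	memcpy(info.bssid, bssid, 6);			/* was info.au8bssid */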
diff --git a/drivers/staging/wilc1000/host_interface.c b/drivers/staging/wilc1000/host_interface.c
index d1eedfb..b2fdc93 100644
--- a/drivers/staging/wilc1000/host_interface.c
+++ b/drivers/staging/wilc1000/host_interface.c
@@ -46,7 +46,6 @@
#define HOST_IF_MSG_DEL_BA_SESSION 34
#define HOST_IF_MSG_Q_IDLE 35
#define HOST_IF_MSG_DEL_ALL_STA 36
-#define HOST_IF_MSG_DEL_ALL_RX_BA_SESSIONS 37
#define HOST_IF_MSG_SET_TX_POWER 38
#define HOST_IF_MSG_GET_TX_POWER 39
#define HOST_IF_MSG_EXIT 100
@@ -62,10 +61,6 @@
#define TCP_ACK_FILTER_LINK_SPEED_THRESH 54
#define DEFAULT_LINK_SPEED 72
-struct cfg_param_attr {
- struct cfg_param_val cfg_attr_info;
-};
-
struct host_if_wpa_attr {
u8 *key;
const u8 *mac_addr;
@@ -335,7 +330,7 @@
up(&hif_sema_driver);
if (result) {
- PRINT_ER("Failed to set driver handler\n");
+ netdev_err(vif->ndev, "Failed to set driver handler\n");
return -EINVAL;
}
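Most of the host_interface.c churn from here on is one substitution: the driver-private PRINT_ER() macro becomes netdev_err(), the standard helper that prefixes each message with the owning network device. The transformation is always the same shape, with many strings also shortened to keep lines within 80 columns:

	/* before: driver-private macro, no device context */
	PRINT_ER("Failed to set driver handler\n");

	/* after: standard helper, message tagged with vif->ndev */
	netdev_err(vif->ndev, "Failed to set driver handler\n");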
@@ -360,7 +355,7 @@
up(&hif_sema_driver);
if (result) {
- PRINT_ER("Failed to set driver handler\n");
+ netdev_err(vif->ndev, "Failed to set driver handler\n");
return -EINVAL;
}
@@ -389,7 +384,7 @@
host_int_get_ipaddress(vif, firmware_ip_addr, idx);
if (result) {
- PRINT_ER("Failed to set IP address\n");
+ netdev_err(vif->ndev, "Failed to set IP address\n");
return -EINVAL;
}
@@ -417,40 +412,35 @@
wilc_setup_ipaddress(vif, set_ip[idx], idx);
if (result != 0) {
- PRINT_ER("Failed to get IP address\n");
+ netdev_err(vif->ndev, "Failed to get IP address\n");
return -EINVAL;
}
return result;
}
-static s32 handle_set_mac_address(struct wilc_vif *vif,
- struct set_mac_addr *set_mac_addr)
+static void handle_set_mac_address(struct wilc_vif *vif,
+ struct set_mac_addr *set_mac_addr)
{
- s32 result = 0;
+ int ret = 0;
struct wid wid;
- u8 *mac_buf = kmalloc(ETH_ALEN, GFP_KERNEL);
+ u8 *mac_buf;
- if (!mac_buf) {
- PRINT_ER("No buffer to send mac address\n");
- return -EFAULT;
- }
- memcpy(mac_buf, set_mac_addr->mac_addr, ETH_ALEN);
+ mac_buf = kmemdup(set_mac_addr->mac_addr, ETH_ALEN, GFP_KERNEL);
+ if (!mac_buf)
+ return;
wid.id = (u16)WID_MAC_ADDR;
wid.type = WID_STR;
wid.val = mac_buf;
wid.size = ETH_ALEN;
- result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
- wilc_get_vif_idx(vif));
- if (result) {
- PRINT_ER("Failed to set mac address\n");
- result = -EFAULT;
- }
+ ret = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
+ wilc_get_vif_idx(vif));
+ if (ret)
+ netdev_err(vif->ndev, "Failed to set mac address\n");
kfree(mac_buf);
- return result;
}
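handle_set_mac_address() also swaps a kmalloc()/memcpy() pair for kmemdup(), which allocates and copies in one call; and since a failed allocation already logs through the memory allocator, the extra PRINT_ER() goes away. The resulting pattern:

	u8 *mac_buf;

	mac_buf = kmemdup(set_mac_addr->mac_addr, ETH_ALEN, GFP_KERNEL);
	if (!mac_buf)
		return;		/* allocator already warned; nothing to add */

	/* point the wid at mac_buf, send the config packet, then: */
	kfree(mac_buf);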
static s32 handle_get_mac_address(struct wilc_vif *vif,
@@ -468,7 +458,7 @@
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to get mac address\n");
+ netdev_err(vif->ndev, "Failed to get mac address\n");
result = -EFAULT;
}
up(&hif_sema_wait_response);
@@ -482,287 +472,288 @@
s32 result = 0;
struct wid wid_list[32];
struct host_if_drv *hif_drv = vif->hif_drv;
- u8 wid_cnt = 0;
+ int i = 0;
down(&hif_drv->sem_cfg_values);
- if (cfg_param_attr->cfg_attr_info.flag & BSS_TYPE) {
- if (cfg_param_attr->cfg_attr_info.bss_type < 6) {
- wid_list[wid_cnt].id = WID_BSS_TYPE;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.bss_type;
- wid_list[wid_cnt].type = WID_CHAR;
- wid_list[wid_cnt].size = sizeof(char);
- hif_drv->cfg_values.bss_type = (u8)cfg_param_attr->cfg_attr_info.bss_type;
+ if (cfg_param_attr->flag & BSS_TYPE) {
+ if (cfg_param_attr->bss_type < 6) {
+ wid_list[i].id = WID_BSS_TYPE;
+ wid_list[i].val = (s8 *)&cfg_param_attr->bss_type;
+ wid_list[i].type = WID_CHAR;
+ wid_list[i].size = sizeof(char);
+ hif_drv->cfg_values.bss_type = (u8)cfg_param_attr->bss_type;
} else {
- PRINT_ER("check value 6 over\n");
+ netdev_err(vif->ndev, "check value 6 over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & AUTH_TYPE) {
- if (cfg_param_attr->cfg_attr_info.auth_type == 1 ||
- cfg_param_attr->cfg_attr_info.auth_type == 2 ||
- cfg_param_attr->cfg_attr_info.auth_type == 5) {
- wid_list[wid_cnt].id = WID_AUTH_TYPE;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.auth_type;
- wid_list[wid_cnt].type = WID_CHAR;
- wid_list[wid_cnt].size = sizeof(char);
- hif_drv->cfg_values.auth_type = (u8)cfg_param_attr->cfg_attr_info.auth_type;
+ if (cfg_param_attr->flag & AUTH_TYPE) {
+ if (cfg_param_attr->auth_type == 1 ||
+ cfg_param_attr->auth_type == 2 ||
+ cfg_param_attr->auth_type == 5) {
+ wid_list[i].id = WID_AUTH_TYPE;
+ wid_list[i].val = (s8 *)&cfg_param_attr->auth_type;
+ wid_list[i].type = WID_CHAR;
+ wid_list[i].size = sizeof(char);
+ hif_drv->cfg_values.auth_type = (u8)cfg_param_attr->auth_type;
} else {
- PRINT_ER("Impossible value \n");
+ netdev_err(vif->ndev, "Impossible value\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & AUTHEN_TIMEOUT) {
- if (cfg_param_attr->cfg_attr_info.auth_timeout > 0 &&
- cfg_param_attr->cfg_attr_info.auth_timeout < 65536) {
- wid_list[wid_cnt].id = WID_AUTH_TIMEOUT;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.auth_timeout;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.auth_timeout = cfg_param_attr->cfg_attr_info.auth_timeout;
+ if (cfg_param_attr->flag & AUTHEN_TIMEOUT) {
+ if (cfg_param_attr->auth_timeout > 0 &&
+ cfg_param_attr->auth_timeout < 65536) {
+ wid_list[i].id = WID_AUTH_TIMEOUT;
+ wid_list[i].val = (s8 *)&cfg_param_attr->auth_timeout;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.auth_timeout = cfg_param_attr->auth_timeout;
} else {
- PRINT_ER("Range(1 ~ 65535) over\n");
+ netdev_err(vif->ndev, "Range(1 ~ 65535) over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & POWER_MANAGEMENT) {
- if (cfg_param_attr->cfg_attr_info.power_mgmt_mode < 5) {
- wid_list[wid_cnt].id = WID_POWER_MANAGEMENT;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.power_mgmt_mode;
- wid_list[wid_cnt].type = WID_CHAR;
- wid_list[wid_cnt].size = sizeof(char);
- hif_drv->cfg_values.power_mgmt_mode = (u8)cfg_param_attr->cfg_attr_info.power_mgmt_mode;
+ if (cfg_param_attr->flag & POWER_MANAGEMENT) {
+ if (cfg_param_attr->power_mgmt_mode < 5) {
+ wid_list[i].id = WID_POWER_MANAGEMENT;
+ wid_list[i].val = (s8 *)&cfg_param_attr->power_mgmt_mode;
+ wid_list[i].type = WID_CHAR;
+ wid_list[i].size = sizeof(char);
+ hif_drv->cfg_values.power_mgmt_mode = (u8)cfg_param_attr->power_mgmt_mode;
} else {
- PRINT_ER("Invalid power mode\n");
+ netdev_err(vif->ndev, "Invalid power mode\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & RETRY_SHORT) {
- if (cfg_param_attr->cfg_attr_info.short_retry_limit > 0 &&
- cfg_param_attr->cfg_attr_info.short_retry_limit < 256) {
- wid_list[wid_cnt].id = WID_SHORT_RETRY_LIMIT;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.short_retry_limit;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.short_retry_limit = cfg_param_attr->cfg_attr_info.short_retry_limit;
+ if (cfg_param_attr->flag & RETRY_SHORT) {
+ if (cfg_param_attr->short_retry_limit > 0 &&
+ cfg_param_attr->short_retry_limit < 256) {
+ wid_list[i].id = WID_SHORT_RETRY_LIMIT;
+ wid_list[i].val = (s8 *)&cfg_param_attr->short_retry_limit;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.short_retry_limit = cfg_param_attr->short_retry_limit;
} else {
- PRINT_ER("Range(1~256) over\n");
+ netdev_err(vif->ndev, "Range(1~256) over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & RETRY_LONG) {
- if (cfg_param_attr->cfg_attr_info.long_retry_limit > 0 &&
- cfg_param_attr->cfg_attr_info.long_retry_limit < 256) {
- wid_list[wid_cnt].id = WID_LONG_RETRY_LIMIT;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.long_retry_limit;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.long_retry_limit = cfg_param_attr->cfg_attr_info.long_retry_limit;
+ if (cfg_param_attr->flag & RETRY_LONG) {
+ if (cfg_param_attr->long_retry_limit > 0 &&
+ cfg_param_attr->long_retry_limit < 256) {
+ wid_list[i].id = WID_LONG_RETRY_LIMIT;
+ wid_list[i].val = (s8 *)&cfg_param_attr->long_retry_limit;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.long_retry_limit = cfg_param_attr->long_retry_limit;
} else {
- PRINT_ER("Range(1~256) over\n");
+ netdev_err(vif->ndev, "Range(1~256) over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & FRAG_THRESHOLD) {
- if (cfg_param_attr->cfg_attr_info.frag_threshold > 255 &&
- cfg_param_attr->cfg_attr_info.frag_threshold < 7937) {
- wid_list[wid_cnt].id = WID_FRAG_THRESHOLD;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.frag_threshold;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.frag_threshold = cfg_param_attr->cfg_attr_info.frag_threshold;
+ if (cfg_param_attr->flag & FRAG_THRESHOLD) {
+ if (cfg_param_attr->frag_threshold > 255 &&
+ cfg_param_attr->frag_threshold < 7937) {
+ wid_list[i].id = WID_FRAG_THRESHOLD;
+ wid_list[i].val = (s8 *)&cfg_param_attr->frag_threshold;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.frag_threshold = cfg_param_attr->frag_threshold;
} else {
- PRINT_ER("Threshold Range fail\n");
+ netdev_err(vif->ndev, "Threshold Range fail\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & RTS_THRESHOLD) {
- if (cfg_param_attr->cfg_attr_info.rts_threshold > 255 &&
- cfg_param_attr->cfg_attr_info.rts_threshold < 65536) {
- wid_list[wid_cnt].id = WID_RTS_THRESHOLD;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.rts_threshold;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.rts_threshold = cfg_param_attr->cfg_attr_info.rts_threshold;
+ if (cfg_param_attr->flag & RTS_THRESHOLD) {
+ if (cfg_param_attr->rts_threshold > 255 &&
+ cfg_param_attr->rts_threshold < 65536) {
+ wid_list[i].id = WID_RTS_THRESHOLD;
+ wid_list[i].val = (s8 *)&cfg_param_attr->rts_threshold;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.rts_threshold = cfg_param_attr->rts_threshold;
} else {
- PRINT_ER("Threshold Range fail\n");
+ netdev_err(vif->ndev, "Threshold Range fail\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & PREAMBLE) {
- if (cfg_param_attr->cfg_attr_info.preamble_type < 3) {
- wid_list[wid_cnt].id = WID_PREAMBLE;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.preamble_type;
- wid_list[wid_cnt].type = WID_CHAR;
- wid_list[wid_cnt].size = sizeof(char);
- hif_drv->cfg_values.preamble_type = cfg_param_attr->cfg_attr_info.preamble_type;
+ if (cfg_param_attr->flag & PREAMBLE) {
+ if (cfg_param_attr->preamble_type < 3) {
+ wid_list[i].id = WID_PREAMBLE;
+ wid_list[i].val = (s8 *)&cfg_param_attr->preamble_type;
+ wid_list[i].type = WID_CHAR;
+ wid_list[i].size = sizeof(char);
+ hif_drv->cfg_values.preamble_type = cfg_param_attr->preamble_type;
} else {
- PRINT_ER("Preamle Range(0~2) over\n");
+ netdev_err(vif->ndev, "Preamle Range(0~2) over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & SHORT_SLOT_ALLOWED) {
- if (cfg_param_attr->cfg_attr_info.short_slot_allowed < 2) {
- wid_list[wid_cnt].id = WID_SHORT_SLOT_ALLOWED;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.short_slot_allowed;
- wid_list[wid_cnt].type = WID_CHAR;
- wid_list[wid_cnt].size = sizeof(char);
- hif_drv->cfg_values.short_slot_allowed = (u8)cfg_param_attr->cfg_attr_info.short_slot_allowed;
+ if (cfg_param_attr->flag & SHORT_SLOT_ALLOWED) {
+ if (cfg_param_attr->short_slot_allowed < 2) {
+ wid_list[i].id = WID_SHORT_SLOT_ALLOWED;
+ wid_list[i].val = (s8 *)&cfg_param_attr->short_slot_allowed;
+ wid_list[i].type = WID_CHAR;
+ wid_list[i].size = sizeof(char);
+ hif_drv->cfg_values.short_slot_allowed = (u8)cfg_param_attr->short_slot_allowed;
} else {
- PRINT_ER("Short slot(2) over\n");
+ netdev_err(vif->ndev, "Short slot(2) over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & TXOP_PROT_DISABLE) {
- if (cfg_param_attr->cfg_attr_info.txop_prot_disabled < 2) {
- wid_list[wid_cnt].id = WID_11N_TXOP_PROT_DISABLE;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.txop_prot_disabled;
- wid_list[wid_cnt].type = WID_CHAR;
- wid_list[wid_cnt].size = sizeof(char);
- hif_drv->cfg_values.txop_prot_disabled = (u8)cfg_param_attr->cfg_attr_info.txop_prot_disabled;
+ if (cfg_param_attr->flag & TXOP_PROT_DISABLE) {
+ if (cfg_param_attr->txop_prot_disabled < 2) {
+ wid_list[i].id = WID_11N_TXOP_PROT_DISABLE;
+ wid_list[i].val = (s8 *)&cfg_param_attr->txop_prot_disabled;
+ wid_list[i].type = WID_CHAR;
+ wid_list[i].size = sizeof(char);
+ hif_drv->cfg_values.txop_prot_disabled = (u8)cfg_param_attr->txop_prot_disabled;
} else {
- PRINT_ER("TXOP prot disable\n");
+ netdev_err(vif->ndev, "TXOP prot disable\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & BEACON_INTERVAL) {
- if (cfg_param_attr->cfg_attr_info.beacon_interval > 0 &&
- cfg_param_attr->cfg_attr_info.beacon_interval < 65536) {
- wid_list[wid_cnt].id = WID_BEACON_INTERVAL;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.beacon_interval;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.beacon_interval = cfg_param_attr->cfg_attr_info.beacon_interval;
+ if (cfg_param_attr->flag & BEACON_INTERVAL) {
+ if (cfg_param_attr->beacon_interval > 0 &&
+ cfg_param_attr->beacon_interval < 65536) {
+ wid_list[i].id = WID_BEACON_INTERVAL;
+ wid_list[i].val = (s8 *)&cfg_param_attr->beacon_interval;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.beacon_interval = cfg_param_attr->beacon_interval;
} else {
- PRINT_ER("Beacon interval(1~65535) fail\n");
+ netdev_err(vif->ndev, "Beacon interval(1~65535)fail\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & DTIM_PERIOD) {
- if (cfg_param_attr->cfg_attr_info.dtim_period > 0 &&
- cfg_param_attr->cfg_attr_info.dtim_period < 256) {
- wid_list[wid_cnt].id = WID_DTIM_PERIOD;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.dtim_period;
- wid_list[wid_cnt].type = WID_CHAR;
- wid_list[wid_cnt].size = sizeof(char);
- hif_drv->cfg_values.dtim_period = cfg_param_attr->cfg_attr_info.dtim_period;
+ if (cfg_param_attr->flag & DTIM_PERIOD) {
+ if (cfg_param_attr->dtim_period > 0 &&
+ cfg_param_attr->dtim_period < 256) {
+ wid_list[i].id = WID_DTIM_PERIOD;
+ wid_list[i].val = (s8 *)&cfg_param_attr->dtim_period;
+ wid_list[i].type = WID_CHAR;
+ wid_list[i].size = sizeof(char);
+ hif_drv->cfg_values.dtim_period = cfg_param_attr->dtim_period;
} else {
- PRINT_ER("DTIM range(1~255) fail\n");
+ netdev_err(vif->ndev, "DTIM range(1~255) fail\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & SITE_SURVEY) {
- if (cfg_param_attr->cfg_attr_info.site_survey_enabled < 3) {
- wid_list[wid_cnt].id = WID_SITE_SURVEY;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.site_survey_enabled;
- wid_list[wid_cnt].type = WID_CHAR;
- wid_list[wid_cnt].size = sizeof(char);
- hif_drv->cfg_values.site_survey_enabled = (u8)cfg_param_attr->cfg_attr_info.site_survey_enabled;
+ if (cfg_param_attr->flag & SITE_SURVEY) {
+ if (cfg_param_attr->site_survey_enabled < 3) {
+ wid_list[i].id = WID_SITE_SURVEY;
+ wid_list[i].val = (s8 *)&cfg_param_attr->site_survey_enabled;
+ wid_list[i].type = WID_CHAR;
+ wid_list[i].size = sizeof(char);
+ hif_drv->cfg_values.site_survey_enabled = (u8)cfg_param_attr->site_survey_enabled;
} else {
- PRINT_ER("Site survey disable\n");
+ netdev_err(vif->ndev, "Site survey disable\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & SITE_SURVEY_SCAN_TIME) {
- if (cfg_param_attr->cfg_attr_info.site_survey_scan_time > 0 &&
- cfg_param_attr->cfg_attr_info.site_survey_scan_time < 65536) {
- wid_list[wid_cnt].id = WID_SITE_SURVEY_SCAN_TIME;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.site_survey_scan_time;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.site_survey_scan_time = cfg_param_attr->cfg_attr_info.site_survey_scan_time;
+ if (cfg_param_attr->flag & SITE_SURVEY_SCAN_TIME) {
+ if (cfg_param_attr->site_survey_scan_time > 0 &&
+ cfg_param_attr->site_survey_scan_time < 65536) {
+ wid_list[i].id = WID_SITE_SURVEY_SCAN_TIME;
+ wid_list[i].val = (s8 *)&cfg_param_attr->site_survey_scan_time;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.site_survey_scan_time = cfg_param_attr->site_survey_scan_time;
} else {
- PRINT_ER("Site survey scan time(1~65535) over\n");
+ netdev_err(vif->ndev, "Site scan time(1~65535) over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & ACTIVE_SCANTIME) {
- if (cfg_param_attr->cfg_attr_info.active_scan_time > 0 &&
- cfg_param_attr->cfg_attr_info.active_scan_time < 65536) {
- wid_list[wid_cnt].id = WID_ACTIVE_SCAN_TIME;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.active_scan_time;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.active_scan_time = cfg_param_attr->cfg_attr_info.active_scan_time;
+ if (cfg_param_attr->flag & ACTIVE_SCANTIME) {
+ if (cfg_param_attr->active_scan_time > 0 &&
+ cfg_param_attr->active_scan_time < 65536) {
+ wid_list[i].id = WID_ACTIVE_SCAN_TIME;
+ wid_list[i].val = (s8 *)&cfg_param_attr->active_scan_time;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.active_scan_time = cfg_param_attr->active_scan_time;
} else {
- PRINT_ER("Active scan time(1~65535) over\n");
+ netdev_err(vif->ndev, "Active time(1~65535) over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & PASSIVE_SCANTIME) {
- if (cfg_param_attr->cfg_attr_info.passive_scan_time > 0 &&
- cfg_param_attr->cfg_attr_info.passive_scan_time < 65536) {
- wid_list[wid_cnt].id = WID_PASSIVE_SCAN_TIME;
- wid_list[wid_cnt].val = (s8 *)&cfg_param_attr->cfg_attr_info.passive_scan_time;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
- hif_drv->cfg_values.passive_scan_time = cfg_param_attr->cfg_attr_info.passive_scan_time;
+ if (cfg_param_attr->flag & PASSIVE_SCANTIME) {
+ if (cfg_param_attr->passive_scan_time > 0 &&
+ cfg_param_attr->passive_scan_time < 65536) {
+ wid_list[i].id = WID_PASSIVE_SCAN_TIME;
+ wid_list[i].val = (s8 *)&cfg_param_attr->passive_scan_time;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
+ hif_drv->cfg_values.passive_scan_time = cfg_param_attr->passive_scan_time;
} else {
- PRINT_ER("Passive scan time(1~65535) over\n");
+ netdev_err(vif->ndev, "Passive time(1~65535) over\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
- if (cfg_param_attr->cfg_attr_info.flag & CURRENT_TX_RATE) {
- enum CURRENT_TXRATE curr_tx_rate = cfg_param_attr->cfg_attr_info.curr_tx_rate;
+ if (cfg_param_attr->flag & CURRENT_TX_RATE) {
+ enum CURRENT_TXRATE curr_tx_rate = cfg_param_attr->curr_tx_rate;
- if (curr_tx_rate == AUTORATE || curr_tx_rate == MBPS_1
- || curr_tx_rate == MBPS_2 || curr_tx_rate == MBPS_5_5
- || curr_tx_rate == MBPS_11 || curr_tx_rate == MBPS_6
- || curr_tx_rate == MBPS_9 || curr_tx_rate == MBPS_12
- || curr_tx_rate == MBPS_18 || curr_tx_rate == MBPS_24
- || curr_tx_rate == MBPS_36 || curr_tx_rate == MBPS_48 || curr_tx_rate == MBPS_54) {
- wid_list[wid_cnt].id = WID_CURRENT_TX_RATE;
- wid_list[wid_cnt].val = (s8 *)&curr_tx_rate;
- wid_list[wid_cnt].type = WID_SHORT;
- wid_list[wid_cnt].size = sizeof(u16);
+ if (curr_tx_rate == AUTORATE || curr_tx_rate == MBPS_1 ||
+ curr_tx_rate == MBPS_2 || curr_tx_rate == MBPS_5_5 ||
+ curr_tx_rate == MBPS_11 || curr_tx_rate == MBPS_6 ||
+ curr_tx_rate == MBPS_9 || curr_tx_rate == MBPS_12 ||
+ curr_tx_rate == MBPS_18 || curr_tx_rate == MBPS_24 ||
+ curr_tx_rate == MBPS_36 || curr_tx_rate == MBPS_48 ||
+ curr_tx_rate == MBPS_54) {
+ wid_list[i].id = WID_CURRENT_TX_RATE;
+ wid_list[i].val = (s8 *)&curr_tx_rate;
+ wid_list[i].type = WID_SHORT;
+ wid_list[i].size = sizeof(u16);
hif_drv->cfg_values.curr_tx_rate = (u8)curr_tx_rate;
} else {
- PRINT_ER("out of TX rate\n");
+ netdev_err(vif->ndev, "out of TX rate\n");
result = -EINVAL;
goto ERRORHANDLER;
}
- wid_cnt++;
+ i++;
}
result = wilc_send_config_pkt(vif, SET_CFG, wid_list,
- wid_cnt, wilc_get_vif_idx(vif));
+ i, wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Error in setting CFG params\n");
+ netdev_err(vif->ndev, "Error in setting CFG params\n");
ERRORHANDLER:
up(&hif_drv->sem_cfg_values);
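Apart from the cfg_param_attr flattening (the wrapper struct holding a single cfg_attr_info member is gone) and the wid_cnt -> i rename, handle_cfg_param() keeps its structure: each flagged parameter is range-checked, appended to wid_list[] at the running index, and the whole batch goes out in one wilc_send_config_pkt() call. Reduced to a single entry (illustrative):

	struct wid wid_list[32];
	int i = 0;

	if (cfg_param_attr->flag & BSS_TYPE) {
		if (cfg_param_attr->bss_type >= 6)
			return -EINVAL;		/* out of range, send nothing */
		wid_list[i].id = WID_BSS_TYPE;
		wid_list[i].val = (s8 *)&cfg_param_attr->bss_type;
		wid_list[i].type = WID_CHAR;
		wid_list[i].size = sizeof(char);
		i++;
	}
	/* further flags append further wid_list[] entries */
	return wilc_send_config_pkt(vif, SET_CFG, wid_list, i,
				    wilc_get_vif_idx(vif));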
@@ -795,13 +786,13 @@
if ((hif_drv->hif_state >= HOST_IF_SCANNING) &&
(hif_drv->hif_state < HOST_IF_CONNECTED)) {
- PRINT_ER("Already scan\n");
+ netdev_err(vif->ndev, "Already scan\n");
result = -EBUSY;
goto ERRORHANDLER;
}
if (wilc_optaining_ip || wilc_connecting) {
- PRINT_ER("Don't do obss scan\n");
+ netdev_err(vif->ndev, "Don't do obss scan\n");
result = -EBUSY;
goto ERRORHANDLER;
}
@@ -877,7 +868,7 @@
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send scan parameters config packet\n");
+ netdev_err(vif->ndev, "Failed to send scan parameters\n");
ERRORHANDLER:
if (result) {
@@ -917,13 +908,13 @@
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to set abort running scan\n");
+ netdev_err(vif->ndev, "Failed to set abort running\n");
result = -EFAULT;
}
}
if (!hif_drv) {
- PRINT_ER("Driver handler is NULL\n");
+ netdev_err(vif->ndev, "Driver handler is NULL\n");
return result;
}
@@ -949,13 +940,13 @@
if (memcmp(pstrHostIFconnectAttr->bssid, wilc_connected_ssid, ETH_ALEN) == 0) {
result = 0;
- PRINT_ER("Trying to connect to an already connected AP, Discard connect request\n");
+ netdev_err(vif->ndev, "Discard connect request\n");
return result;
}
ptstrJoinBssParam = pstrHostIFconnectAttr->params;
if (!ptstrJoinBssParam) {
- PRINT_ER("Required BSSID not found\n");
+ netdev_err(vif->ndev, "Required BSSID not found\n");
result = -ENOENT;
goto ERRORHANDLER;
}
@@ -1063,7 +1054,7 @@
if ((pstrHostIFconnectAttr->ch >= 1) && (pstrHostIFconnectAttr->ch <= 14)) {
*(pu8CurrByte++) = pstrHostIFconnectAttr->ch;
} else {
- PRINT_ER("Channel out of range\n");
+ netdev_err(vif->ndev, "Channel out of range\n");
*(pu8CurrByte++) = 0xFF;
}
*(pu8CurrByte++) = (ptstrJoinBssParam->cap_info) & 0xFF;
@@ -1146,7 +1137,7 @@
u32WidsCount,
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("failed to send config packet\n");
+ netdev_err(vif->ndev, "failed to send config packet\n");
result = -EFAULT;
goto ERRORHANDLER;
} else {
@@ -1155,20 +1146,20 @@
ERRORHANDLER:
if (result) {
- tstrConnectInfo strConnectInfo;
+ struct connect_info strConnectInfo;
del_timer(&hif_drv->connect_timer);
- memset(&strConnectInfo, 0, sizeof(tstrConnectInfo));
+ memset(&strConnectInfo, 0, sizeof(struct connect_info));
if (pstrHostIFconnectAttr->result) {
if (pstrHostIFconnectAttr->bssid)
- memcpy(strConnectInfo.au8bssid, pstrHostIFconnectAttr->bssid, 6);
+ memcpy(strConnectInfo.bssid, pstrHostIFconnectAttr->bssid, 6);
if (pstrHostIFconnectAttr->ies) {
- strConnectInfo.ReqIEsLen = pstrHostIFconnectAttr->ies_len;
- strConnectInfo.pu8ReqIEs = kmalloc(pstrHostIFconnectAttr->ies_len, GFP_KERNEL);
- memcpy(strConnectInfo.pu8ReqIEs,
+ strConnectInfo.req_ies_len = pstrHostIFconnectAttr->ies_len;
+ strConnectInfo.req_ies = kmalloc(pstrHostIFconnectAttr->ies_len, GFP_KERNEL);
+ memcpy(strConnectInfo.req_ies,
pstrHostIFconnectAttr->ies,
pstrHostIFconnectAttr->ies_len);
}
@@ -1179,11 +1170,11 @@
NULL,
pstrHostIFconnectAttr->arg);
hif_drv->hif_state = HOST_IF_IDLE;
- kfree(strConnectInfo.pu8ReqIEs);
- strConnectInfo.pu8ReqIEs = NULL;
+ kfree(strConnectInfo.req_ies);
+ strConnectInfo.req_ies = NULL;
} else {
- PRINT_ER("Connect callback function pointer is NULL\n");
+ netdev_err(vif->ndev, "Connect callback is NULL\n");
}
}
@@ -1240,7 +1231,7 @@
u32WidsCount,
wilc_get_vif_idx(join_req_vif));
if (result) {
- PRINT_ER("failed to send config packet\n");
+ netdev_err(vif->ndev, "failed to send config packet\n");
result = -EINVAL;
}
@@ -1250,13 +1241,13 @@
static s32 Handle_ConnectTimeout(struct wilc_vif *vif)
{
s32 result = 0;
- tstrConnectInfo strConnectInfo;
+ struct connect_info strConnectInfo;
struct wid wid;
u16 u16DummyReasonCode = 0;
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("Driver handler is NULL\n");
+ netdev_err(vif->ndev, "Driver handler is NULL\n");
return result;
}
@@ -1264,18 +1255,18 @@
scan_while_connected = false;
- memset(&strConnectInfo, 0, sizeof(tstrConnectInfo));
+ memset(&strConnectInfo, 0, sizeof(struct connect_info));
if (hif_drv->usr_conn_req.conn_result) {
if (hif_drv->usr_conn_req.bssid) {
- memcpy(strConnectInfo.au8bssid,
+ memcpy(strConnectInfo.bssid,
hif_drv->usr_conn_req.bssid, 6);
}
if (hif_drv->usr_conn_req.ies) {
- strConnectInfo.ReqIEsLen = hif_drv->usr_conn_req.ies_len;
- strConnectInfo.pu8ReqIEs = kmalloc(hif_drv->usr_conn_req.ies_len, GFP_KERNEL);
- memcpy(strConnectInfo.pu8ReqIEs,
+ strConnectInfo.req_ies_len = hif_drv->usr_conn_req.ies_len;
+ strConnectInfo.req_ies = kmalloc(hif_drv->usr_conn_req.ies_len, GFP_KERNEL);
+ memcpy(strConnectInfo.req_ies,
hif_drv->usr_conn_req.ies,
hif_drv->usr_conn_req.ies_len);
}
@@ -1286,10 +1277,10 @@
NULL,
hif_drv->usr_conn_req.arg);
- kfree(strConnectInfo.pu8ReqIEs);
- strConnectInfo.pu8ReqIEs = NULL;
+ kfree(strConnectInfo.req_ies);
+ strConnectInfo.req_ies = NULL;
} else {
- PRINT_ER("Connect callback function pointer is NULL\n");
+ netdev_err(vif->ndev, "Connect callback is NULL\n");
}
wid.id = (u16)WID_DISCONNECT;
@@ -1300,7 +1291,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send dissconect config packet\n");
+ netdev_err(vif->ndev, "Failed to send dissconect\n");
hif_drv->usr_conn_req.ssid_len = 0;
kfree(hif_drv->usr_conn_req.ssid);
@@ -1342,7 +1333,7 @@
wilc_parse_network_info(pstrRcvdNetworkInfo->buffer, &pstrNetworkInfo);
if ((!pstrNetworkInfo) ||
(!hif_drv->usr_scan_req.scan_result)) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
result = -EINVAL;
goto done;
}
@@ -1418,13 +1409,13 @@
u8 u8MacStatus;
u8 u8MacStatusReasonCode;
u8 u8MacStatusAdditionalInfo;
- tstrConnectInfo strConnectInfo;
- tstrDisconnectNotifInfo strDisconnectNotifInfo;
+ struct connect_info strConnectInfo;
+ struct disconnect_info strDisconnectNotifInfo;
s32 s32Err = 0;
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("Driver handler is NULL\n");
+ netdev_err(vif->ndev, "Driver handler is NULL\n");
return -ENODEV;
}
@@ -1433,14 +1424,14 @@
hif_drv->usr_scan_req.scan_result) {
if (!pstrRcvdGnrlAsyncInfo->buffer ||
!hif_drv->usr_conn_req.conn_result) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EINVAL;
}
u8MsgType = pstrRcvdGnrlAsyncInfo->buffer[0];
if ('I' != u8MsgType) {
- PRINT_ER("Received Message format incorrect.\n");
+ netdev_err(vif->ndev, "Received Message incorrect.\n");
return -EFAULT;
}
@@ -1455,7 +1446,7 @@
u32 u32RcvdAssocRespInfoLen = 0;
struct connect_resp_info *pstrConnectRespInfo = NULL;
- memset(&strConnectInfo, 0, sizeof(tstrConnectInfo));
+ memset(&strConnectInfo, 0, sizeof(struct connect_info));
if (u8MacStatus == MAC_CONNECTED) {
memset(rcv_assoc_resp, 0, MAX_ASSOC_RESP_FRAME_SIZE);
@@ -1469,15 +1460,15 @@
s32Err = wilc_parse_assoc_resp_info(rcv_assoc_resp, u32RcvdAssocRespInfoLen,
&pstrConnectRespInfo);
if (s32Err) {
- PRINT_ER("wilc_parse_assoc_resp_info() returned error %d\n", s32Err);
+ netdev_err(vif->ndev, "wilc_parse_assoc_resp_info() returned error %d\n", s32Err);
} else {
- strConnectInfo.u16ConnectStatus = pstrConnectRespInfo->status;
+ strConnectInfo.status = pstrConnectRespInfo->status;
- if (strConnectInfo.u16ConnectStatus == SUCCESSFUL_STATUSCODE) {
+ if (strConnectInfo.status == SUCCESSFUL_STATUSCODE) {
if (pstrConnectRespInfo->ies) {
- strConnectInfo.u16RespIEsLen = pstrConnectRespInfo->ies_len;
- strConnectInfo.pu8RespIEs = kmalloc(pstrConnectRespInfo->ies_len, GFP_KERNEL);
- memcpy(strConnectInfo.pu8RespIEs, pstrConnectRespInfo->ies,
+ strConnectInfo.resp_ies_len = pstrConnectRespInfo->ies_len;
+ strConnectInfo.resp_ies = kmalloc(pstrConnectRespInfo->ies_len, GFP_KERNEL);
+ memcpy(strConnectInfo.resp_ies, pstrConnectRespInfo->ies,
pstrConnectRespInfo->ies_len);
}
}
@@ -1491,28 +1482,28 @@
}
if ((u8MacStatus == MAC_CONNECTED) &&
- (strConnectInfo.u16ConnectStatus != SUCCESSFUL_STATUSCODE)) {
- PRINT_ER("Received MAC status is MAC_CONNECTED while the received status code in Asoc Resp is not SUCCESSFUL_STATUSCODE\n");
+ (strConnectInfo.status != SUCCESSFUL_STATUSCODE)) {
+ netdev_err(vif->ndev, "Received MAC status is MAC_CONNECTED while the received status code in Asoc Resp is not SUCCESSFUL_STATUSCODE\n");
eth_zero_addr(wilc_connected_ssid);
} else if (u8MacStatus == MAC_DISCONNECTED) {
- PRINT_ER("Received MAC status is MAC_DISCONNECTED\n");
+ netdev_err(vif->ndev, "Received MAC status is MAC_DISCONNECTED\n");
eth_zero_addr(wilc_connected_ssid);
}
if (hif_drv->usr_conn_req.bssid) {
- memcpy(strConnectInfo.au8bssid, hif_drv->usr_conn_req.bssid, 6);
+ memcpy(strConnectInfo.bssid, hif_drv->usr_conn_req.bssid, 6);
if ((u8MacStatus == MAC_CONNECTED) &&
- (strConnectInfo.u16ConnectStatus == SUCCESSFUL_STATUSCODE)) {
+ (strConnectInfo.status == SUCCESSFUL_STATUSCODE)) {
memcpy(hif_drv->assoc_bssid,
hif_drv->usr_conn_req.bssid, ETH_ALEN);
}
}
if (hif_drv->usr_conn_req.ies) {
- strConnectInfo.ReqIEsLen = hif_drv->usr_conn_req.ies_len;
- strConnectInfo.pu8ReqIEs = kmalloc(hif_drv->usr_conn_req.ies_len, GFP_KERNEL);
- memcpy(strConnectInfo.pu8ReqIEs,
+ strConnectInfo.req_ies_len = hif_drv->usr_conn_req.ies_len;
+ strConnectInfo.req_ies = kmalloc(hif_drv->usr_conn_req.ies_len, GFP_KERNEL);
+ memcpy(strConnectInfo.req_ies,
hif_drv->usr_conn_req.ies,
hif_drv->usr_conn_req.ies_len);
}
@@ -1525,7 +1516,7 @@
hif_drv->usr_conn_req.arg);
if ((u8MacStatus == MAC_CONNECTED) &&
- (strConnectInfo.u16ConnectStatus == SUCCESSFUL_STATUSCODE)) {
+ (strConnectInfo.status == SUCCESSFUL_STATUSCODE)) {
wilc_set_power_mgmt(vif, 0, 0);
hif_drv->hif_state = HOST_IF_CONNECTED;
@@ -1538,11 +1529,11 @@
scan_while_connected = false;
}
- kfree(strConnectInfo.pu8RespIEs);
- strConnectInfo.pu8RespIEs = NULL;
+ kfree(strConnectInfo.resp_ies);
+ strConnectInfo.resp_ies = NULL;
- kfree(strConnectInfo.pu8ReqIEs);
- strConnectInfo.pu8ReqIEs = NULL;
+ kfree(strConnectInfo.req_ies);
+ strConnectInfo.req_ies = NULL;
hif_drv->usr_conn_req.ssid_len = 0;
kfree(hif_drv->usr_conn_req.ssid);
hif_drv->usr_conn_req.ssid = NULL;
@@ -1553,14 +1544,14 @@
hif_drv->usr_conn_req.ies = NULL;
} else if ((u8MacStatus == MAC_DISCONNECTED) &&
(hif_drv->hif_state == HOST_IF_CONNECTED)) {
- memset(&strDisconnectNotifInfo, 0, sizeof(tstrDisconnectNotifInfo));
+ memset(&strDisconnectNotifInfo, 0, sizeof(struct disconnect_info));
if (hif_drv->usr_scan_req.scan_result) {
del_timer(&hif_drv->scan_timer);
Handle_ScanDone(vif, SCAN_EVENT_ABORTED);
}
- strDisconnectNotifInfo.u16reason = 0;
+ strDisconnectNotifInfo.reason = 0;
strDisconnectNotifInfo.ie = NULL;
strDisconnectNotifInfo.ie_len = 0;
@@ -1574,7 +1565,7 @@
&strDisconnectNotifInfo,
hif_drv->usr_conn_req.arg);
} else {
- PRINT_ER("Connect result callback function is NULL\n");
+ netdev_err(vif->ndev, "Connect result NULL\n");
}
eth_zero_addr(hif_drv->assoc_bssid);
@@ -1643,11 +1634,8 @@
pu8keybuf = kmalloc(pstrHostIFkeyAttr->attr.wep.key_len + 2,
GFP_KERNEL);
-
- if (pu8keybuf == NULL) {
- PRINT_ER("No buffer to send Key\n");
+ if (!pu8keybuf)
return -ENOMEM;
- }
pu8keybuf[0] = pstrHostIFkeyAttr->attr.wep.index;
pu8keybuf[1] = pstrHostIFkeyAttr->attr.wep.key_len;
@@ -1668,10 +1656,8 @@
kfree(pu8keybuf);
} else if (pstrHostIFkeyAttr->action & ADDKEY) {
pu8keybuf = kmalloc(pstrHostIFkeyAttr->attr.wep.key_len + 2, GFP_KERNEL);
- if (!pu8keybuf) {
- PRINT_ER("No buffer to send Key\n");
+ if (!pu8keybuf)
return -ENOMEM;
- }
pu8keybuf[0] = pstrHostIFkeyAttr->attr.wep.index;
memcpy(pu8keybuf + 1, &pstrHostIFkeyAttr->attr.wep.key_len, 1);
memcpy(pu8keybuf + 2, pstrHostIFkeyAttr->attr.wep.key,
@@ -1715,7 +1701,6 @@
if (pstrHostIFkeyAttr->action & ADDKEY_AP) {
pu8keybuf = kzalloc(RX_MIC_KEY_MSG_LEN, GFP_KERNEL);
if (!pu8keybuf) {
- PRINT_ER("No buffer to send RxGTK Key\n");
ret = -ENOMEM;
goto _WPARxGtk_end_case_;
}
@@ -1747,7 +1732,6 @@
} else if (pstrHostIFkeyAttr->action & ADDKEY) {
pu8keybuf = kzalloc(RX_MIC_KEY_MSG_LEN, GFP_KERNEL);
if (pu8keybuf == NULL) {
- PRINT_ER("No buffer to send RxGTK Key\n");
ret = -ENOMEM;
goto _WPARxGtk_end_case_;
}
@@ -1755,7 +1739,7 @@
if (hif_drv->hif_state == HOST_IF_CONNECTED)
memcpy(pu8keybuf, hif_drv->assoc_bssid, ETH_ALEN);
else
- PRINT_ER("Couldn't handle WPARxGtk while state is not HOST_IF_CONNECTED\n");
+ netdev_err(vif->ndev, "Couldn't handle\n");
memcpy(pu8keybuf + 6, pstrHostIFkeyAttr->attr.wpa.seq, 8);
memcpy(pu8keybuf + 14, &pstrHostIFkeyAttr->attr.wpa.index, 1);
@@ -1787,7 +1771,6 @@
if (pstrHostIFkeyAttr->action & ADDKEY_AP) {
pu8keybuf = kmalloc(PTK_KEY_MSG_LEN + 1, GFP_KERNEL);
if (!pu8keybuf) {
- PRINT_ER("No buffer to send PTK Key\n");
ret = -ENOMEM;
goto _WPAPtk_end_case_;
}
@@ -1816,7 +1799,7 @@
} else if (pstrHostIFkeyAttr->action & ADDKEY) {
pu8keybuf = kmalloc(PTK_KEY_MSG_LEN, GFP_KERNEL);
if (!pu8keybuf) {
- PRINT_ER("No buffer to send PTK Key\n");
+ netdev_err(vif->ndev, "No buffer send PTK\n");
ret = -ENOMEM;
goto _WPAPtk_end_case_;
}
@@ -1848,7 +1831,7 @@
case PMKSA:
pu8keybuf = kmalloc((pstrHostIFkeyAttr->attr.pmkid.numpmkid * PMKSA_KEY_LEN) + 1, GFP_KERNEL);
if (!pu8keybuf) {
- PRINT_ER("No buffer to send PMKSA Key\n");
+ netdev_err(vif->ndev, "No buffer to send PMKSA Key\n");
return -ENOMEM;
}
@@ -1872,7 +1855,7 @@
}
if (result)
- PRINT_ER("Failed to send key config packet\n");
+ netdev_err(vif->ndev, "Failed to send key config packet\n");
return result;
}
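The "No buffer to send ... Key" prints dropped in the hunks above follow the usual kernel guideline: a failed k*alloc() already dumps a diagnostic, so the caller should just propagate -ENOMEM. The whole idiom is:

	pu8keybuf = kmalloc(key_len + 2, GFP_KERNEL);
	if (!pu8keybuf)
		return -ENOMEM;	/* no print needed; the allocator warned */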
@@ -1899,13 +1882,13 @@
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to send dissconect config packet\n");
+ netdev_err(vif->ndev, "Failed to send dissconect\n");
} else {
- tstrDisconnectNotifInfo strDisconnectNotifInfo;
+ struct disconnect_info strDisconnectNotifInfo;
- memset(&strDisconnectNotifInfo, 0, sizeof(tstrDisconnectNotifInfo));
+ memset(&strDisconnectNotifInfo, 0, sizeof(struct disconnect_info));
- strDisconnectNotifInfo.u16reason = 0;
+ strDisconnectNotifInfo.reason = 0;
strDisconnectNotifInfo.ie = NULL;
strDisconnectNotifInfo.ie_len = 0;
@@ -1928,7 +1911,7 @@
&strDisconnectNotifInfo,
hif_drv->usr_conn_req.arg);
} else {
- PRINT_ER("usr_conn_req.conn_result = NULL\n");
+ netdev_err(vif->ndev, "conn_result = NULL\n");
}
scan_while_connected = false;
@@ -1973,7 +1956,6 @@
{
s32 result = 0;
struct wid wid;
- struct host_if_drv *hif_drv = vif->hif_drv;
wid.id = (u16)WID_CURRENT_CHANNEL;
wid.type = WID_CHAR;
@@ -1984,12 +1966,10 @@
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to get channel number\n");
+ netdev_err(vif->ndev, "Failed to get channel number\n");
result = -EFAULT;
}
- up(&hif_drv->sem_get_chnl);
-
return result;
}
@@ -2006,7 +1986,7 @@
result = wilc_send_config_pkt(vif, GET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to get RSSI value\n");
+ netdev_err(vif->ndev, "Failed to get RSSI value\n");
result = -EFAULT;
}
@@ -2017,7 +1997,6 @@
{
s32 result = 0;
struct wid wid;
- struct host_if_drv *hif_drv = vif->hif_drv;
link_speed = 0;
@@ -2029,11 +2008,10 @@
result = wilc_send_config_pkt(vif, GET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to get LINKSPEED value\n");
+ netdev_err(vif->ndev, "Failed to get LINKSPEED value\n");
result = -EFAULT;
}
- up(&hif_drv->sem_get_link_speed);
}
static s32 Handle_GetStatistics(struct wilc_vif *vif,
@@ -2077,7 +2055,7 @@
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send scan parameters config packet\n");
+ netdev_err(vif->ndev, "Failed to send scan parameters\n");
if (pstrStatistics->link_speed > TCP_ACK_FILTER_LINK_SPEED_THRESH &&
pstrStatistics->link_speed != DEFAULT_LINK_SPEED)
@@ -2106,13 +2084,11 @@
stamac = wid.val;
memcpy(stamac, strHostIfStaInactiveT->mac, ETH_ALEN);
- PRINT_D(CFG80211_DBG, "SETING STA inactive time\n");
-
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to SET incative time\n");
+ netdev_err(vif->ndev, "Failed to SET incative time\n");
return -EFAULT;
}
@@ -2125,12 +2101,10 @@
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to get incative time\n");
+ netdev_err(vif->ndev, "Failed to get incative time\n");
return -EFAULT;
}
- PRINT_D(CFG80211_DBG, "Getting inactive time : %d\n", inactive_time);
-
up(&hif_drv->sem_inactive_time);
return result;
@@ -2181,7 +2155,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send add beacon config packet\n");
+ netdev_err(vif->ndev, "Failed to send add beacon\n");
ERRORHANDLER:
kfree(wid.val);
@@ -2208,7 +2182,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send delete beacon config packet\n");
+ netdev_err(vif->ndev, "Failed to send delete beacon\n");
}
static u32 WILC_HostIf_PackStaParam(u8 *pu8Buffer,
@@ -2279,7 +2253,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result != 0)
- PRINT_ER("Failed to send add station config packet\n");
+ netdev_err(vif->ndev, "Failed to send add station\n");
ERRORHANDLER:
kfree(pstrStationParam->rates);
@@ -2319,7 +2293,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send add station config packet\n");
+ netdev_err(vif->ndev, "Failed to send add station\n");
ERRORHANDLER:
kfree(wid.val);
@@ -2349,7 +2323,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send add station config packet\n");
+ netdev_err(vif->ndev, "Failed to send add station\n");
ERRORHANDLER:
kfree(wid.val);
@@ -2376,7 +2350,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send edit station config packet\n");
+ netdev_err(vif->ndev, "Failed to send edit station\n");
ERRORHANDLER:
kfree(pstrStationParam->rates);
@@ -2432,7 +2406,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result != 0)
- PRINT_ER("Failed to set remain on channel\n");
+ netdev_err(vif->ndev, "Failed to set remain on channel\n");
ERRORHANDLER:
{
@@ -2476,7 +2450,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result) {
- PRINT_ER("Failed to frame register config packet\n");
+ netdev_err(vif->ndev, "Failed to frame register\n");
result = -EINVAL;
}
@@ -2499,7 +2473,7 @@
wid.val = kmalloc(wid.size, GFP_KERNEL);
if (!wid.val) {
- PRINT_ER("Failed to allocate memory\n");
+ netdev_err(vif->ndev, "Failed to allocate memory\n");
return -ENOMEM;
}
@@ -2509,7 +2483,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result != 0) {
- PRINT_ER("Failed to set remain on channel\n");
+ netdev_err(vif->ndev, "Failed to set remain channel\n");
goto _done_;
}
@@ -2542,7 +2516,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
}
static void Handle_PowerManagement(struct wilc_vif *vif,
@@ -2565,7 +2539,7 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send power management config packet\n");
+ netdev_err(vif->ndev, "Failed to send power management\n");
}
static void Handle_SetMulticastFilter(struct wilc_vif *vif,
@@ -2600,43 +2574,12 @@
result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
wilc_get_vif_idx(vif));
if (result)
- PRINT_ER("Failed to send setup multicast config packet\n");
+ netdev_err(vif->ndev, "Failed to send setup multicast\n");
ERRORHANDLER:
kfree(wid.val);
}
-static s32 Handle_DelAllRxBASessions(struct wilc_vif *vif,
- struct ba_session_info *strHostIfBASessionInfo)
-{
- s32 result = 0;
- struct wid wid;
- char *ptr = NULL;
-
- wid.id = (u16)WID_DEL_ALL_RX_BA;
- wid.type = WID_STR;
- wid.val = kmalloc(BLOCK_ACK_REQ_SIZE, GFP_KERNEL);
- wid.size = BLOCK_ACK_REQ_SIZE;
- ptr = wid.val;
- *ptr++ = 0x14;
- *ptr++ = 0x3;
- *ptr++ = 0x2;
- memcpy(ptr, strHostIfBASessionInfo->bssid, ETH_ALEN);
- ptr += ETH_ALEN;
- *ptr++ = strHostIfBASessionInfo->tid;
- *ptr++ = 0;
- *ptr++ = 32;
-
- result = wilc_send_config_pkt(vif, SET_CFG, &wid, 1,
- wilc_get_vif_idx(vif));
-
- kfree(wid.val);
-
- up(&hif_sema_wait_response);
-
- return result;
-}
-
static void handle_set_tx_pwr(struct wilc_vif *vif, u8 tx_pwr)
{
int ret;
@@ -2854,10 +2797,6 @@
Handle_SetMulticastFilter(msg.vif, &msg.body.multicast_info);
break;
- case HOST_IF_MSG_DEL_ALL_RX_BA_SESSIONS:
- Handle_DelAllRxBASessions(msg.vif, &msg.body.session_info);
- break;
-
case HOST_IF_MSG_DEL_ALL_STA:
Handle_DelAllSta(msg.vif, &msg.body.del_all_sta_info);
break;
@@ -2870,7 +2809,7 @@
handle_get_tx_pwr(msg.vif, &msg.body.tx_power.tx_pwr);
break;
default:
- PRINT_ER("[Host Interface] undefined Received Msg ID\n");
+ netdev_err(vif->ndev, "[Host Interface] undefined\n");
break;
}
}
@@ -2923,7 +2862,7 @@
if (!hif_drv) {
result = -EFAULT;
- PRINT_ER("Failed to send setup multicast config packet\n");
+ netdev_err(vif->ndev, "Failed to send setup multicast\n");
return result;
}
@@ -2937,7 +2876,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending message queue : Request to remove WEP key\n");
+ netdev_err(vif->ndev, "Request to remove WEP key\n");
down(&hif_drv->sem_test_key_block);
return result;
@@ -2951,7 +2890,7 @@
if (!hif_drv) {
result = -EFAULT;
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return result;
}
@@ -2965,7 +2904,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending message queue : Default key index\n");
+ netdev_err(vif->ndev, "Default key index\n");
down(&hif_drv->sem_test_key_block);
return result;
@@ -2979,7 +2918,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -2998,7 +2937,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending message queue :WEP Key\n");
+ netdev_err(vif->ndev, "STA - WEP Key\n");
down(&hif_drv->sem_test_key_block);
return result;
@@ -3010,19 +2949,14 @@
int result = 0;
struct host_if_msg msg;
struct host_if_drv *hif_drv = vif->hif_drv;
- int i;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
memset(&msg, 0, sizeof(struct host_if_msg));
- if (INFO) {
- for (i = 0; i < len; i++)
- PRINT_INFO(HOSTAPD_DBG, "KEY is %x\n", key[i]);
- }
msg.id = HOST_IF_MSG_KEY;
msg.body.key_info.type = WEP;
msg.body.key_info.action = ADDKEY_AP;
@@ -3039,7 +2973,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending message queue :WEP Key\n");
+ netdev_err(vif->ndev, "AP - WEP Key\n");
down(&hif_drv->sem_test_key_block);
return result;
@@ -3053,10 +2987,9 @@
struct host_if_msg msg;
struct host_if_drv *hif_drv = vif->hif_drv;
u8 key_len = ptk_key_len;
- int i;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -3081,20 +3014,11 @@
if (!msg.body.key_info.attr.wpa.key)
return -ENOMEM;
- if (rx_mic) {
+ if (rx_mic)
memcpy(msg.body.key_info.attr.wpa.key + 16, rx_mic, RX_MIC_KEY_LEN);
- if (INFO) {
- for (i = 0; i < RX_MIC_KEY_LEN; i++)
- PRINT_INFO(CFG80211_DBG, "PairwiseRx[%d] = %x\n", i, rx_mic[i]);
- }
- }
- if (tx_mic) {
+
+ if (tx_mic)
memcpy(msg.body.key_info.attr.wpa.key + 24, tx_mic, TX_MIC_KEY_LEN);
- if (INFO) {
- for (i = 0; i < TX_MIC_KEY_LEN; i++)
- PRINT_INFO(CFG80211_DBG, "PairwiseTx[%d] = %x\n", i, tx_mic[i]);
- }
- }
msg.body.key_info.attr.wpa.key_len = key_len;
msg.body.key_info.attr.wpa.mac_addr = mac_addr;
@@ -3104,7 +3028,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending message queue: PTK Key\n");
+ netdev_err(vif->ndev, "PTK Key\n");
down(&hif_drv->sem_test_key_block);
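All of these wilc_add_*_key()/wilc_set_*() entry points share one request/complete shape: build a host_if_msg, post it to the worker thread with wilc_mq_send(), then block on a per-request semaphore that the handler ups once the firmware transaction is done. The skeleton (field names as used in this driver):

	struct host_if_msg msg;

	memset(&msg, 0, sizeof(struct host_if_msg));
	msg.id = HOST_IF_MSG_KEY;	/* dispatched by the host-IF thread */
	msg.vif = vif;

	result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
	if (result)
		netdev_err(vif->ndev, "failed to queue key request\n");

	down(&hif_drv->sem_test_key_block);	/* wait for the handler's up() */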
@@ -3122,7 +3046,7 @@
u8 key_len = gtk_key_len;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
memset(&msg, 0, sizeof(struct host_if_msg));
@@ -3172,7 +3096,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending message queue: RX GTK\n");
+ netdev_err(vif->ndev, "RX GTK\n");
down(&hif_drv->sem_test_key_block);
@@ -3188,7 +3112,7 @@
int i;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -3208,7 +3132,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER(" Error in sending messagequeue: PMKID Info\n");
+ netdev_err(vif->ndev, "PMKID Info\n");
return result;
}
@@ -3226,7 +3150,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result) {
- PRINT_ER("Failed to send get mac address\n");
+ netdev_err(vif->ndev, "Failed to send get mac address\n");
return -EFAULT;
}
@@ -3245,12 +3169,12 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv || !connect_result) {
- PRINT_ER("Driver is null\n");
+ netdev_err(vif->ndev, "Driver is null\n");
return -EFAULT;
}
if (!join_params) {
- PRINT_ER("Unable to Join - JoinParams is NULL\n");
+ netdev_err(vif->ndev, "Unable to Join - JoinParams is NULL\n");
return -EFAULT;
}
@@ -3290,7 +3214,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result) {
- PRINT_ER("Failed to send message queue: Set join request\n");
+ netdev_err(vif->ndev, "send message: Set join request\n");
return -EFAULT;
}
@@ -3308,7 +3232,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("Driver is null\n");
+ netdev_err(vif->ndev, "Driver is null\n");
return -EFAULT;
}
@@ -3319,7 +3243,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Failed to send message queue: disconnect\n");
+ netdev_err(vif->ndev, "Failed to send message: disconnect\n");
down(&hif_drv->sem_test_disconn_block);
@@ -3336,7 +3260,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("Driver is null\n");
+ netdev_err(vif->ndev, "Driver is null\n");
return -EFAULT;
}
@@ -3349,10 +3273,10 @@
wilc_get_vif_idx(vif));
if (result) {
*pu32RcvdAssocRespInfoLen = 0;
- PRINT_ER("Failed to send association response config packet\n");
+ netdev_err(vif->ndev, "Failed to send association response\n");
return -EINVAL;
}
-
+
*pu32RcvdAssocRespInfoLen = wid.size;
return result;
}
@@ -3364,7 +3288,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -3375,7 +3299,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result) {
- PRINT_ER("wilc mq send fail\n");
+ netdev_err(vif->ndev, "wilc mq send fail\n");
return -EINVAL;
}
@@ -3395,7 +3319,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result) {
- PRINT_ER("wilc mq send fail\n");
+ netdev_err(vif->ndev, "wilc mq send fail\n");
result = -EINVAL;
}
@@ -3414,7 +3338,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result) {
- PRINT_ER("wilc mq send fail\n");
+ netdev_err(vif->ndev, "wilc mq send fail\n");
result = -EINVAL;
}
@@ -3429,7 +3353,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -3441,7 +3365,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Failed to send get host channel param's message queue ");
+ netdev_err(vif->ndev, "Failed to send get host ch param\n");
down(&hif_drv->sem_inactive_time);
@@ -3462,14 +3386,14 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result) {
- PRINT_ER("Failed to send get host channel param's message queue ");
+ netdev_err(vif->ndev, "Failed to send get host ch param\n");
return -EFAULT;
}
down(&hif_drv->sem_get_rssi);
if (!rssi_level) {
- PRINT_ER("RSS pointer value is null");
+ netdev_err(vif->ndev, "RSS pointer value is null\n");
return -EFAULT;
}
@@ -3490,7 +3414,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result) {
- PRINT_ER("Failed to send get host channel param's message queue ");
+ netdev_err(vif->ndev, "Failed to send get host channel\n");
return -EFAULT;
}
@@ -3510,7 +3434,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv || !scan_result) {
- PRINT_ER("hif_drv or scan_result = NULL\n");
+ netdev_err(vif->ndev, "hif_drv or scan_result = NULL\n");
return -EFAULT;
}
@@ -3543,7 +3467,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result) {
- PRINT_ER("Error in sending message queue\n");
+ netdev_err(vif->ndev, "Error in sending message queue\n");
return -EINVAL;
}
@@ -3555,20 +3479,20 @@
}
int wilc_hif_set_cfg(struct wilc_vif *vif,
- struct cfg_param_val *cfg_param)
+ struct cfg_param_attr *cfg_param)
{
int result = 0;
struct host_if_msg msg;
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("hif_drv NULL\n");
+ netdev_err(vif->ndev, "hif_drv NULL\n");
return -EFAULT;
}
memset(&msg, 0, sizeof(struct host_if_msg));
msg.id = HOST_IF_MSG_CFG_PARAMS;
- msg.body.cfg_info.cfg_attr_info = *cfg_param;
+ msg.body.cfg_info = *cfg_param;
msg.vif = vif;
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
@@ -3581,7 +3505,7 @@
struct wilc_vif *vif = (struct wilc_vif *)arg;
if (!vif->hif_drv) {
- PRINT_ER("Driver handler is NULL\n");
+ netdev_err(vif->ndev, "Driver handler is NULL\n");
return;
}
@@ -3630,15 +3554,13 @@
sema_init(&hif_drv->sem_test_key_block, 0);
sema_init(&hif_drv->sem_test_disconn_block, 0);
sema_init(&hif_drv->sem_get_rssi, 0);
- sema_init(&hif_drv->sem_get_link_speed, 0);
- sema_init(&hif_drv->sem_get_chnl, 0);
sema_init(&hif_drv->sem_inactive_time, 0);
if (clients_count == 0) {
result = wilc_mq_create(&hif_msg_q);
if (result < 0) {
- PRINT_ER("Failed to creat MQ\n");
+ netdev_err(vif->ndev, "Failed to creat MQ\n");
goto _fail_;
}
@@ -3646,7 +3568,7 @@
"WILC_kthread");
if (IS_ERR(hif_thread_handler)) {
- PRINT_ER("Failed to creat Thread\n");
+ netdev_err(vif->ndev, "Failed to creat Thread\n");
result = -EFAULT;
goto _fail_mq_;
}
@@ -3690,7 +3612,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("hif_drv = NULL\n");
+ netdev_err(vif->ndev, "hif_drv = NULL\n");
return -EFAULT;
}
@@ -3725,7 +3647,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result != 0)
- PRINT_ER("Error in sending deinit's message queue message function: Error(%d)\n", result);
+ netdev_err(vif->ndev, "deinit : Error(%d)\n", result);
down(&hif_sema_thread);
@@ -3756,7 +3678,7 @@
hif_drv = vif->hif_drv;
if (!hif_drv || hif_drv == terminated_handle) {
- PRINT_ER("NetworkInfo received but driver not init[%p]\n", hif_drv);
+ netdev_err(vif->ndev, "driver not init[%p]\n", hif_drv);
return;
}
@@ -3771,7 +3693,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending network info message queue message parameters: Error(%d)\n", result);
+ netdev_err(vif->ndev, "message parameters (%d)\n", result);
}
void wilc_gnrl_async_info_received(struct wilc *wilc, u8 *pu8Buffer,
@@ -3800,7 +3722,7 @@
}
if (!hif_drv->usr_conn_req.conn_result) {
- PRINT_ER("Received mac status is not needed when there is no current Connect Reques\n");
+ netdev_err(vif->ndev, "there is no current Connect Request\n");
up(&hif_sema_deinit);
return;
}
@@ -3816,7 +3738,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending message queue asynchronous message info: Error(%d)\n", result);
+ netdev_err(vif->ndev, "synchronous info (%d)\n", result);
up(&hif_sema_deinit);
}
@@ -3847,7 +3769,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("Error in sending message queue scan complete parameters: Error(%d)\n", result);
+ netdev_err(vif->ndev, "complete param (%d)\n", result);
}
}
@@ -3862,7 +3784,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -3879,7 +3801,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc mq send fail\n");
+ netdev_err(vif->ndev, "wilc mq send fail\n");
return result;
}
@@ -3891,7 +3813,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -3904,7 +3826,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc mq send fail\n");
+ netdev_err(vif->ndev, "wilc mq send fail\n");
return result;
}
@@ -3916,7 +3838,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -3941,7 +3863,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc mq send fail\n");
+ netdev_err(vif->ndev, "wilc mq send fail\n");
return result;
}
@@ -3955,7 +3877,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -3985,7 +3907,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc mq send fail\n");
+ netdev_err(vif->ndev, "wilc mq send fail\n");
ERRORHANDLER:
if (result) {
@@ -4004,7 +3926,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4013,7 +3935,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
return result;
}
@@ -4026,7 +3948,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4046,7 +3968,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
return result;
}
@@ -4058,7 +3980,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4074,7 +3996,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
return result;
}
@@ -4089,7 +4011,7 @@
u8 assoc_sta = 0;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4101,26 +4023,17 @@
for (i = 0; i < MAX_NUM_STA; i++) {
if (memcmp(mac_addr[i], zero_addr, ETH_ALEN)) {
memcpy(del_all_sta_info->del_all_sta[i], mac_addr[i], ETH_ALEN);
- PRINT_D(CFG80211_DBG, "BSSID = %x%x%x%x%x%x\n",
- del_all_sta_info->del_all_sta[i][0],
- del_all_sta_info->del_all_sta[i][1],
- del_all_sta_info->del_all_sta[i][2],
- del_all_sta_info->del_all_sta[i][3],
- del_all_sta_info->del_all_sta[i][4],
- del_all_sta_info->del_all_sta[i][5]);
assoc_sta++;
}
}
- if (!assoc_sta) {
- PRINT_D(CFG80211_DBG, "NO ASSOCIATED STAS\n");
+ if (!assoc_sta)
return result;
- }
del_all_sta_info->assoc_sta = assoc_sta;
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
down(&hif_sema_wait_response);
@@ -4136,7 +4049,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4156,7 +4069,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
return result;
}
@@ -4169,7 +4082,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4186,7 +4099,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
return result;
}
@@ -4199,7 +4112,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4213,7 +4126,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
return result;
}
@@ -4378,7 +4291,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4392,7 +4305,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
return result;
}
@@ -4404,7 +4317,7 @@
struct host_if_drv *hif_drv = vif->hif_drv;
if (!hif_drv) {
- PRINT_ER("driver is null\n");
+ netdev_err(vif->ndev, "driver is null\n");
return -EFAULT;
}
@@ -4418,7 +4331,7 @@
result = wilc_mq_send(&hif_msg_q, &msg, sizeof(struct host_if_msg));
if (result)
- PRINT_ER("wilc_mq_send fail\n");
+ netdev_err(vif->ndev, "wilc_mq_send fail\n");
return result;
}
diff --git a/drivers/staging/wilc1000/host_interface.h b/drivers/staging/wilc1000/host_interface.h
index 5e65f2c..795c18c 100644
--- a/drivers/staging/wilc1000/host_interface.h
+++ b/drivers/staging/wilc1000/host_interface.h
@@ -96,7 +96,7 @@
MBPS_54 = 54
};
-struct cfg_param_val {
+struct cfg_param_attr {
u32 flag;
u8 ht_enable;
u8 bss_type;
@@ -172,9 +172,9 @@
void *, void *);
typedef void (*wilc_connect_result)(enum conn_event,
- tstrConnectInfo *,
+ struct connect_info *,
u8,
- tstrDisconnectNotifInfo *,
+ struct disconnect_info *,
void *);
typedef void (*wilc_remain_on_chan_expired)(void *, u32);
@@ -272,14 +272,12 @@
enum host_if_state hif_state;
u8 assoc_bssid[ETH_ALEN];
- struct cfg_param_val cfg_values;
+ struct cfg_param_attr cfg_values;
struct semaphore sem_cfg_values;
struct semaphore sem_test_key_block;
struct semaphore sem_test_disconn_block;
struct semaphore sem_get_rssi;
- struct semaphore sem_get_link_speed;
- struct semaphore sem_get_chnl;
struct semaphore sem_inactive_time;
struct timer_list scan_timer;
@@ -338,7 +336,7 @@
size_t ies_len, wilc_scan_result scan_result, void *user_arg,
struct hidden_network *hidden_network);
int wilc_hif_set_cfg(struct wilc_vif *vif,
- struct cfg_param_val *cfg_param);
+ struct cfg_param_attr *cfg_param);
int wilc_init(struct net_device *dev, struct host_if_drv **hif_drv_handler);
int wilc_deinit(struct wilc_vif *vif);
int wilc_add_beacon(struct wilc_vif *vif, u32 interval, u32 dtim_period,
diff --git a/drivers/staging/wilc1000/linux_mon.c b/drivers/staging/wilc1000/linux_mon.c
index 21f35d7..7d9e5de 100644
--- a/drivers/staging/wilc1000/linux_mon.c
+++ b/drivers/staging/wilc1000/linux_mon.c
@@ -7,7 +7,6 @@
* @version 1.0
*/
#include "wilc_wfi_cfgoperations.h"
-#include "linux_wlan_common.h"
#include "wilc_wlan_if.h"
#include "wilc_wlan.h"
@@ -52,15 +51,11 @@
struct wilc_wfi_radiotap_hdr *hdr;
struct wilc_wfi_radiotap_cb_hdr *cb_hdr;
- PRINT_INFO(HOSTAPD_DBG, "In monitor interface receive function\n");
-
if (!wilc_wfi_mon)
return;
- if (!netif_running(wilc_wfi_mon)) {
- PRINT_INFO(HOSTAPD_DBG, "Monitor interface already RUNNING\n");
+ if (!netif_running(wilc_wfi_mon))
return;
- }
/* Get WILC header */
memcpy(&header, (buff - HOST_HDR_OFFSET), HOST_HDR_OFFSET);
@@ -73,10 +68,8 @@
/* hostapd callback mgmt frame */
skb = dev_alloc_skb(size + sizeof(struct wilc_wfi_radiotap_cb_hdr));
- if (!skb) {
- PRINT_INFO(HOSTAPD_DBG, "Monitor if : No memory to allocate skb");
+ if (!skb)
return;
- }
memcpy(skb_put(skb, size), buff, size);
@@ -103,20 +96,16 @@
} else {
skb = dev_alloc_skb(size + sizeof(struct wilc_wfi_radiotap_hdr));
- if (!skb) {
- PRINT_INFO(HOSTAPD_DBG, "Monitor if : No memory to allocate skb");
+ if (!skb)
return;
- }
memcpy(skb_put(skb, size), buff, size);
hdr = (struct wilc_wfi_radiotap_hdr *)skb_push(skb, sizeof(*hdr));
memset(hdr, 0, sizeof(struct wilc_wfi_radiotap_hdr));
hdr->hdr.it_version = 0; /* PKTHDR_RADIOTAP_VERSION; */
hdr->hdr.it_len = cpu_to_le16(sizeof(struct wilc_wfi_radiotap_hdr));
- PRINT_INFO(HOSTAPD_DBG, "Radiotap len %d\n", hdr->hdr.it_len);
hdr->hdr.it_present = cpu_to_le32
(1 << IEEE80211_RADIOTAP_RATE); /* | */
- PRINT_INFO(HOSTAPD_DBG, "Presentflags %d\n", hdr->hdr.it_present);
hdr->rate = 5; /* txrate->bitrate / 5; */
}
@@ -138,14 +127,6 @@
static void mgmt_tx_complete(void *priv, int status)
{
struct tx_complete_mon_data *pv_data = priv;
- u8 *buf = pv_data->buff;
-
- if (status == 1) {
- if (INFO || buf[0] == 0x10 || buf[0] == 0xb0)
- PRINT_INFO(HOSTAPD_DBG, "Packet sent successfully - Size = %d - Address = %p.\n", pv_data->size, pv_data->buff);
- } else {
- PRINT_INFO(HOSTAPD_DBG, "Couldn't send packet - Size = %d - Address = %p.\n", pv_data->size, pv_data->buff);
- }
/* incase of fully hosting mode, the freeing will be done in response to the cfg packet */
kfree(pv_data->buff);
@@ -157,10 +138,8 @@
{
struct tx_complete_mon_data *mgmt_tx = NULL;
- if (!dev) {
- PRINT_D(HOSTAPD_DBG, "ERROR: dev == NULL\n");
+ if (!dev)
return -EFAULT;
- }
netif_stop_queue(dev);
mgmt_tx = kmalloc(sizeof(*mgmt_tx), GFP_ATOMIC);
@@ -195,7 +174,7 @@
static netdev_tx_t WILC_WFI_mon_xmit(struct sk_buff *skb,
struct net_device *dev)
{
- u32 rtap_len, i, ret = 0;
+ u32 rtap_len, ret = 0;
struct WILC_WFI_mon_priv *mon_priv;
struct sk_buff *skb2;
@@ -205,30 +184,14 @@
return -EFAULT;
mon_priv = netdev_priv(wilc_wfi_mon);
-
- if (!mon_priv) {
- PRINT_ER("Monitor interface private structure is NULL\n");
+ if (!mon_priv)
return -EFAULT;
- }
-
rtap_len = ieee80211_get_radiotap_len(skb->data);
- if (skb->len < rtap_len) {
- PRINT_ER("Error in radiotap header\n");
+ if (skb->len < rtap_len)
return -1;
- }
- /* skip the radiotap header */
- PRINT_INFO(HOSTAPD_DBG, "Radiotap len: %d\n", rtap_len);
- if (INFO) {
- for (i = 0; i < rtap_len; i++)
- PRINT_INFO(HOSTAPD_DBG, "Radiotap_hdr[%d] %02x\n", i, skb->data[i]);
- }
- /* Skip the ratio tap header */
skb_pull(skb, rtap_len);
- if (skb->data[0] == 0xc0)
- PRINT_INFO(HOSTAPD_DBG, "%x:%x:%x:%x:%x%x\n", skb->data[4], skb->data[5], skb->data[6], skb->data[7], skb->data[8], skb->data[9]);
-
if (skb->data[0] == 0xc0 && (!(memcmp(broadcast, &skb->data[4], 6)))) {
skb2 = dev_alloc_skb(skb->len + sizeof(struct wilc_wfi_radiotap_cb_hdr));
@@ -261,19 +224,15 @@
}
skb->dev = mon_priv->real_ndev;
- PRINT_INFO(HOSTAPD_DBG, "Skipping the radiotap header\n");
-
- /* actual deliver of data is device-specific, and not shown here */
- PRINT_INFO(HOSTAPD_DBG, "SKB netdevice name = %s\n", skb->dev->name);
- PRINT_INFO(HOSTAPD_DBG, "MONITOR real dev name = %s\n", mon_priv->real_ndev->name);
-
/* Identify if Ethernet or MAC header (data or mgmt) */
memcpy(srcAdd, &skb->data[10], 6);
memcpy(bssid, &skb->data[16], 6);
/* if source address and bssid fields are equal>>Mac header */
/*send it to mgmt frames handler */
if (!(memcmp(srcAdd, bssid, 6))) {
- mon_mgmt_tx(mon_priv->real_ndev, skb->data, skb->len);
+ ret = mon_mgmt_tx(mon_priv->real_ndev, skb->data, skb->len);
+ if (ret)
+ netdev_err(dev, "fail to mgmt tx\n");
dev_kfree_skb(skb);
} else {
ret = wilc_mac_xmit(skb, mon_priv->real_ndev);
@@ -302,15 +261,12 @@
struct WILC_WFI_mon_priv *priv;
/*If monitor interface is already initialized, return it*/
- if (wilc_wfi_mon)
+ if (wilc_wfi_mon)
return wilc_wfi_mon;
wilc_wfi_mon = alloc_etherdev(sizeof(struct WILC_WFI_mon_priv));
- if (!wilc_wfi_mon) {
- PRINT_ER("failed to allocate memory\n");
+ if (!wilc_wfi_mon)
return NULL;
- }
-
wilc_wfi_mon->type = ARPHRD_IEEE80211_RADIOTAP;
strncpy(wilc_wfi_mon->name, name, IFNAMSIZ);
wilc_wfi_mon->name[IFNAMSIZ - 1] = 0;
@@ -318,14 +274,12 @@
ret = register_netdevice(wilc_wfi_mon);
if (ret) {
- PRINT_ER(" register_netdevice failed (%d)\n", ret);
+ netdev_err(real_dev, "register_netdevice failed\n");
return NULL;
}
priv = netdev_priv(wilc_wfi_mon);
- if (!priv) {
- PRINT_ER("private structure is NULL\n");
+ if (!priv)
return NULL;
- }
priv->real_ndev = real_dev;
@@ -346,13 +300,10 @@
bool rollback_lock = false;
if (wilc_wfi_mon) {
- PRINT_D(HOSTAPD_DBG, "In Deinit monitor interface\n");
- PRINT_D(HOSTAPD_DBG, "RTNL is being locked\n");
if (rtnl_is_locked()) {
rtnl_unlock();
rollback_lock = true;
}
- PRINT_D(HOSTAPD_DBG, "Unregister netdev\n");
unregister_netdev(wilc_wfi_mon);
if (rollback_lock) {
diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c
index a731b46..bfa754b 100644
--- a/drivers/staging/wilc1000/linux_wlan.c
+++ b/drivers/staging/wilc1000/linux_wlan.c
@@ -1,5 +1,4 @@
#include "wilc_wfi_cfgoperations.h"
-#include "linux_wlan_common.h"
#include "wilc_wlan_if.h"
#include "wilc_wlan.h"
@@ -13,7 +12,6 @@
#include <linux/kthread.h>
#include <linux/firmware.h>
-#include <linux/delay.h>
#include <linux/init.h>
#include <linux/netdevice.h>
@@ -224,11 +222,6 @@
}
}
-void wilc_dbg(u8 *buff)
-{
- PRINT_D(INIT_DBG, "%d\n", *buff);
-}
-
int wilc_lock_timeout(struct wilc *nic, void *vp, u32 timeout)
{
/* FIXME: replace with mutex_lock or wait_for_completion */
@@ -1312,7 +1305,7 @@
int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type,
int gpio, const struct wilc_hif_func *ops)
{
- int i;
+ int i, ret;
struct wilc_vif *vif;
struct net_device *ndev;
struct wilc *wl;
@@ -1333,7 +1326,7 @@
for (i = 0; i < NUM_CONCURRENT_IFC; i++) {
ndev = alloc_etherdev(sizeof(struct wilc_vif));
if (!ndev)
- return -1;
+ return -ENOMEM;
vif = netdev_priv(ndev);
memset(vif, 0, sizeof(struct wilc_vif));
@@ -1372,8 +1365,9 @@
vif->netstats.tx_bytes = 0;
}
- if (register_netdev(ndev))
- return -1;
+ ret = register_netdev(ndev);
+ if (ret)
+ return ret;
vif->iftype = STATION_MODE;
vif->mac_opened = 0;
diff --git a/drivers/staging/wilc1000/linux_wlan_common.h b/drivers/staging/wilc1000/linux_wlan_common.h
deleted file mode 100644
index 0d9a71f..0000000
--- a/drivers/staging/wilc1000/linux_wlan_common.h
+++ /dev/null
@@ -1,138 +0,0 @@
-#ifndef LINUX_WLAN_COMMON_H
-#define LINUX_WLAN_COMMON_H
-
-enum debug_region {
- Hostapd_debug = 0,
- CFG80211_debug,
- Init_debug,
- COMP = 0xFFFFFFFF,
-};
-
-#define HOSTAPD_DBG (1 << Hostapd_debug)
-#define CFG80211_DBG (1 << CFG80211_debug)
-#define INIT_DBG (1 << Init_debug)
-
-#if defined(WILC_DEBUGFS)
-extern atomic_t WILC_REGION;
-extern atomic_t WILC_DEBUG_LEVEL;
-
-#define DEBUG BIT(0)
-#define INFO BIT(1)
-#define WRN BIT(2)
-#define ERR BIT(3)
-
-#define PRINT_D(region, ...) \
- do { \
- if ((atomic_read(&WILC_DEBUG_LEVEL) & DEBUG) && \
- ((atomic_read(&WILC_REGION)) & (region))) { \
- printk("DBG [%s: %d]", __func__, __LINE__); \
- printk(__VA_ARGS__); \
- } \
- } while (0)
-
-#define PRINT_INFO(region, ...) \
- do { \
- if ((atomic_read(&WILC_DEBUG_LEVEL) & INFO) && \
- ((atomic_read(&WILC_REGION)) & (region))) { \
- printk("INFO [%s]", __func__); \
- printk(__VA_ARGS__); \
- } \
- } while (0)
-
-#define PRINT_WRN(region, ...) \
- do { \
- if ((atomic_read(&WILC_DEBUG_LEVEL) & WRN) && \
- ((atomic_read(&WILC_REGION)) & (region))) { \
- printk("WRN [%s: %d]", __func__, __LINE__); \
- printk(__VA_ARGS__); \
- } \
- } while (0)
-
-#define PRINT_ER(...) \
- do { \
- if ((atomic_read(&WILC_DEBUG_LEVEL) & ERR)) { \
- printk("ERR [%s: %d]", __func__, __LINE__); \
- printk(__VA_ARGS__); \
- } \
- } while (0)
-
-#else
-
-#define REGION (INIT_DBG | GENERIC_DBG | CFG80211_DBG | FIRM_DBG | HOSTAPD_DBG)
-
-#define DEBUG 1
-#define INFO 0
-#define WRN 0
-
-#define PRINT_D(region, ...) \
- do { \
- if (DEBUG == 1 && ((REGION)&(region))) { \
- printk("DBG [%s: %d]", __func__, __LINE__); \
- printk(__VA_ARGS__); \
- } \
- } while (0)
-
-#define PRINT_INFO(region, ...) \
- do { \
- if (INFO == 1 && ((REGION)&(region))) { \
- printk("INFO [%s]", __func__); \
- printk(__VA_ARGS__); \
- } \
- } while (0)
-
-#define PRINT_WRN(region, ...) \
- do { \
- if (WRN == 1 && ((REGION)&(region))) { \
- printk("WRN [%s: %d]", __func__, __LINE__); \
- printk(__VA_ARGS__); \
- } \
- } while (0)
-
-#define PRINT_ER(...) \
- do { \
- printk("ERR [%s: %d]", __func__, __LINE__); \
- printk(__VA_ARGS__); \
- } while (0)
-
-#endif
-
-#define LINUX_RX_SIZE (96 * 1024)
-#define LINUX_TX_SIZE (64 * 1024)
-
-
-#define WILC_MULTICAST_TABLE_SIZE 8
-
-#if defined(BEAGLE_BOARD)
- #define SPI_CHANNEL 4
-
- #if SPI_CHANNEL == 4
- #define MODALIAS "wilc_spi4"
- #define GPIO_NUM 162
- #else
- #define MODALIAS "wilc_spi3"
- #define GPIO_NUM 133
- #endif
-#elif defined(PLAT_WMS8304) /* rachel */
- #define MODALIAS "wilc_spi"
- #define GPIO_NUM 139
-#elif defined(PLAT_RKXXXX)
- #define MODALIAS "WILC_IRQ"
- #define GPIO_NUM RK30_PIN3_PD2 /* RK30_PIN3_PA1 */
-/* RK30_PIN3_PD2 */
-/* RK2928_PIN1_PA7 */
-
-#elif defined(CUSTOMER_PLATFORM)
-/*
- TODO : specify MODALIAS name and GPIO number. This is certainly necessary for SPI interface.
- *
- * ex)
- * #define MODALIAS "WILC_SPI"
- * #define GPIO_NUM 139
- */
-
-#else
-/* base on SAMA5D3_Xplained Board */
- #define MODALIAS "WILC_SPI"
- #define GPIO_NUM 0x44
-#endif
-#endif
diff --git a/drivers/staging/wilc1000/wilc_debugfs.c b/drivers/staging/wilc1000/wilc_debugfs.c
index da7ec8b..fcbc95d 100644
--- a/drivers/staging/wilc1000/wilc_debugfs.c
+++ b/drivers/staging/wilc1000/wilc_debugfs.c
@@ -23,12 +23,12 @@
/*
* --------------------------------------------------------------------------------
*/
+#define DEBUG BIT(0)
+#define INFO BIT(1)
+#define WRN BIT(2)
+#define ERR BIT(3)
-#define DBG_REGION_ALL (HOSTAPD_DBG | CFG80211_DBG | INIT_DBG)
#define DBG_LEVEL_ALL (DEBUG | INFO | WRN | ERR)
-atomic_t WILC_REGION = ATOMIC_INIT(INIT_DBG | CFG80211_DBG |
- HOSTAPD_DBG);
-EXPORT_SYMBOL_GPL(WILC_REGION);
atomic_t WILC_DEBUG_LEVEL = ATOMIC_INIT(ERR);
EXPORT_SYMBOL_GPL(WILC_DEBUG_LEVEL);
@@ -76,44 +76,6 @@
return count;
}
-static ssize_t wilc_debug_region_read(struct file *file, char __user *userbuf, size_t count, loff_t *ppos)
-{
- char buf[128];
- int res = 0;
-
- /* only allow read from start */
- if (*ppos > 0)
- return 0;
-
- res = scnprintf(buf, sizeof(buf), "Debug region: %x\n", atomic_read(&WILC_REGION));
-
- return simple_read_from_buffer(userbuf, count, ppos, buf, res);
-}
-
-static ssize_t wilc_debug_region_write(struct file *filp, const char __user *buf, size_t count, loff_t *ppos)
-{
- char buffer[128] = {};
- int flag;
-
- if (count > sizeof(buffer))
- return -EINVAL;
-
- if (copy_from_user(buffer, buf, count))
- return -EFAULT;
-
- flag = buffer[0] - '0';
-
- if (flag > DBG_REGION_ALL) {
- printk("%s, value (0x%08x) is out of range, stay previous flag (0x%08x)\n", __func__, flag, atomic_read(&WILC_REGION));
- return -EFAULT;
- }
-
- atomic_set(&WILC_REGION, (int)flag);
- printk("new debug-region is %x\n", atomic_read(&WILC_REGION));
-
- return count;
-}
-
/*
* --------------------------------------------------------------------------------
*/
@@ -130,12 +92,11 @@
const char *name;
int perm;
unsigned int data;
- struct file_operations fops;
+ const struct file_operations fops;
};
static struct wilc_debugfs_info_t debugfs_info[] = {
{ "wilc_debug_level", 0666, (DEBUG | ERR), FOPS(NULL, wilc_debug_level_read, wilc_debug_level_write, NULL), },
- { "wilc_debug_region", 0666, (INIT_DBG | CFG80211_DBG), FOPS(NULL, wilc_debug_region_read, wilc_debug_region_write, NULL), },
};
static int __init wilc_debugfs_init(void)
diff --git a/drivers/staging/wilc1000/wilc_msgqueue.c b/drivers/staging/wilc1000/wilc_msgqueue.c
index 780ddd3..6cb894e 100644
--- a/drivers/staging/wilc1000/wilc_msgqueue.c
+++ b/drivers/staging/wilc1000/wilc_msgqueue.c
@@ -1,7 +1,6 @@
#include "wilc_msgqueue.h"
#include <linux/spinlock.h>
-#include "linux_wlan_common.h"
#include <linux/errno.h>
#include <linux/slab.h>
diff --git a/drivers/staging/wilc1000/wilc_sdio.c b/drivers/staging/wilc1000/wilc_sdio.c
index 963875af..029bd09 100644
--- a/drivers/staging/wilc1000/wilc_sdio.c
+++ b/drivers/staging/wilc1000/wilc_sdio.c
@@ -30,15 +30,15 @@
#define WILC_SDIO_BLOCK_SIZE 512
-typedef struct {
+struct wilc_sdio {
bool irq_gpio;
u32 block_size;
int nint;
#define MAX_NUN_INT_THRPT_ENH2 (5) /* Max num interrupts allowed in registers 0xf7, 0xf8 */
int has_thrpt_enh3;
-} wilc_sdio_t;
+};
-static wilc_sdio_t g_sdio;
+static struct wilc_sdio g_sdio;
static int sdio_write_reg(struct wilc *wilc, u32 addr, u32 data);
static int sdio_read_reg(struct wilc *wilc, u32 addr, u32 *data);
@@ -675,7 +675,7 @@
u32 chipid;
if (!resume) {
- memset(&g_sdio, 0, sizeof(wilc_sdio_t));
+ memset(&g_sdio, 0, sizeof(struct wilc_sdio));
g_sdio.irq_gpio = wilc->dev_irq_num;
}
diff --git a/drivers/staging/wilc1000/wilc_spi.c b/drivers/staging/wilc1000/wilc_spi.c
index 2928712..d41b8b6 100644
--- a/drivers/staging/wilc1000/wilc_spi.c
+++ b/drivers/staging/wilc1000/wilc_spi.c
@@ -18,19 +18,18 @@
#include <linux/spi/spi.h>
#include <linux/of_gpio.h>
-#include "linux_wlan_common.h"
#include <linux/string.h>
#include "wilc_wlan_if.h"
#include "wilc_wlan.h"
#include "wilc_wfi_netdevice.h"
-typedef struct {
+struct wilc_spi {
int crc_off;
int nint;
int has_thrpt_enh;
-} wilc_spi_t;
+};
-static wilc_spi_t g_spi;
+static struct wilc_spi g_spi;
static int wilc_spi_read(struct wilc *wilc, u32, u8 *, u32);
static int wilc_spi_write(struct wilc *wilc, u32, u8 *, u32);
@@ -380,9 +379,8 @@
break;
}
- if (result != N_OK) {
+ if (result != N_OK)
return result;
- }
if (!g_spi.crc_off)
wb[len - 1] = (crc7(0x7f, (const u8 *)&wb[0], len - 1)) << 1;
@@ -419,9 +417,8 @@
return result;
}
/* zero spi write buffers. */
- for (wix = len; wix < len2; wix++) {
+ for (wix = len; wix < len2; wix++)
wb[wix] = 0;
- }
rix = len;
if (wilc_spi_tx_rx(wilc, wb, rb, len2)) {
@@ -445,8 +442,9 @@
/* } while(&rptr[1] <= &rb[len2]); */
if (rsp != cmd) {
- dev_err(&spi->dev, "Failed cmd response, cmd (%02x)"
- ", resp (%02x)\n", cmd, rsp);
+ dev_err(&spi->dev,
+ "Failed cmd response, cmd (%02x), resp (%02x)\n",
+ cmd, rsp);
result = N_FAIL;
return result;
}
@@ -514,7 +512,7 @@
crc[0] = rb[rix++];
crc[1] = rb[rix++];
} else {
- dev_err(&spi->dev,"buffer overrun when reading crc.\n");
+ dev_err(&spi->dev, "buffer overrun when reading crc.\n");
result = N_FAIL;
return result;
}
@@ -523,9 +521,8 @@
int ix;
/* some data may be read in response to dummy bytes. */
- for (ix = 0; (rix < len2) && (ix < sz); ) {
+ for (ix = 0; (rix < len2) && (ix < sz); )
b[ix++] = rb[rix++];
- }
sz -= ix;
@@ -680,7 +677,7 @@
**/
if (!g_spi.crc_off) {
if (wilc_spi_tx(wilc, crc, 2)) {
- dev_err(&spi->dev,"Failed data block crc write, bus error...\n");
+ dev_err(&spi->dev, "Failed data block crc write, bus error...\n");
result = N_FAIL;
break;
}
@@ -711,9 +708,8 @@
dat = cpu_to_le32(dat);
result = spi_cmd_complete(wilc, CMD_INTERNAL_WRITE, adr, (u8 *)&dat, 4,
0);
- if (result != N_OK) {
+ if (result != N_OK)
dev_err(&spi->dev, "Failed internal write cmd...\n");
- }
return result;
}
@@ -756,9 +752,8 @@
}
result = spi_cmd_complete(wilc, cmd, addr, (u8 *)&data, 4, clockless);
- if (result != N_OK) {
+ if (result != N_OK)
dev_err(&spi->dev, "Failed cmd, write reg (%08x)...\n", addr);
- }
return result;
}
@@ -786,9 +781,8 @@
* Data
**/
result = spi_data_write(wilc, buf, size);
- if (result != N_OK) {
+ if (result != N_OK)
dev_err(&spi->dev, "Failed block data write...\n");
- }
return 1;
}
@@ -867,7 +861,7 @@
return 1;
}
- memset(&g_spi, 0, sizeof(wilc_spi_t));
+ memset(&g_spi, 0, sizeof(struct wilc_spi));
/**
* configure protocol
@@ -1074,7 +1068,7 @@
ret = wilc_spi_write_reg(wilc,
WILC_VMM_CORE_CTL, 1);
if (!ret) {
- dev_err(&spi->dev,"fail write reg vmm_core_ctl...\n");
+ dev_err(&spi->dev, "fail write reg vmm_core_ctl...\n");
goto _fail_;
}
}
@@ -1124,9 +1118,9 @@
return 0;
}
- for (i = 0; (i < 5) && (nint > 0); i++, nint--) {
+ for (i = 0; (i < 5) && (nint > 0); i++, nint--)
reg |= (BIT((27 + i)));
- }
+
ret = wilc_spi_write_reg(wilc, WILC_INTR_ENABLE, reg);
if (!ret) {
dev_err(&spi->dev, "Failed write reg (%08x)...\n",
@@ -1141,9 +1135,8 @@
return 0;
}
- for (i = 0; (i < 3) && (nint > 0); i++, nint--) {
+ for (i = 0; (i < 3) && (nint > 0); i++, nint--)
reg |= BIT(i);
- }
ret = wilc_spi_read_reg(wilc, WILC_INTR2_ENABLE, &reg);
if (!ret) {
diff --git a/drivers/staging/wilc1000/wilc_wfi_cfgoperations.c b/drivers/staging/wilc1000/wilc_wfi_cfgoperations.c
index 81a2ee9..378c350 100644
--- a/drivers/staging/wilc1000/wilc_wfi_cfgoperations.c
+++ b/drivers/staging/wilc1000/wilc_wfi_cfgoperations.c
@@ -157,7 +157,7 @@
static u8 curr_channel;
static u8 p2p_oui[] = {0x50, 0x6f, 0x9A, 0x09};
static u8 p2p_local_random = 0x01;
-static u8 p2p_recv_random = 0x00;
+static u8 p2p_recv_random;
static u8 p2p_vendor_spec[] = {0xdd, 0x05, 0x00, 0x08, 0x40, 0x03};
static bool wilc_ie;
@@ -290,9 +290,6 @@
for (i = 0; i < last_scanned_cnt; i++) {
if (time_after(now, last_scanned_shadow[i].time_scan +
(unsigned long)(SCAN_RESULT_EXPIRE))) {
- PRINT_D(CFG80211_DBG, "Network expired ScanShadow:%s\n",
- last_scanned_shadow[i].ssid);
-
kfree(last_scanned_shadow[i].ies);
last_scanned_shadow[i].ies = NULL;
@@ -305,13 +302,9 @@
}
}
- PRINT_D(CFG80211_DBG, "Number of cached networks: %d\n",
- last_scanned_cnt);
if (last_scanned_cnt != 0) {
hAgingTimer.data = arg;
mod_timer(&hAgingTimer, jiffies + msecs_to_jiffies(AGING_TIME));
- } else {
- PRINT_D(CFG80211_DBG, "No need to restart Aging timer\n");
}
}
@@ -327,7 +320,6 @@
int i;
if (last_scanned_cnt == 0) {
- PRINT_D(CFG80211_DBG, "Starting Aging timer\n");
hAgingTimer.data = (unsigned long)user_void;
mod_timer(&hAgingTimer, jiffies + msecs_to_jiffies(AGING_TIME));
state = -1;
@@ -350,10 +342,9 @@
u32 ap_index = 0;
u8 rssi_index = 0;
- if (last_scanned_cnt >= MAX_NUM_SCANNED_NETWORKS_SHADOW) {
- PRINT_D(CFG80211_DBG, "Shadow network reached its maximum limit\n");
+ if (last_scanned_cnt >= MAX_NUM_SCANNED_NETWORKS_SHADOW)
return;
- }
+
if (ap_found == -1) {
ap_index = last_scanned_cnt;
last_scanned_cnt++;
@@ -424,21 +415,8 @@
if (!channel)
return;
- PRINT_INFO(CFG80211_DBG, "Network Info::"
- "CHANNEL Frequency: %d,"
- "RSSI: %d,"
- "Capability Info: %d,"
- "Beacon Period: %d\n",
- channel->center_freq,
- (s32)network_info->rssi * 100,
- network_info->cap_info,
- network_info->beacon_period);
-
if (network_info->new_network) {
if (priv->u32RcvdChCount < MAX_NUM_SCANNED_NETWORKS) {
- PRINT_D(CFG80211_DBG,
- "Network %s found\n",
- network_info->ssid);
priv->u32RcvdChCount++;
add_network_to_shadow(network_info, priv, join_params);
@@ -463,8 +441,6 @@
for (i = 0; i < priv->u32RcvdChCount; i++) {
if (memcmp(last_scanned_shadow[i].bssid, network_info->bssid, 6) == 0) {
- PRINT_D(CFG80211_DBG, "Update RSSI of %s\n", last_scanned_shadow[i].ssid);
-
last_scanned_shadow[i].rssi = network_info->rssi;
last_scanned_shadow[i].time_scan = jiffies;
break;
@@ -473,15 +449,8 @@
}
}
} else if (scan_event == SCAN_EVENT_DONE) {
- PRINT_D(CFG80211_DBG, "Scan Done[%p]\n", priv->dev);
- PRINT_D(CFG80211_DBG, "Refreshing Scan ...\n");
refresh_scan(priv, 1, false);
- if (priv->u32RcvdChCount > 0)
- PRINT_D(CFG80211_DBG, "%d Network(s) found\n", priv->u32RcvdChCount);
- else
- PRINT_D(CFG80211_DBG, "No networks found\n");
-
down(&(priv->hSemScanReq));
if (priv->pstrScanReq) {
@@ -494,7 +463,6 @@
} else if (scan_event == SCAN_EVENT_ABORTED) {
down(&(priv->hSemScanReq));
- PRINT_D(CFG80211_DBG, "Scan Aborted\n");
if (priv->pstrScanReq) {
update_scan_time();
refresh_scan(priv, 1, false);
@@ -511,9 +479,9 @@
int wilc_connecting;
static void CfgConnectResult(enum conn_event enuConnDisconnEvent,
- tstrConnectInfo *pstrConnectInfo,
+ struct connect_info *pstrConnectInfo,
u8 u8MacStatus,
- tstrDisconnectNotifInfo *pstrDisconnectNotifInfo,
+ struct disconnect_info *pstrDisconnectNotifInfo,
void *pUserVoid)
{
struct wilc_priv *priv;
@@ -534,12 +502,10 @@
if (enuConnDisconnEvent == CONN_DISCONN_EVENT_CONN_RESP) {
u16 u16ConnectStatus;
- u16ConnectStatus = pstrConnectInfo->u16ConnectStatus;
-
- PRINT_D(CFG80211_DBG, " Connection response received = %d\n", u8MacStatus);
+ u16ConnectStatus = pstrConnectInfo->status;
if ((u8MacStatus == MAC_DISCONNECTED) &&
- (pstrConnectInfo->u16ConnectStatus == SUCCESSFUL_STATUSCODE)) {
+ (pstrConnectInfo->status == SUCCESSFUL_STATUSCODE)) {
u16ConnectStatus = WLAN_STATUS_UNSPECIFIED_FAILURE;
wilc_wlan_set_bssid(priv->dev, NullBssid,
STATION_MODE);
@@ -555,14 +521,12 @@
bool bNeedScanRefresh = false;
u32 i;
- PRINT_INFO(CFG80211_DBG, "Connection Successful:: BSSID: %x%x%x%x%x%x\n", pstrConnectInfo->au8bssid[0],
- pstrConnectInfo->au8bssid[1], pstrConnectInfo->au8bssid[2], pstrConnectInfo->au8bssid[3], pstrConnectInfo->au8bssid[4], pstrConnectInfo->au8bssid[5]);
- memcpy(priv->au8AssociatedBss, pstrConnectInfo->au8bssid, ETH_ALEN);
+ memcpy(priv->au8AssociatedBss, pstrConnectInfo->bssid, ETH_ALEN);
for (i = 0; i < last_scanned_cnt; i++) {
if (memcmp(last_scanned_shadow[i].bssid,
- pstrConnectInfo->au8bssid,
+ pstrConnectInfo->bssid,
ETH_ALEN) == 0) {
unsigned long now = jiffies;
@@ -579,14 +543,9 @@
refresh_scan(priv, 1, true);
}
-
- PRINT_D(CFG80211_DBG, "Association request info elements length = %zu\n", pstrConnectInfo->ReqIEsLen);
-
- PRINT_D(CFG80211_DBG, "Association response info elements length = %d\n", pstrConnectInfo->u16RespIEsLen);
-
- cfg80211_connect_result(dev, pstrConnectInfo->au8bssid,
- pstrConnectInfo->pu8ReqIEs, pstrConnectInfo->ReqIEsLen,
- pstrConnectInfo->pu8RespIEs, pstrConnectInfo->u16RespIEsLen,
+ cfg80211_connect_result(dev, pstrConnectInfo->bssid,
+ pstrConnectInfo->req_ies, pstrConnectInfo->req_ies_len,
+ pstrConnectInfo->resp_ies, pstrConnectInfo->resp_ies_len,
u16ConnectStatus, GFP_KERNEL);
} else if (enuConnDisconnEvent == CONN_DISCONN_EVENT_DISCONN_NOTIF) {
wilc_optaining_ip = false;
@@ -600,11 +559,11 @@
if (!pstrWFIDrv->p2p_connect)
wlan_channel = INVALID_CHANNEL;
if ((pstrWFIDrv->IFC_UP) && (dev == wl->vif[1]->ndev)) {
- pstrDisconnectNotifInfo->u16reason = 3;
+ pstrDisconnectNotifInfo->reason = 3;
} else if ((!pstrWFIDrv->IFC_UP) && (dev == wl->vif[1]->ndev)) {
- pstrDisconnectNotifInfo->u16reason = 1;
+ pstrDisconnectNotifInfo->reason = 1;
}
- cfg80211_disconnected(dev, pstrDisconnectNotifInfo->u16reason, pstrDisconnectNotifInfo->ie,
+ cfg80211_disconnected(dev, pstrDisconnectNotifInfo->reason, pstrDisconnectNotifInfo->ie,
pstrDisconnectNotifInfo->ie_len, false,
GFP_KERNEL);
}
@@ -622,7 +581,6 @@
vif = netdev_priv(priv->dev);
channelnum = ieee80211_frequency_to_channel(chandef->chan->center_freq);
- PRINT_D(CFG80211_DBG, "Setting channel %d with frequency %d\n", channelnum, chandef->chan->center_freq);
curr_channel = channelnum;
result = wilc_set_mac_chnl_num(vif, channelnum);
@@ -653,18 +611,16 @@
priv->bCfgScanning = true;
if (request->n_channels <= MAX_NUM_SCANNED_NETWORKS) {
- for (i = 0; i < request->n_channels; i++) {
+ for (i = 0; i < request->n_channels; i++)
au8ScanChanList[i] = (u8)ieee80211_frequency_to_channel(request->channels[i]->center_freq);
- PRINT_INFO(CFG80211_DBG, "ScanChannel List[%d] = %d,", i, au8ScanChanList[i]);
- }
-
- PRINT_D(CFG80211_DBG, "Requested num of scan channel %d\n", request->n_channels);
- PRINT_D(CFG80211_DBG, "Scan Request IE len = %zu\n", request->ie_len);
-
- PRINT_D(CFG80211_DBG, "Number of SSIDs %d\n", request->n_ssids);
if (request->n_ssids >= 1) {
- strHiddenNetwork.net_info = kmalloc(request->n_ssids * sizeof(struct hidden_network), GFP_KERNEL);
+ strHiddenNetwork.net_info =
+ kmalloc_array(request->n_ssids,
+ sizeof(struct hidden_network),
+ GFP_KERNEL);
+ if (!strHiddenNetwork.net_info)
+ return -ENOMEM;
strHiddenNetwork.n_ssids = request->n_ssids;
@@ -675,11 +631,9 @@
memcpy(strHiddenNetwork.net_info[i].ssid, request->ssids[i].ssid, request->ssids[i].ssid_len);
strHiddenNetwork.net_info[i].ssid_len = request->ssids[i].ssid_len;
} else {
- PRINT_D(CFG80211_DBG, "Received one NULL SSID\n");
strHiddenNetwork.n_ssids -= 1;
}
}
- PRINT_D(CFG80211_DBG, "Trigger Scan Request\n");
s32Error = wilc_scan(vif, USER_SCAN, ACTIVE_SCAN,
au8ScanChanList,
request->n_channels,
@@ -687,7 +641,6 @@
request->ie_len, CfgScanResult,
(void *)priv, &strHiddenNetwork);
} else {
- PRINT_D(CFG80211_DBG, "Trigger Scan Request\n");
s32Error = wilc_scan(vif, USER_SCAN, ACTIVE_SCAN,
au8ScanChanList,
request->n_channels,
@@ -699,10 +652,8 @@
netdev_err(priv->dev, "Requested scanned channels over\n");
}
- if (s32Error != 0) {
+ if (s32Error != 0)
s32Error = -EBUSY;
- PRINT_WRN(CFG80211_DBG, "Device is busy: Error(%d)\n", s32Error);
- }
return s32Error;
}
@@ -728,53 +679,30 @@
vif = netdev_priv(priv->dev);
pstrWFIDrv = (struct host_if_drv *)priv->hif_drv;
- PRINT_D(CFG80211_DBG,
- "Connecting to SSID [%s] on netdev [%p] host if [%p]\n",
- sme->ssid, dev, priv->hif_drv);
- if (!(strncmp(sme->ssid, "DIRECT-", 7))) {
- PRINT_D(CFG80211_DBG, "Connected to Direct network,OBSS disabled\n");
+ if (!(strncmp(sme->ssid, "DIRECT-", 7)))
pstrWFIDrv->p2p_connect = 1;
- } else {
+ else
pstrWFIDrv->p2p_connect = 0;
- }
- PRINT_INFO(CFG80211_DBG, "Required SSID = %s\n , AuthType = %d\n", sme->ssid, sme->auth_type);
for (i = 0; i < last_scanned_cnt; i++) {
if ((sme->ssid_len == last_scanned_shadow[i].ssid_len) &&
memcmp(last_scanned_shadow[i].ssid,
sme->ssid,
sme->ssid_len) == 0) {
- PRINT_INFO(CFG80211_DBG, "Network with required SSID is found %s\n", sme->ssid);
- if (!sme->bssid) {
- PRINT_INFO(CFG80211_DBG, "BSSID is not passed from the user\n");
+ if (!sme->bssid)
break;
- } else {
+ else
if (memcmp(last_scanned_shadow[i].bssid,
sme->bssid,
- ETH_ALEN) == 0) {
- PRINT_INFO(CFG80211_DBG, "BSSID is passed from the user and matched\n");
+ ETH_ALEN) == 0)
break;
- }
- }
}
}
if (i < last_scanned_cnt) {
- PRINT_D(CFG80211_DBG, "Required bss is in scan results\n");
-
pstrNetworkInfo = &last_scanned_shadow[i];
-
- PRINT_INFO(CFG80211_DBG, "network BSSID to be associated:"
- "%x%x%x%x%x%x\n",
- pstrNetworkInfo->bssid[0], pstrNetworkInfo->bssid[1],
- pstrNetworkInfo->bssid[2], pstrNetworkInfo->bssid[3],
- pstrNetworkInfo->bssid[4], pstrNetworkInfo->bssid[5]);
} else {
s32Error = -ENOENT;
- if (last_scanned_cnt == 0)
- PRINT_D(CFG80211_DBG, "No Scan results yet\n");
- else
- PRINT_D(CFG80211_DBG, "Required bss not in scan results: Error(%d)\n", s32Error);
wilc_connecting = 0;
return s32Error;
}
@@ -782,18 +710,12 @@
memset(priv->WILC_WFI_wep_key, 0, sizeof(priv->WILC_WFI_wep_key));
memset(priv->WILC_WFI_wep_key_len, 0, sizeof(priv->WILC_WFI_wep_key_len));
- PRINT_INFO(CFG80211_DBG, "sme->crypto.wpa_versions=%x\n", sme->crypto.wpa_versions);
- PRINT_INFO(CFG80211_DBG, "sme->crypto.cipher_group=%x\n", sme->crypto.cipher_group);
-
- PRINT_INFO(CFG80211_DBG, "sme->crypto.n_ciphers_pairwise=%d\n", sme->crypto.n_ciphers_pairwise);
-
if (sme->crypto.cipher_group != NO_ENCRYPT) {
pcwpa_version = "Default";
if (sme->crypto.cipher_group == WLAN_CIPHER_SUITE_WEP40) {
u8security = ENCRYPT_ENABLED | WEP;
pcgroup_encrypt_val = "WEP40";
pccipher_group = "WLAN_CIPHER_SUITE_WEP40";
- PRINT_INFO(CFG80211_DBG, "WEP Default Key Idx = %d\n", sme->key_idx);
priv->WILC_WFI_wep_key_len[sme->key_idx] = sme->key_len;
memcpy(priv->WILC_WFI_wep_key[sme->key_idx], sme->key, sme->key_len);
@@ -866,22 +788,17 @@
}
}
- PRINT_D(CFG80211_DBG, "Adding key with cipher group = %x\n", sme->crypto.cipher_group);
-
- PRINT_D(CFG80211_DBG, "Authentication Type = %d\n", sme->auth_type);
switch (sme->auth_type) {
case NL80211_AUTHTYPE_OPEN_SYSTEM:
- PRINT_D(CFG80211_DBG, "In OPEN SYSTEM\n");
tenuAuth_type = OPEN_SYSTEM;
break;
case NL80211_AUTHTYPE_SHARED_KEY:
tenuAuth_type = SHARED_KEY;
- PRINT_D(CFG80211_DBG, "In SHARED KEY\n");
break;
default:
- PRINT_D(CFG80211_DBG, "Automatic Authentation type = %d\n", sme->auth_type);
+ break;
}
if (sme->crypto.n_akm_suites) {
@@ -895,12 +812,6 @@
}
}
-
- PRINT_INFO(CFG80211_DBG, "Required Ch = %d\n", pstrNetworkInfo->ch);
-
- PRINT_INFO(CFG80211_DBG, "Group encryption value = %s\n Cipher Group = %s\n WPA version = %s\n",
- pcgroup_encrypt_val, pccipher_group, pcwpa_version);
-
curr_channel = pstrNetworkInfo->ch;
if (!pstrWFIDrv->p2p_connect)
@@ -941,8 +852,6 @@
wlan_channel = INVALID_CHANNEL;
wilc_wlan_set_bssid(priv->dev, NullBssid, STATION_MODE);
- PRINT_D(CFG80211_DBG, "Disconnecting with reason code(%d)\n", reason_code);
-
p2p_local_random = 0x01;
p2p_recv_random = 0x00;
wilc_ie = false;
@@ -963,7 +872,6 @@
{
s32 s32Error = 0, KeyLen = params->key_len;
- u32 i;
struct wilc_priv *priv;
const u8 *pu8RxMic = NULL;
const u8 *pu8TxMic = NULL;
@@ -978,15 +886,6 @@
vif = netdev_priv(netdev);
wl = vif->wilc;
- PRINT_D(CFG80211_DBG, "Adding key with cipher suite = %x\n", params->cipher);
-
- PRINT_D(CFG80211_DBG, "%p %p %d\n", wiphy, netdev, key_index);
-
- PRINT_D(CFG80211_DBG, "key %x %x %x\n", params->key[0],
- params->key[1],
- params->key[2]);
-
-
switch (params->cipher) {
case WLAN_CIPHER_SUITE_WEP40:
case WLAN_CIPHER_SUITE_WEP104:
@@ -994,12 +893,6 @@
priv->WILC_WFI_wep_key_len[key_index] = params->key_len;
memcpy(priv->WILC_WFI_wep_key[key_index], params->key, params->key_len);
- PRINT_D(CFG80211_DBG, "Adding AP WEP Default key Idx = %d\n", key_index);
- PRINT_D(CFG80211_DBG, "Adding AP WEP Key len= %d\n", params->key_len);
-
- for (i = 0; i < params->key_len; i++)
- PRINT_D(CFG80211_DBG, "WEP AP key val[%d] = %x\n", i, params->key[i]);
-
tenuAuth_type = OPEN_SYSTEM;
if (params->cipher == WLAN_CIPHER_SUITE_WEP40)
@@ -1016,12 +909,6 @@
priv->WILC_WFI_wep_key_len[key_index] = params->key_len;
memcpy(priv->WILC_WFI_wep_key[key_index], params->key, params->key_len);
- PRINT_D(CFG80211_DBG, "Adding WEP Default key Idx = %d\n", key_index);
- PRINT_D(CFG80211_DBG, "Adding WEP Key length = %d\n", params->key_len);
- if (INFO) {
- for (i = 0; i < params->key_len; i++)
- PRINT_INFO(CFG80211_DBG, "WEP key value[%d] = %d\n", i, params->key[i]);
- }
wilc_add_wep_key_bss_sta(vif, params->key,
params->key_len, key_index);
}
@@ -1072,22 +959,12 @@
priv->wilc_gtk[key_index]->key_len = params->key_len;
priv->wilc_gtk[key_index]->seq_len = params->seq_len;
- if (INFO) {
- for (i = 0; i < params->key_len; i++)
- PRINT_INFO(CFG80211_DBG, "Adding group key value[%d] = %x\n", i, params->key[i]);
- for (i = 0; i < params->seq_len; i++)
- PRINT_INFO(CFG80211_DBG, "Adding group seq value[%d] = %x\n", i, params->seq[i]);
- }
-
-
wilc_add_rx_gtk(vif, params->key, KeyLen,
key_index, params->seq_len,
params->seq, pu8RxMic,
pu8TxMic, AP_MODE, u8gmode);
} else {
- PRINT_INFO(CFG80211_DBG, "STA Address: %x%x%x%x%x\n", mac_addr[0], mac_addr[1], mac_addr[2], mac_addr[3], mac_addr[4]);
-
if (params->cipher == WLAN_CIPHER_SUITE_TKIP)
u8pmode = ENCRYPT_ENABLED | WPA | TKIP;
else
@@ -1109,14 +986,6 @@
if ((params->seq_len) > 0)
priv->wilc_ptk[key_index]->seq = kmalloc(params->seq_len, GFP_KERNEL);
- if (INFO) {
- for (i = 0; i < params->key_len; i++)
- PRINT_INFO(CFG80211_DBG, "Adding pairwise key value[%d] = %x\n", i, params->key[i]);
-
- for (i = 0; i < params->seq_len; i++)
- PRINT_INFO(CFG80211_DBG, "Adding group seq value[%d] = %x\n", i, params->seq[i]);
- }
-
memcpy(priv->wilc_ptk[key_index]->key, params->key, params->key_len);
if ((params->seq_len) > 0)
@@ -1160,10 +1029,6 @@
memcpy(g_key_gtk_params.seq, params->seq, params->seq_len);
}
g_key_gtk_params.cipher = params->cipher;
-
- PRINT_D(CFG80211_DBG, "key %x %x %x\n", g_key_gtk_params.key[0],
- g_key_gtk_params.key[1],
- g_key_gtk_params.key[2]);
g_gtk_keys_saved = true;
}
@@ -1197,21 +1062,12 @@
memcpy(g_key_ptk_params.seq, params->seq, params->seq_len);
}
g_key_ptk_params.cipher = params->cipher;
-
- PRINT_D(CFG80211_DBG, "key %x %x %x\n", g_key_ptk_params.key[0],
- g_key_ptk_params.key[1],
- g_key_ptk_params.key[2]);
g_ptk_keys_saved = true;
}
wilc_add_ptk(vif, params->key, KeyLen,
mac_addr, pu8RxMic, pu8TxMic,
STATION_MODE, u8mode, key_index);
- PRINT_D(CFG80211_DBG, "Adding pairwise key\n");
- if (INFO) {
- for (i = 0; i < params->key_len; i++)
- PRINT_INFO(CFG80211_DBG, "Adding pairwise key value[%d] = %d\n", i, params->key[i]);
- }
}
}
break;
@@ -1279,11 +1135,8 @@
if (key_index >= 0 && key_index <= 3) {
memset(priv->WILC_WFI_wep_key[key_index], 0, priv->WILC_WFI_wep_key_len[key_index]);
priv->WILC_WFI_wep_key_len[key_index] = 0;
-
- PRINT_D(CFG80211_DBG, "Removing WEP key with index = %d\n", key_index);
wilc_remove_wep_key(vif, key_index);
} else {
- PRINT_D(CFG80211_DBG, "Removing all installed keys\n");
wilc_remove_key(priv->hif_drv, mac_addr);
}
@@ -1296,26 +1149,17 @@
{
struct wilc_priv *priv;
struct key_params key_params;
- u32 i;
priv = wiphy_priv(wiphy);
if (!pairwise) {
- PRINT_D(CFG80211_DBG, "Getting group key idx: %x\n", key_index);
-
key_params.key = priv->wilc_gtk[key_index]->key;
key_params.cipher = priv->wilc_gtk[key_index]->cipher;
key_params.key_len = priv->wilc_gtk[key_index]->key_len;
key_params.seq = priv->wilc_gtk[key_index]->seq;
key_params.seq_len = priv->wilc_gtk[key_index]->seq_len;
- if (INFO) {
- for (i = 0; i < key_params.key_len; i++)
- PRINT_INFO(CFG80211_DBG, "Retrieved key value %x\n", key_params.key[i]);
- }
} else {
- PRINT_D(CFG80211_DBG, "Getting pairwise key\n");
-
key_params.key = priv->wilc_ptk[key_index]->key;
key_params.cipher = priv->wilc_ptk[key_index]->cipher;
key_params.key_len = priv->wilc_ptk[key_index]->key_len;
@@ -1337,8 +1181,6 @@
priv = wiphy_priv(wiphy);
vif = netdev_priv(priv->dev);
- PRINT_D(CFG80211_DBG, "Setting default key with idx = %d\n", key_index);
-
wilc_set_wep_default_keyid(vif, key_index);
return 0;
@@ -1356,10 +1198,6 @@
vif = netdev_priv(dev);
if (vif->iftype == AP_MODE || vif->iftype == GO_MODE) {
- PRINT_D(HOSTAPD_DBG, "Getting station parameters\n");
-
- PRINT_INFO(HOSTAPD_DBG, ": %x%x%x%x%x\n", mac[0], mac[1], mac[2], mac[3], mac[4]);
-
for (i = 0; i < NUM_STA_ASSOCIATED; i++) {
if (!(memcmp(mac, priv->assoc_stainfo.au8Sta_AssociatedBss[i], ETH_ALEN))) {
associatedsta = i;
@@ -1376,7 +1214,6 @@
wilc_get_inactive_time(vif, mac, &inactive_time);
sinfo->inactive_time = 1000 * inactive_time;
- PRINT_D(CFG80211_DBG, "Inactive time %d\n", sinfo->inactive_time);
}
if (vif->iftype == STATION_MODE) {
@@ -1408,14 +1245,13 @@
static int change_bss(struct wiphy *wiphy, struct net_device *dev,
struct bss_parameters *params)
{
- PRINT_D(CFG80211_DBG, "Changing Bss parametrs\n");
return 0;
}
static int set_wiphy_params(struct wiphy *wiphy, u32 changed)
{
s32 s32Error = 0;
- struct cfg_param_val pstrCfgParamVal;
+ struct cfg_param_attr pstrCfgParamVal;
struct wilc_priv *priv;
struct wilc_vif *vif;
@@ -1423,33 +1259,25 @@
vif = netdev_priv(priv->dev);
pstrCfgParamVal.flag = 0;
- PRINT_D(CFG80211_DBG, "Setting Wiphy params\n");
if (changed & WIPHY_PARAM_RETRY_SHORT) {
- PRINT_D(CFG80211_DBG, "Setting WIPHY_PARAM_RETRY_SHORT %d\n",
- priv->dev->ieee80211_ptr->wiphy->retry_short);
pstrCfgParamVal.flag |= RETRY_SHORT;
pstrCfgParamVal.short_retry_limit = priv->dev->ieee80211_ptr->wiphy->retry_short;
}
if (changed & WIPHY_PARAM_RETRY_LONG) {
- PRINT_D(CFG80211_DBG, "Setting WIPHY_PARAM_RETRY_LONG %d\n", priv->dev->ieee80211_ptr->wiphy->retry_long);
pstrCfgParamVal.flag |= RETRY_LONG;
pstrCfgParamVal.long_retry_limit = priv->dev->ieee80211_ptr->wiphy->retry_long;
}
if (changed & WIPHY_PARAM_FRAG_THRESHOLD) {
- PRINT_D(CFG80211_DBG, "Setting WIPHY_PARAM_FRAG_THRESHOLD %d\n", priv->dev->ieee80211_ptr->wiphy->frag_threshold);
pstrCfgParamVal.flag |= FRAG_THRESHOLD;
pstrCfgParamVal.frag_threshold = priv->dev->ieee80211_ptr->wiphy->frag_threshold;
}
if (changed & WIPHY_PARAM_RTS_THRESHOLD) {
- PRINT_D(CFG80211_DBG, "Setting WIPHY_PARAM_RTS_THRESHOLD %d\n", priv->dev->ieee80211_ptr->wiphy->rts_threshold);
-
pstrCfgParamVal.flag |= RTS_THRESHOLD;
pstrCfgParamVal.rts_threshold = priv->dev->ieee80211_ptr->wiphy->rts_threshold;
}
- PRINT_D(CFG80211_DBG, "Setting CFG params in the host interface\n");
s32Error = wilc_hif_set_cfg(vif, &pstrCfgParamVal);
if (s32Error)
netdev_err(priv->dev, "Error in setting WIPHY PARAMS\n");
@@ -1467,19 +1295,16 @@
struct wilc_priv *priv = wiphy_priv(wiphy);
vif = netdev_priv(priv->dev);
- PRINT_D(CFG80211_DBG, "Setting PMKSA\n");
for (i = 0; i < priv->pmkid_list.numpmkid; i++) {
if (!memcmp(pmksa->bssid, priv->pmkid_list.pmkidlist[i].bssid,
ETH_ALEN)) {
flag = PMKID_FOUND;
- PRINT_D(CFG80211_DBG, "PMKID already exists\n");
break;
}
}
if (i < WILC_MAX_NUM_PMKIDS) {
- PRINT_D(CFG80211_DBG, "Setting PMKID in private structure\n");
memcpy(priv->pmkid_list.pmkidlist[i].bssid, pmksa->bssid,
ETH_ALEN);
memcpy(priv->pmkid_list.pmkidlist[i].pmkid, pmksa->pmkid,
@@ -1491,10 +1316,9 @@
s32Error = -EINVAL;
}
- if (!s32Error) {
- PRINT_D(CFG80211_DBG, "Setting pmkid in the host interface\n");
+ if (!s32Error)
s32Error = wilc_set_pmkid_info(vif, &priv->pmkid_list);
- }
+
return s32Error;
}
@@ -1506,12 +1330,9 @@
struct wilc_priv *priv = wiphy_priv(wiphy);
- PRINT_D(CFG80211_DBG, "Deleting PMKSA keys\n");
-
for (i = 0; i < priv->pmkid_list.numpmkid; i++) {
if (!memcmp(pmksa->bssid, priv->pmkid_list.pmkidlist[i].bssid,
ETH_ALEN)) {
- PRINT_D(CFG80211_DBG, "Reseting PMKID values\n");
memset(&priv->pmkid_list.pmkidlist[i], 0, sizeof(struct host_if_pmkid));
break;
}
@@ -1538,8 +1359,6 @@
{
struct wilc_priv *priv = wiphy_priv(wiphy);
- PRINT_D(CFG80211_DBG, "Flushing PMKID key values\n");
-
memset(&priv->pmkid_list, 0, sizeof(struct host_if_pmkid_attr));
return 0;
@@ -1795,8 +1614,6 @@
priv = wiphy_priv(wiphy);
vif = netdev_priv(priv->dev);
- PRINT_D(CFG80211_DBG, "Cancel remain on channel\n");
-
s32Error = wilc_listen_state_expired(vif, priv->strRemainOnChanParams.u32ListenSessionID);
return s32Error;
}
@@ -1979,7 +1796,6 @@
static int set_cqm_rssi_config(struct wiphy *wiphy, struct net_device *dev,
s32 rssi_thold, u32 rssi_hyst)
{
- PRINT_D(CFG80211_DBG, "Setting CQM RSSi Function\n");
return 0;
}
@@ -1989,8 +1805,6 @@
struct wilc_priv *priv;
struct wilc_vif *vif;
- PRINT_D(CFG80211_DBG, "Dumping station information\n");
-
if (idx != 0)
return -ENOENT;
@@ -2010,8 +1824,6 @@
struct wilc_priv *priv;
struct wilc_vif *vif;
- PRINT_D(CFG80211_DBG, " Power save Enabled= %d , TimeOut = %d\n", enabled, timeout);
-
if (!wiphy)
return -ENOENT;
@@ -2037,9 +1849,6 @@
vif = netdev_priv(dev);
priv = wiphy_priv(wiphy);
wl = vif->wilc;
-
- PRINT_D(HOSTAPD_DBG, "In Change virtual interface function\n");
- PRINT_D(HOSTAPD_DBG, "Wireless interface name =%s\n", dev->name);
p2p_local_random = 0x01;
p2p_recv_random = 0x00;
wilc_ie = false;
@@ -2049,8 +1858,6 @@
switch (type) {
case NL80211_IFTYPE_STATION:
wilc_connecting = 0;
- PRINT_D(HOSTAPD_DBG, "Interface type = NL80211_IFTYPE_STATION\n");
-
dev->ieee80211_ptr->iftype = type;
priv->wdev->iftype = type;
vif->monitor_flag = 0;
@@ -2065,8 +1872,6 @@
case NL80211_IFTYPE_P2P_CLIENT:
wilc_connecting = 0;
- PRINT_D(HOSTAPD_DBG, "Interface type = NL80211_IFTYPE_P2P_CLIENT\n");
-
dev->ieee80211_ptr->iftype = type;
priv->wdev->iftype = type;
vif->monitor_flag = 0;
@@ -2079,7 +1884,6 @@
case NL80211_IFTYPE_AP:
wilc_enable_ps = false;
- PRINT_D(HOSTAPD_DBG, "Interface type = NL80211_IFTYPE_AP %d\n", type);
dev->ieee80211_ptr->iftype = type;
priv->wdev->iftype = type;
vif->iftype = AP_MODE;
@@ -2096,8 +1900,6 @@
wilc_optaining_ip = true;
mod_timer(&wilc_during_ip_timer,
jiffies + msecs_to_jiffies(during_ip_time));
- PRINT_D(HOSTAPD_DBG, "Interface type = NL80211_IFTYPE_GO\n");
-
wilc_set_operation_mode(vif, AP_MODE);
dev->ieee80211_ptr->iftype = type;
priv->wdev->iftype = type;
@@ -2126,11 +1928,7 @@
priv = wiphy_priv(wiphy);
vif = netdev_priv(dev);
- wl = vif ->wilc;
- PRINT_D(HOSTAPD_DBG, "Starting ap\n");
-
- PRINT_D(HOSTAPD_DBG, "Interval = %d\n DTIM period = %d\n Head length = %zu Tail length = %zu\n",
- settings->beacon_interval, settings->dtim_period, beacon->head_len, beacon->tail_len);
+ wl = vif->wilc;
s32Error = set_channel(wiphy, &settings->chandef);
@@ -2157,8 +1955,6 @@
priv = wiphy_priv(wiphy);
vif = netdev_priv(priv->dev);
- PRINT_D(HOSTAPD_DBG, "Setting beacon\n");
-
s32Error = wilc_add_beacon(vif, 0, 0, beacon->head_len,
(u8 *)beacon->head, beacon->tail_len,
@@ -2180,8 +1976,6 @@
priv = wiphy_priv(wiphy);
vif = netdev_priv(priv->dev);
- PRINT_D(HOSTAPD_DBG, "Deleting beacon\n");
-
wilc_wlan_set_bssid(dev, NullBssid, AP_MODE);
s32Error = wilc_del_beacon(vif);
@@ -2213,14 +2007,6 @@
strStaParams.rates_len = params->supported_rates_len;
strStaParams.rates = params->supported_rates;
- PRINT_D(CFG80211_DBG, "Adding station parameters %d\n", params->aid);
-
- PRINT_D(CFG80211_DBG, "BSSID = %x%x%x%x%x%x\n", priv->assoc_stainfo.au8Sta_AssociatedBss[params->aid][0], priv->assoc_stainfo.au8Sta_AssociatedBss[params->aid][1], priv->assoc_stainfo.au8Sta_AssociatedBss[params->aid][2], priv->assoc_stainfo.au8Sta_AssociatedBss[params->aid][3], priv->assoc_stainfo.au8Sta_AssociatedBss[params->aid][4],
- priv->assoc_stainfo.au8Sta_AssociatedBss[params->aid][5]);
- PRINT_D(HOSTAPD_DBG, "ASSOC ID = %d\n", strStaParams.aid);
- PRINT_D(HOSTAPD_DBG, "Number of supported rates = %d\n",
- strStaParams.rates_len);
-
if (!params->ht_capa) {
strStaParams.ht_supported = false;
} else {
@@ -2238,23 +2024,6 @@
strStaParams.flags_mask = params->sta_flags_mask;
strStaParams.flags_set = params->sta_flags_set;
- PRINT_D(HOSTAPD_DBG, "IS HT supported = %d\n",
- strStaParams.ht_supported);
- PRINT_D(HOSTAPD_DBG, "Capability Info = %d\n",
- strStaParams.ht_capa_info);
- PRINT_D(HOSTAPD_DBG, "AMPDU Params = %d\n",
- strStaParams.ht_ampdu_params);
- PRINT_D(HOSTAPD_DBG, "HT Extended params = %d\n",
- strStaParams.ht_ext_params);
- PRINT_D(HOSTAPD_DBG, "Tx Beamforming Cap = %d\n",
- strStaParams.ht_tx_bf_cap);
- PRINT_D(HOSTAPD_DBG, "Antenna selection info = %d\n",
- strStaParams.ht_ante_sel);
- PRINT_D(HOSTAPD_DBG, "Flag Mask = %d\n",
- strStaParams.flags_mask);
- PRINT_D(HOSTAPD_DBG, "Flag Set = %d\n",
- strStaParams.flags_set);
-
s32Error = wilc_add_station(vif, &strStaParams);
if (s32Error)
netdev_err(dev, "Host add station fail\n");
@@ -2278,16 +2047,9 @@
vif = netdev_priv(dev);
if (vif->iftype == AP_MODE || vif->iftype == GO_MODE) {
- PRINT_D(HOSTAPD_DBG, "Deleting station\n");
-
-
- if (!mac) {
- PRINT_D(HOSTAPD_DBG, "All associated stations\n");
+ if (!mac)
s32Error = wilc_del_allstation(vif,
priv->assoc_stainfo.au8Sta_AssociatedBss);
- } else {
- PRINT_D(HOSTAPD_DBG, "With mac address: %x%x%x%x%x%x\n", mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
- }
s32Error = wilc_del_station(vif, mac);
@@ -2305,9 +2067,6 @@
struct add_sta_param strStaParams = { {0} };
struct wilc_vif *vif;
-
- PRINT_D(HOSTAPD_DBG, "Change station paramters\n");
-
if (!wiphy)
return -EFAULT;
@@ -2320,14 +2079,6 @@
strStaParams.rates_len = params->supported_rates_len;
strStaParams.rates = params->supported_rates;
- PRINT_D(HOSTAPD_DBG, "BSSID = %x%x%x%x%x%x\n",
- strStaParams.bssid[0], strStaParams.bssid[1],
- strStaParams.bssid[2], strStaParams.bssid[3],
- strStaParams.bssid[4], strStaParams.bssid[5]);
- PRINT_D(HOSTAPD_DBG, "ASSOC ID = %d\n", strStaParams.aid);
- PRINT_D(HOSTAPD_DBG, "Number of supported rates = %d\n",
- strStaParams.rates_len);
-
if (!params->ht_capa) {
strStaParams.ht_supported = false;
} else {
@@ -2345,23 +2096,6 @@
strStaParams.flags_mask = params->sta_flags_mask;
strStaParams.flags_set = params->sta_flags_set;
- PRINT_D(HOSTAPD_DBG, "IS HT supported = %d\n",
- strStaParams.ht_supported);
- PRINT_D(HOSTAPD_DBG, "Capability Info = %d\n",
- strStaParams.ht_capa_info);
- PRINT_D(HOSTAPD_DBG, "AMPDU Params = %d\n",
- strStaParams.ht_ampdu_params);
- PRINT_D(HOSTAPD_DBG, "HT Extended params = %d\n",
- strStaParams.ht_ext_params);
- PRINT_D(HOSTAPD_DBG, "Tx Beamforming Cap = %d\n",
- strStaParams.ht_tx_bf_cap);
- PRINT_D(HOSTAPD_DBG, "Antenna selection info = %d\n",
- strStaParams.ht_ante_sel);
- PRINT_D(HOSTAPD_DBG, "Flag Mask = %d\n",
- strStaParams.flags_mask);
- PRINT_D(HOSTAPD_DBG, "Flag Set = %d\n",
- strStaParams.flags_set);
-
s32Error = wilc_edit_station(vif, &strStaParams);
if (s32Error)
netdev_err(dev, "Host edit station fail\n");
@@ -2381,20 +2115,12 @@
struct net_device *new_ifc = NULL;
priv = wiphy_priv(wiphy);
-
-
-
- PRINT_D(HOSTAPD_DBG, "Adding monitor interface[%p]\n", priv->wdev->netdev);
-
vif = netdev_priv(priv->wdev->netdev);
if (type == NL80211_IFTYPE_MONITOR) {
- PRINT_D(HOSTAPD_DBG, "Monitor interface mode: Initializing mon interface virtual device driver\n");
- PRINT_D(HOSTAPD_DBG, "Adding monitor interface[%p]\n", vif->ndev);
new_ifc = WILC_WFI_init_mon_interface(name, vif->ndev);
if (new_ifc) {
- PRINT_D(HOSTAPD_DBG, "Setting monitor flag in private structure\n");
vif = netdev_priv(priv->wdev->netdev);
vif->monitor_flag = 1;
}
@@ -2404,7 +2130,6 @@
static int del_virtual_intf(struct wiphy *wiphy, struct wireless_dev *wdev)
{
- PRINT_D(HOSTAPD_DBG, "Deleting virtual interface\n");
return 0;
}
@@ -2514,42 +2239,10 @@
};
-int WILC_WFI_update_stats(struct wiphy *wiphy, u32 pktlen, u8 changed)
-{
- struct wilc_priv *priv;
-
- priv = wiphy_priv(wiphy);
- switch (changed) {
- case WILC_WFI_RX_PKT:
- {
- priv->netstats.rx_packets++;
- priv->netstats.rx_bytes += pktlen;
- priv->netstats.rx_time = get_jiffies_64();
- }
- break;
-
- case WILC_WFI_TX_PKT:
- {
- priv->netstats.tx_packets++;
- priv->netstats.tx_bytes += pktlen;
- priv->netstats.tx_time = get_jiffies_64();
-
- }
- break;
-
- default:
- break;
- }
- return 0;
-}
-
static struct wireless_dev *WILC_WFI_CfgAlloc(void)
{
struct wireless_dev *wdev;
-
- PRINT_D(CFG80211_DBG, "Allocating wireless device\n");
-
wdev = kzalloc(sizeof(struct wireless_dev), GFP_KERNEL);
if (!wdev)
goto _fail_;
@@ -2580,8 +2273,6 @@
struct wireless_dev *wdev;
s32 s32Error = 0;
- PRINT_D(CFG80211_DBG, "Registering wifi device\n");
-
wdev = WILC_WFI_CfgAlloc();
if (!wdev) {
netdev_err(net, "wiphy new allocate failed\n");
@@ -2596,8 +2287,6 @@
wdev->wiphy->wowlan = &wowlan_support;
#endif
wdev->wiphy->max_num_pmkids = WILC_MAX_NUM_PMKIDS;
- PRINT_INFO(CFG80211_DBG, "Max number of PMKIDs = %d\n", wdev->wiphy->max_num_pmkids);
-
wdev->wiphy->max_scan_ie_len = 1000;
wdev->wiphy->signal_type = CFG80211_SIGNAL_TYPE_MBM;
wdev->wiphy->cipher_suites = cipher_suites;
@@ -2610,19 +2299,11 @@
wdev->wiphy->flags |= WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL;
wdev->iftype = NL80211_IFTYPE_STATION;
-
-
- PRINT_INFO(CFG80211_DBG, "Max scan ids = %d,Max scan IE len = %d,Signal Type = %d,Interface Modes = %d,Interface Type = %d\n",
- wdev->wiphy->max_scan_ssids, wdev->wiphy->max_scan_ie_len, wdev->wiphy->signal_type,
- wdev->wiphy->interface_modes, wdev->iftype);
-
set_wiphy_dev(wdev->wiphy, dev);
s32Error = wiphy_register(wdev->wiphy);
if (s32Error)
netdev_err(net, "Cannot register wiphy device\n");
- else
- PRINT_D(CFG80211_DBG, "Successful Registering\n");
priv->dev = net;
return wdev;
@@ -2634,7 +2315,6 @@
struct wilc_priv *priv;
- PRINT_D(INIT_DBG, "Host[%p][%p]\n", net, net->ieee80211_ptr);
priv = wdev_priv(net->ieee80211_ptr);
if (op_ifcs == 0) {
setup_timer(&hAgingTimer, remove_network_from_shadow, 0);
@@ -2683,26 +2363,17 @@
void wilc_free_wiphy(struct net_device *net)
{
- PRINT_D(CFG80211_DBG, "Unregistering wiphy\n");
-
- if (!net) {
- PRINT_D(INIT_DBG, "net_device is NULL\n");
+ if (!net)
return;
- }
- if (!net->ieee80211_ptr) {
- PRINT_D(INIT_DBG, "ieee80211_ptr is NULL\n");
+ if (!net->ieee80211_ptr)
return;
- }
- if (!net->ieee80211_ptr->wiphy) {
- PRINT_D(INIT_DBG, "wiphy is NULL\n");
+ if (!net->ieee80211_ptr->wiphy)
return;
- }
wiphy_unregister(net->ieee80211_ptr->wiphy);
- PRINT_D(INIT_DBG, "Freeing wiphy\n");
wiphy_free(net->ieee80211_ptr->wiphy);
kfree(net->ieee80211_ptr);
}
diff --git a/drivers/staging/wilc1000/wilc_wfi_cfgoperations.h b/drivers/staging/wilc1000/wilc_wfi_cfgoperations.h
index ab53d9d..85a3810 100644
--- a/drivers/staging/wilc1000/wilc_wfi_cfgoperations.h
+++ b/drivers/staging/wilc1000/wilc_wfi_cfgoperations.h
@@ -12,7 +12,6 @@
struct wireless_dev *wilc_create_wiphy(struct net_device *net, struct device *dev);
void wilc_free_wiphy(struct net_device *net);
-int WILC_WFI_update_stats(struct wiphy *wiphy, u32 pktlen, u8 changed);
int wilc_deinit_host_int(struct net_device *net);
int wilc_init_host_int(struct net_device *net);
void WILC_WFI_monitor_rx(u8 *buff, u32 size);
diff --git a/drivers/staging/wilc1000/wilc_wfi_netdevice.h b/drivers/staging/wilc1000/wilc_wfi_netdevice.h
index 3077f5d4..4123cff 100644
--- a/drivers/staging/wilc1000/wilc_wfi_netdevice.h
+++ b/drivers/staging/wilc1000/wilc_wfi_netdevice.h
@@ -35,8 +35,6 @@
#include <linux/skbuff.h>
#include <linux/ieee80211.h>
#include <net/cfg80211.h>
-#include <linux/ieee80211.h>
-#include <net/cfg80211.h>
#include <net/ieee80211_radiotap.h>
#include <linux/if_arp.h>
#include <linux/in6.h>
@@ -228,8 +226,6 @@
void wilc_frmw_to_linux(struct wilc *wilc, u8 *buff, u32 size, u32 pkt_offset);
void wilc_mac_indicate(struct wilc *wilc, int flag);
-void wilc_dbg(u8 *buff);
-
int wilc_lock_timeout(struct wilc *wilc, void *, u32 timeout);
void wilc_netdev_cleanup(struct wilc *wilc);
int wilc_netdev_init(struct wilc **wilc, struct device *, int io_type, int gpio,
diff --git a/drivers/staging/wilc1000/wilc_wlan.c b/drivers/staging/wilc1000/wilc_wlan.c
index a396ac9..0020620 100644
--- a/drivers/staging/wilc1000/wilc_wlan.c
+++ b/drivers/staging/wilc1000/wilc_wlan.c
@@ -3,23 +3,6 @@
#include "wilc_wfi_netdevice.h"
#include "wilc_wlan_cfg.h"
-static u32 dbgflag = N_INIT | N_ERR | N_INTR | N_TXQ | N_RXQ;
-
-/* FIXME: replace with dev_debug() */
-static void wilc_debug(u32 flag, char *fmt, ...)
-{
- char buf[256];
- va_list args;
-
- if (flag & dbgflag) {
- va_start(args, fmt);
- vsprintf(buf, fmt, args);
- va_end(args);
-
- wilc_dbg(buf);
- }
-}
-
static CHIP_PS_STATE_T chip_ps_state = CHIP_WAKEDUP;
static inline void acquire_bus(struct wilc *wilc, BUS_ACQUIRE_T acquire)
@@ -38,7 +21,6 @@
static void wilc_wlan_txq_remove(struct wilc *wilc, struct txq_entry_t *tqe)
{
-
if (tqe == wilc->txq_head) {
wilc->txq_head = tqe->next;
if (wilc->txq_head)
@@ -157,7 +139,6 @@
struct txq_entry_t *txqe;
};
-
#define NOT_TCP_ACK (-1)
#define MAX_TCP_SESSION 25
@@ -207,9 +188,8 @@
return 0;
}
-static inline int tcp_process(struct net_device *dev, struct txq_entry_t *tqe)
+static inline void tcp_process(struct net_device *dev, struct txq_entry_t *tqe)
{
- int ret;
u8 *eth_hdr_ptr;
u8 *buffer = tqe->buffer;
unsigned short h_proto;
@@ -266,15 +246,9 @@
add_tcp_pending_ack(ack_no, i, tqe);
}
-
- } else {
- ret = 0;
}
- } else {
- ret = 0;
}
spin_unlock_irqrestore(&wilc->txq_spinlock, flags);
- return ret;
}
static int wilc_wlan_txq_filter_dup_tcp_ack(struct net_device *dev)
@@ -325,7 +299,7 @@
return 1;
}
-static bool enabled = false;
+static bool enabled;
void wilc_enable_tcp_ack_filter(bool value)
{
@@ -452,7 +426,6 @@
static int wilc_wlan_rxq_add(struct wilc *wilc, struct rxq_entry_t *rqe)
{
-
if (wilc->quit)
return 0;
@@ -473,7 +446,6 @@
static struct rxq_entry_t *wilc_wlan_rxq_remove(struct wilc *wilc)
{
-
if (wilc->rxq_head) {
struct rxq_entry_t *rqe;
@@ -500,7 +472,7 @@
void chip_wakeup(struct wilc *wilc)
{
- u32 reg, clk_status_reg, trials = 0;
+ u32 reg, clk_status_reg;
if ((wilc->io_type & 0x1) == HIF_SPI) {
do {
@@ -510,11 +482,8 @@
do {
usleep_range(2 * 1000, 2 * 1000);
- if ((wilc_get_chipid(wilc, true) == 0))
- wilc_debug(N_ERR, "Couldn't read chip id. Wake up failed\n");
-
- } while ((wilc_get_chipid(wilc, true) == 0) && ((++trials % 3) == 0));
-
+ wilc_get_chipid(wilc, true);
+ } while (wilc_get_chipid(wilc, true) == 0);
} while (wilc_get_chipid(wilc, true) == 0);
} else if ((wilc->io_type & 0x1) == HIF_SDIO) {
wilc->hif_func->hif_write_reg(wilc, 0xfa, 1);
@@ -526,14 +495,11 @@
wilc->hif_func->hif_read_reg(wilc, 0xf1,
&clk_status_reg);
- while (((clk_status_reg & 0x1) == 0) && (((++trials) % 3) == 0)) {
+ while ((clk_status_reg & 0x1) == 0) {
usleep_range(2 * 1000, 2 * 1000);
wilc->hif_func->hif_read_reg(wilc, 0xf1,
&clk_status_reg);
-
- if ((clk_status_reg & 0x1) == 0)
- wilc_debug(N_ERR, "clocks still OFF. Wake up failed\n");
}
if ((clk_status_reg & 0x1) == 0) {
wilc->hif_func->hif_write_reg(wilc, 0xf0,
@@ -1028,7 +994,6 @@
ret = -EIO;
goto _fail_;
}
- PRINT_D(INIT_DBG, "Offset = %d\n", offset);
} while (offset < buffer_size);
_fail_:
@@ -1117,12 +1082,6 @@
return (ret < 0) ? ret : 0;
}
-void wilc_wlan_global_reset(struct wilc *wilc)
-{
- acquire_bus(wilc, ACQUIRE_AND_WAKEUP);
- wilc->hif_func->hif_write_reg(wilc, WILC_GLB_RESET_0, 0x0);
- release_bus(wilc, RELEASE_ONLY);
-}
int wilc_wlan_stop(struct wilc *wilc)
{
u32 reg = 0;
@@ -1342,11 +1301,7 @@
int wilc_wlan_cfg_get_val(u32 wid, u8 *buffer, u32 buffer_size)
{
- int ret;
-
- ret = wilc_wlan_cfg_get_wid_value((u16)wid, buffer, buffer_size);
-
- return ret;
+ return wilc_wlan_cfg_get_wid_value((u16)wid, buffer, buffer_size);
}
int wilc_send_config_pkt(struct wilc_vif *vif, u8 mode, struct wid *wids,
@@ -1404,18 +1359,18 @@
if ((chipid & 0xfff) != 0xa0) {
ret = wilc->hif_func->hif_read_reg(wilc, 0x1118, &reg);
if (!ret) {
- wilc_debug(N_ERR, "[wilc start]: fail read reg 0x1118 ...\n");
+ netdev_err(dev, "fail read reg 0x1118\n");
return ret;
}
reg |= BIT(0);
ret = wilc->hif_func->hif_write_reg(wilc, 0x1118, reg);
if (!ret) {
- wilc_debug(N_ERR, "[wilc start]: fail write reg 0x1118 ...\n");
+ netdev_err(dev, "fail write reg 0x1118\n");
return ret;
}
ret = wilc->hif_func->hif_write_reg(wilc, 0xc0000, 0x71);
if (!ret) {
- wilc_debug(N_ERR, "[wilc start]: fail write reg 0xc0000 ...\n");
+ netdev_err(dev, "fail write reg 0xc0000\n");
return ret;
}
}
@@ -1461,8 +1416,6 @@
wilc = vif->wilc;
- PRINT_D(INIT_DBG, "Initializing WILC_Wlan ...\n");
-
wilc->quit = 0;
if (!wilc->hif_func->hif_init(wilc, false)) {
@@ -1470,7 +1423,7 @@
goto _fail_;
}
- if (!wilc_wlan_cfg_init(wilc_debug)) {
+ if (!wilc_wlan_cfg_init()) {
ret = -ENOBUFS;
goto _fail_;
}
@@ -1480,7 +1433,6 @@
if (!wilc->tx_buffer) {
ret = -ENOBUFS;
- PRINT_ER("Can't allocate Tx Buffer");
goto _fail_;
}
@@ -1489,7 +1441,6 @@
if (!wilc->rx_buffer) {
ret = -ENOBUFS;
- PRINT_ER("Can't allocate Rx Buffer");
goto _fail_;
}
diff --git a/drivers/staging/wilc1000/wilc_wlan.h b/drivers/staging/wilc1000/wilc_wlan.h
index 06d02ab..bcd4bfa 100644
--- a/drivers/staging/wilc1000/wilc_wlan.h
+++ b/drivers/staging/wilc1000/wilc_wlan.h
@@ -128,6 +128,11 @@
#define WILC_PLL_TO_SPI 2
#define ABORT_INT BIT(31)
+#define LINUX_RX_SIZE (96 * 1024)
+#define LINUX_TX_SIZE (64 * 1024)
+
+#define MODALIAS "WILC_SPI"
+#define GPIO_NUM 0x44
/*******************************************/
/* E0 and later Interrupt flags. */
/*******************************************/
diff --git a/drivers/staging/wilc1000/wilc_wlan_cfg.c b/drivers/staging/wilc1000/wilc_wlan_cfg.c
index 2bb684a..b992243 100644
--- a/drivers/staging/wilc1000/wilc_wlan_cfg.c
+++ b/drivers/staging/wilc1000/wilc_wlan_cfg.c
@@ -19,9 +19,7 @@
*
********************************************/
-typedef struct {
- wilc_debug_func dPrint;
-
+struct wilc_mac_cfg {
int mac_status;
u8 mac_address[7];
u8 ip_address[5];
@@ -40,11 +38,11 @@
u8 firmware_info[8];
u8 scan_result[256];
u8 scan_result1[256];
-} wilc_mac_cfg_t;
+};
-static wilc_mac_cfg_t g_mac;
+static struct wilc_mac_cfg g_mac;
-static wilc_cfg_byte_t g_cfg_byte[] = {
+static struct wilc_cfg_byte g_cfg_byte[] = {
{WID_BSS_TYPE, 0},
{WID_CURRENT_TX_RATE, 0},
{WID_CURRENT_CHANNEL, 0},
@@ -87,7 +85,7 @@
{WID_NIL, 0}
};
-static wilc_cfg_hword_t g_cfg_hword[] = {
+static struct wilc_cfg_hword g_cfg_hword[] = {
{WID_LINK_LOSS_THRESHOLD, 0},
{WID_RTS_THRESHOLD, 0},
{WID_FRAG_THRESHOLD, 0},
@@ -108,7 +106,7 @@
{WID_NIL, 0}
};
-static wilc_cfg_word_t g_cfg_word[] = {
+static struct wilc_cfg_word g_cfg_word[] = {
{WID_FAILED_COUNT, 0},
{WID_RETRY_COUNT, 0},
{WID_MULTIPLE_RETRY_COUNT, 0},
@@ -131,7 +129,7 @@
};
-static wilc_cfg_str_t g_cfg_str[] = {
+static struct wilc_cfg_str g_cfg_str[] = {
{WID_SSID, g_mac.ssid}, /* 33 + 1 bytes */
{WID_FIRMWARE_VERSION, g_mac.firmware_version},
{WID_OPERATIONAL_RATE_SET, g_mac.supp_rate},
@@ -349,7 +347,7 @@
static int wilc_wlan_parse_info_frame(u8 *info, int size)
{
- wilc_mac_cfg_t *pd = &g_mac;
+ struct wilc_mac_cfg *pd = &g_mac;
u32 wid, len;
int type = WILC_CFG_RSP_STATUS;
@@ -389,8 +387,6 @@
ret = wilc_wlan_cfg_set_str(frame, offset, id, buf, size);
} else if (type == 4) { /* binary command */
ret = wilc_wlan_cfg_set_bin(frame, offset, id, buf, size);
- } else {
- g_mac.dPrint(N_ERR, "illegal id\n");
}
return ret;
@@ -481,8 +477,6 @@
}
i++;
} while (1);
- } else {
- g_mac.dPrint(N_ERR, "[CFG]: illegal type (%08x)\n", wid);
}
return ret;
@@ -537,9 +531,8 @@
return ret;
}
-int wilc_wlan_cfg_init(wilc_debug_func func)
+int wilc_wlan_cfg_init(void)
{
- memset((void *)&g_mac, 0, sizeof(wilc_mac_cfg_t));
- g_mac.dPrint = func;
+ memset((void *)&g_mac, 0, sizeof(struct wilc_mac_cfg));
return 1;
}
diff --git a/drivers/staging/wilc1000/wilc_wlan_cfg.h b/drivers/staging/wilc1000/wilc_wlan_cfg.h
index 5f74eb8..b8641a2 100644
--- a/drivers/staging/wilc1000/wilc_wlan_cfg.h
+++ b/drivers/staging/wilc1000/wilc_wlan_cfg.h
@@ -10,25 +10,25 @@
#ifndef WILC_WLAN_CFG_H
#define WILC_WLAN_CFG_H
-typedef struct {
+struct wilc_cfg_byte {
u16 id;
u16 val;
-} wilc_cfg_byte_t;
+};
-typedef struct {
+struct wilc_cfg_hword {
u16 id;
u16 val;
-} wilc_cfg_hword_t;
+};
-typedef struct {
+struct wilc_cfg_word {
u32 id;
u32 val;
-} wilc_cfg_word_t;
+};
-typedef struct {
+struct wilc_cfg_str {
u32 id;
u8 *str;
-} wilc_cfg_str_t;
+};
struct wilc;
int wilc_wlan_cfg_set_wid(u8 *frame, u32 offset, u16 id, u8 *buf, int size);
@@ -36,6 +36,6 @@
int wilc_wlan_cfg_get_wid_value(u16 wid, u8 *buffer, u32 buffer_size);
int wilc_wlan_cfg_indicate_rx(struct wilc *wilc, u8 *frame, int size,
struct wilc_cfg_rsp *rsp);
-int wilc_wlan_cfg_init(wilc_debug_func func);
+int wilc_wlan_cfg_init(void);
#endif
diff --git a/drivers/staging/wilc1000/wilc_wlan_if.h b/drivers/staging/wilc1000/wilc_wlan_if.h
index 269c56e..fbe34eb 100644
--- a/drivers/staging/wilc1000/wilc_wlan_if.h
+++ b/drivers/staging/wilc1000/wilc_wlan_if.h
@@ -11,7 +11,6 @@
#define WILC_WLAN_IF_H
#include <linux/semaphore.h>
-#include "linux_wlan_common.h"
#include <linux/netdevice.h>
/********************************************
@@ -95,7 +94,7 @@
* Wlan Configuration ID
*
********************************************/
-
+#define WILC_MULTICAST_TABLE_SIZE 8
#define MAX_SSID_LEN 33
#define MAX_RATES_SUPPORTED 12
diff --git a/drivers/staging/wlan-ng/hfa384x.h b/drivers/staging/wlan-ng/hfa384x.h
index 8dfe438..cec6d0b 100644
--- a/drivers/staging/wlan-ng/hfa384x.h
+++ b/drivers/staging/wlan-ng/hfa384x.h
@@ -1360,7 +1360,6 @@
int
hfa384x_corereset(hfa384x_t *hw, int holdtime, int settletime, int genesis);
-int hfa384x_drvr_commtallies(hfa384x_t *hw);
int hfa384x_drvr_disable(hfa384x_t *hw, u16 macport);
int hfa384x_drvr_enable(hfa384x_t *hw, u16 macport);
int hfa384x_drvr_flashdl_enable(hfa384x_t *hw);
@@ -1391,10 +1390,6 @@
}
int
-hfa384x_drvr_getconfig_async(hfa384x_t *hw,
- u16 rid, ctlx_usercb_t usercb, void *usercb_data);
-
-int
hfa384x_drvr_setconfig_async(hfa384x_t *hw,
u16 rid,
void *buf,
diff --git a/drivers/staging/wlan-ng/hfa384x_usb.c b/drivers/staging/wlan-ng/hfa384x_usb.c
index fda8a95..09b4b2c 100644
--- a/drivers/staging/wlan-ng/hfa384x_usb.c
+++ b/drivers/staging/wlan-ng/hfa384x_usb.c
@@ -213,8 +213,6 @@
static void hfa384x_cb_status(hfa384x_t *hw, const hfa384x_usbctlx_t *ctlx);
-static void hfa384x_cb_rrid(hfa384x_t *hw, const hfa384x_usbctlx_t *ctlx);
-
static int
usbctlx_get_status(const hfa384x_usb_cmdresp_t *cmdresp,
hfa384x_cmdresult_t *result);
@@ -816,43 +814,6 @@
}
}
-/*----------------------------------------------------------------
-* hfa384x_cb_rrid
-*
-* CTLX completion handler for async RRID type control exchanges.
-*
-* Note: If the handling is changed here, it should probably be
-* changed in dorrid as well.
-*
-* Arguments:
-* hw hw struct
-* ctlx completed CTLX
-*
-* Returns:
-* nothing
-*
-* Side effects:
-*
-* Call context:
-* interrupt
-----------------------------------------------------------------*/
-static void hfa384x_cb_rrid(hfa384x_t *hw, const hfa384x_usbctlx_t *ctlx)
-{
- if (ctlx->usercb != NULL) {
- hfa384x_rridresult_t rridresult;
-
- if (ctlx->state != CTLX_COMPLETE) {
- memset(&rridresult, 0, sizeof(rridresult));
- rridresult.rid = le16_to_cpu(ctlx->outbuf.rridreq.rid);
- } else {
- usbctlx_get_rridresult(&ctlx->inbuf.rridresp,
- &rridresult);
- }
-
- ctlx->usercb(hw, &rridresult, ctlx->usercb_data);
- }
-}
-
static inline int hfa384x_docmd_wait(hfa384x_t *hw, hfa384x_metacmd_t *cmd)
{
return hfa384x_docmd(hw, DOWAIT, cmd, NULL, NULL, NULL);
@@ -1735,37 +1696,6 @@
}
/*----------------------------------------------------------------
-* hfa384x_drvr_commtallies
-*
-* Send a commtallies inquiry to the MAC. Note that this is an async
-* call that will result in an info frame arriving sometime later.
-*
-* Arguments:
-* hw device structure
-*
-* Returns:
-* zero success.
-*
-* Side effects:
-*
-* Call context:
-* process
-----------------------------------------------------------------*/
-int hfa384x_drvr_commtallies(hfa384x_t *hw)
-{
- hfa384x_metacmd_t cmd;
-
- cmd.cmd = HFA384x_CMDCODE_INQ;
- cmd.parm0 = HFA384x_IT_COMMTALLIES;
- cmd.parm1 = 0;
- cmd.parm2 = 0;
-
- hfa384x_docmd_async(hw, &cmd, NULL, NULL, NULL);
-
- return 0;
-}
-
-/*----------------------------------------------------------------
* hfa384x_drvr_disable
*
* Issues the disable command to stop communications on one of
@@ -2110,41 +2040,6 @@
}
/*----------------------------------------------------------------
- * hfa384x_drvr_getconfig_async
- *
- * Performs the sequence necessary to perform an async read of
- * of a config/info item.
- *
- * Arguments:
- * hw device structure
- * rid config/info record id (host order)
- * buf host side record buffer. Upon return it will
- * contain the body portion of the record (minus the
- * RID and len).
- * len buffer length (in bytes, should match record length)
- * cbfn caller supplied callback, called when the command
- * is done (successful or not).
- * cbfndata pointer to some caller supplied data that will be
- * passed in as an argument to the cbfn.
- *
- * Returns:
- * nothing the cbfn gets a status argument identifying if
- * any errors occur.
- * Side effects:
- * Queues an hfa384x_usbcmd_t for subsequent execution.
- *
- * Call context:
- * Any
- ----------------------------------------------------------------*/
-int
-hfa384x_drvr_getconfig_async(hfa384x_t *hw,
- u16 rid, ctlx_usercb_t usercb, void *usercb_data)
-{
- return hfa384x_dorrid_async(hw, rid, NULL, 0,
- hfa384x_cb_rrid, usercb, usercb_data);
-}
-
-/*----------------------------------------------------------------
* hfa384x_drvr_setconfig_async
*
* Performs the sequence necessary to write a config/info item.
diff --git a/drivers/staging/wlan-ng/p80211netdev.c b/drivers/staging/wlan-ng/p80211netdev.c
index a9c1e0b..88255ce 100644
--- a/drivers/staging/wlan-ng/p80211netdev.c
+++ b/drivers/staging/wlan-ng/p80211netdev.c
@@ -328,7 +328,7 @@
p80211_wep.data = NULL;
- if (skb == NULL)
+ if (!skb)
return NETDEV_TX_OK;
if (wlandev->state != WLAN_DEVICE_OPEN) {
@@ -388,7 +388,7 @@
goto failed;
}
}
- if (wlandev->txframe == NULL) {
+ if (!wlandev->txframe) {
result = 1;
goto failed;
}
@@ -736,7 +736,7 @@
/* Allocate and initialize the wiphy struct */
wiphy = wlan_create_wiphy(physdev, wlandev);
- if (wiphy == NULL) {
+ if (!wiphy) {
dev_err(physdev, "Failed to alloc wiphy.\n");
return 1;
}
@@ -744,7 +744,7 @@
/* Allocate and initialize the struct device */
netdev = alloc_netdev(sizeof(struct wireless_dev), "wlan%d",
NET_NAME_UNKNOWN, ether_setup);
- if (netdev == NULL) {
+ if (!netdev) {
dev_err(physdev, "Failed to alloc netdev.\n");
wlan_free_wiphy(wiphy);
result = 1;
diff --git a/drivers/staging/wlan-ng/prism2usb.c b/drivers/staging/wlan-ng/prism2usb.c
index 194f67e..ae6a53c 100644
--- a/drivers/staging/wlan-ng/prism2usb.c
+++ b/drivers/staging/wlan-ng/prism2usb.c
@@ -177,7 +177,8 @@
tasklet_kill(&hw->completion_bh);
tasklet_kill(&hw->reaper_bh);
- flush_scheduled_work();
+ cancel_work_sync(&hw->link_bh);
+ cancel_work_sync(&hw->commsqual_bh);
/* Now we complete any outstanding commands
* and tell everyone who is waiting for their
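The flush_scheduled_work() -> cancel_work_sync() swap above is a common cleanup: flushing the shared system workqueue waits on every pending item in the system (and can deadlock if one of them needs a lock the caller holds), while cancelling the driver's own work items is precise. A hedged sketch of the teardown pattern, reusing the hw->link_bh/hw->commsqual_bh names from the hunk but otherwise illustrative:

	#include <linux/workqueue.h>

	/* Sketch only: cancel just the work this driver queued. The
	 * hfa384x_t layout is assumed from the hunk above. */
	static void example_teardown(hfa384x_t *hw)
	{
		/* Waits for a running handler to finish and prevents
		 * the item from being requeued afterwards. */
		cancel_work_sync(&hw->link_bh);
		cancel_work_sync(&hw->commsqual_bh);
	}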
diff --git a/drivers/target/target_core_configfs.c b/drivers/target/target_core_configfs.c
index 3327c49..713c63d9 100644
--- a/drivers/target/target_core_configfs.c
+++ b/drivers/target/target_core_configfs.c
@@ -898,7 +898,7 @@
da->unmap_zeroes_data = flag;
pr_debug("dev[%p]: SE Device Thin Provisioning LBPRZ bit: %d\n",
da->da_dev, flag);
- return 0;
+ return count;
}
/*
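The return-value fix above matters because configfs ->store() callbacks follow the sysfs contract: return the number of bytes consumed on success, or a negative errno. Returning 0 makes the VFS treat the write as having made no progress, so userspace sees a stalled or repeating write(2). A hypothetical store callback showing the shape (names are illustrative, not the target-core attribute):

	#include <linux/configfs.h>
	#include <linux/string.h>

	static ssize_t example_flag_store(struct config_item *item,
					  const char *page, size_t count)
	{
		bool flag;
		int ret;

		ret = strtobool(page, &flag);
		if (ret < 0)
			return ret;	/* negative errno on bad input */

		/* ... apply "flag" to the item's backing state ... */
		return count;		/* bytes consumed -- never 0 */
	}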
diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
index cacd97a..da457e2 100644
--- a/drivers/target/target_core_device.c
+++ b/drivers/target/target_core_device.c
@@ -828,6 +828,50 @@
return dev;
}
+/*
+ * Check if the underlying struct block_device request_queue supports
+ * the QUEUE_FLAG_DISCARD bit for UNMAP/WRITE_SAME in SCSI + TRIM
+ * in ATA and we need to set TPE=1
+ */
+bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
+ struct request_queue *q, int block_size)
+{
+ if (!blk_queue_discard(q))
+ return false;
+
+ attrib->max_unmap_lba_count = (q->limits.max_discard_sectors << 9) /
+ block_size;
+ /*
+ * Currently hardcoded to 1 in Linux/SCSI code.
+ */
+ attrib->max_unmap_block_desc_count = 1;
+ attrib->unmap_granularity = q->limits.discard_granularity / block_size;
+ attrib->unmap_granularity_alignment = q->limits.discard_alignment /
+ block_size;
+ attrib->unmap_zeroes_data = q->limits.discard_zeroes_data;
+ return true;
+}
+EXPORT_SYMBOL(target_configure_unmap_from_queue);
+
+/*
+ * Convert from blocksize advertised to the initiator to the 512 byte
+ * units unconditionally used by the Linux block layer.
+ */
+sector_t target_to_linux_sector(struct se_device *dev, sector_t lb)
+{
+ switch (dev->dev_attrib.block_size) {
+ case 4096:
+ return lb << 3;
+ case 2048:
+ return lb << 2;
+ case 1024:
+ return lb << 1;
+ default:
+ return lb;
+ }
+}
+EXPORT_SYMBOL(target_to_linux_sector);
+
int target_configure_device(struct se_device *dev)
{
struct se_hba *hba = dev->se_hba;
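target_to_linux_sector() above exists because blkdev_issue_discard() and the rest of the block layer count in 512-byte sectors, while the target advertises 512/1024/2048/4096-byte blocks to the initiator; the per-size shifts are just multiplication by block_size / 512. A standalone sketch of the same arithmetic (userspace, for illustration only):

	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t sector_t;

	/* lb counted in block_size-byte units -> 512-byte Linux sectors.
	 * 4096 => lb << 3, 2048 => lb << 2, 1024 => lb << 1, 512 => lb. */
	static sector_t to_linux_sector(sector_t lb, unsigned int block_size)
	{
		return lb * (block_size / 512);
	}

	int main(void)
	{
		/* LBA 10 on a 4096-byte-block device = 512-byte sector 80. */
		printf("%llu\n", (unsigned long long)to_linux_sector(10, 4096));
		return 0;
	}

The file.c and iblock.c hunks below feed both the LBA and the length through this helper before calling blkdev_issue_discard().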
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index e319570..75f0f08 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -160,25 +160,11 @@
" block_device blocks: %llu logical_block_size: %d\n",
dev_size, div_u64(dev_size, fd_dev->fd_block_size),
fd_dev->fd_block_size);
- /*
- * Check if the underlying struct block_device request_queue supports
- * the QUEUE_FLAG_DISCARD bit for UNMAP/WRITE_SAME in SCSI + TRIM
- * in ATA and we need to set TPE=1
- */
- if (blk_queue_discard(q)) {
- dev->dev_attrib.max_unmap_lba_count =
- q->limits.max_discard_sectors;
- /*
- * Currently hardcoded to 1 in Linux/SCSI code..
- */
- dev->dev_attrib.max_unmap_block_desc_count = 1;
- dev->dev_attrib.unmap_granularity =
- q->limits.discard_granularity >> 9;
- dev->dev_attrib.unmap_granularity_alignment =
- q->limits.discard_alignment;
+
+ if (target_configure_unmap_from_queue(&dev->dev_attrib, q,
+ fd_dev->fd_block_size))
pr_debug("IFILE: BLOCK Discard support available,"
- " disabled by default\n");
- }
+ " disabled by default\n");
/*
* Enable write same emulation for IBLOCK and use 0xFFFF as
* the smaller WRITE_SAME(10) only has a two-byte block count.
@@ -490,9 +476,12 @@
if (S_ISBLK(inode->i_mode)) {
/* The backend is block device, use discard */
struct block_device *bdev = inode->i_bdev;
+ struct se_device *dev = cmd->se_dev;
- ret = blkdev_issue_discard(bdev, lba,
- nolb, GFP_KERNEL, 0);
+ ret = blkdev_issue_discard(bdev,
+ target_to_linux_sector(dev, lba),
+ target_to_linux_sector(dev, nolb),
+ GFP_KERNEL, 0);
if (ret < 0) {
pr_warn("FILEIO: blkdev_issue_discard() failed: %d\n",
ret);
diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 5a2899f..abe4eb9 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -121,29 +121,11 @@
dev->dev_attrib.hw_max_sectors = queue_max_hw_sectors(q);
dev->dev_attrib.hw_queue_depth = q->nr_requests;
- /*
- * Check if the underlying struct block_device request_queue supports
- * the QUEUE_FLAG_DISCARD bit for UNMAP/WRITE_SAME in SCSI + TRIM
- * in ATA and we need to set TPE=1
- */
- if (blk_queue_discard(q)) {
- dev->dev_attrib.max_unmap_lba_count =
- q->limits.max_discard_sectors;
-
- /*
- * Currently hardcoded to 1 in Linux/SCSI code..
- */
- dev->dev_attrib.max_unmap_block_desc_count = 1;
- dev->dev_attrib.unmap_granularity =
- q->limits.discard_granularity >> 9;
- dev->dev_attrib.unmap_granularity_alignment =
- q->limits.discard_alignment;
- dev->dev_attrib.unmap_zeroes_data =
- q->limits.discard_zeroes_data;
-
+ if (target_configure_unmap_from_queue(&dev->dev_attrib, q,
+ dev->dev_attrib.hw_block_size))
pr_debug("IBLOCK: BLOCK Discard support available,"
- " disabled by default\n");
- }
+ " disabled by default\n");
+
/*
* Enable write same emulation for IBLOCK and use 0xFFFF as
* the smaller WRITE_SAME(10) only has a two-byte block count.
@@ -415,9 +397,13 @@
iblock_execute_unmap(struct se_cmd *cmd, sector_t lba, sector_t nolb)
{
struct block_device *bdev = IBLOCK_DEV(cmd->se_dev)->ibd_bd;
+ struct se_device *dev = cmd->se_dev;
int ret;
- ret = blkdev_issue_discard(bdev, lba, nolb, GFP_KERNEL, 0);
+ ret = blkdev_issue_discard(bdev,
+ target_to_linux_sector(dev, lba),
+ target_to_linux_sector(dev, nolb),
+ GFP_KERNEL, 0);
if (ret < 0) {
pr_err("blkdev_issue_discard() failed: %d\n", ret);
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
@@ -433,8 +419,10 @@
struct scatterlist *sg;
struct bio *bio;
struct bio_list list;
- sector_t block_lba = cmd->t_task_lba;
- sector_t sectors = sbc_get_write_same_sectors(cmd);
+ struct se_device *dev = cmd->se_dev;
+ sector_t block_lba = target_to_linux_sector(dev, cmd->t_task_lba);
+ sector_t sectors = target_to_linux_sector(dev,
+ sbc_get_write_same_sectors(cmd));
if (cmd->prot_op) {
pr_err("WRITE_SAME: Protection information with IBLOCK"
@@ -648,12 +636,12 @@
enum dma_data_direction data_direction)
{
struct se_device *dev = cmd->se_dev;
+ sector_t block_lba = target_to_linux_sector(dev, cmd->t_task_lba);
struct iblock_req *ibr;
struct bio *bio, *bio_start;
struct bio_list list;
struct scatterlist *sg;
u32 sg_num = sgl_nents;
- sector_t block_lba;
unsigned bio_cnt;
int rw = 0;
int i;
@@ -679,24 +667,6 @@
rw = READ;
}
- /*
- * Convert the blocksize advertised to the initiator to the 512 byte
- * units unconditionally used by the Linux block layer.
- */
- if (dev->dev_attrib.block_size == 4096)
- block_lba = (cmd->t_task_lba << 3);
- else if (dev->dev_attrib.block_size == 2048)
- block_lba = (cmd->t_task_lba << 2);
- else if (dev->dev_attrib.block_size == 1024)
- block_lba = (cmd->t_task_lba << 1);
- else if (dev->dev_attrib.block_size == 512)
- block_lba = cmd->t_task_lba;
- else {
- pr_err("Unsupported SCSI -> BLOCK LBA conversion:"
- " %u\n", dev->dev_attrib.block_size);
- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
- }
-
ibr = kzalloc(sizeof(struct iblock_req), GFP_KERNEL);
if (!ibr)
goto fail;
diff --git a/drivers/target/target_core_internal.h b/drivers/target/target_core_internal.h
index dae0750c..db4412f 100644
--- a/drivers/target/target_core_internal.h
+++ b/drivers/target/target_core_internal.h
@@ -141,7 +141,6 @@
int transport_dump_vpd_assoc(struct t10_vpd *, unsigned char *, int);
int transport_dump_vpd_ident_type(struct t10_vpd *, unsigned char *, int);
int transport_dump_vpd_ident(struct t10_vpd *, unsigned char *, int);
-bool target_stop_cmd(struct se_cmd *cmd, unsigned long *flags);
void transport_clear_lun_ref(struct se_lun *);
void transport_send_task_abort(struct se_cmd *);
sense_reason_t target_cmd_size_check(struct se_cmd *cmd, unsigned int size);
diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
index fcdcb11..82a663b 100644
--- a/drivers/target/target_core_tmr.c
+++ b/drivers/target/target_core_tmr.c
@@ -68,23 +68,25 @@
if (dev) {
spin_lock_irqsave(&dev->se_tmr_lock, flags);
- list_del(&tmr->tmr_list);
+ list_del_init(&tmr->tmr_list);
spin_unlock_irqrestore(&dev->se_tmr_lock, flags);
}
kfree(tmr);
}
-static void core_tmr_handle_tas_abort(
- struct se_node_acl *tmr_nacl,
- struct se_cmd *cmd,
- int tas)
+static void core_tmr_handle_tas_abort(struct se_cmd *cmd, int tas)
{
- bool remove = true;
+ unsigned long flags;
+ bool remove = true, send_tas;
/*
* TASK ABORTED status (TAS) bit support
*/
- if ((tmr_nacl && (tmr_nacl != cmd->se_sess->se_node_acl)) && tas) {
+ spin_lock_irqsave(&cmd->t_state_lock, flags);
+ send_tas = (cmd->transport_state & CMD_T_TAS);
+ spin_unlock_irqrestore(&cmd->t_state_lock, flags);
+
+ if (send_tas) {
remove = false;
transport_send_task_abort(cmd);
}
@@ -107,6 +109,46 @@
return 1;
}
+static bool __target_check_io_state(struct se_cmd *se_cmd,
+ struct se_session *tmr_sess, int tas)
+{
+ struct se_session *sess = se_cmd->se_sess;
+
+ assert_spin_locked(&sess->sess_cmd_lock);
+ WARN_ON_ONCE(!irqs_disabled());
+ /*
+ * If command already reached CMD_T_COMPLETE state within
+ * target_complete_cmd() or CMD_T_FABRIC_STOP due to shutdown,
+ * this se_cmd has been passed to fabric driver and will
+ * not be aborted.
+ *
+ * Otherwise, obtain a local se_cmd->cmd_kref now for TMR
+ * ABORT_TASK + LUN_RESET CMD_T_ABORTED processing, provided
+ * se_cmd->cmd_kref is still non-zero.
+ */
+ spin_lock(&se_cmd->t_state_lock);
+ if (se_cmd->transport_state & (CMD_T_COMPLETE | CMD_T_FABRIC_STOP)) {
+ pr_debug("Attempted to abort io tag: %llu already complete or"
+ " fabric stop, skipping\n", se_cmd->tag);
+ spin_unlock(&se_cmd->t_state_lock);
+ return false;
+ }
+ if (sess->sess_tearing_down || se_cmd->cmd_wait_set) {
+ pr_debug("Attempted to abort io tag: %llu already shutdown,"
+ " skipping\n", se_cmd->tag);
+ spin_unlock(&se_cmd->t_state_lock);
+ return false;
+ }
+ se_cmd->transport_state |= CMD_T_ABORTED;
+
+ if ((tmr_sess != se_cmd->se_sess) && tas)
+ se_cmd->transport_state |= CMD_T_TAS;
+
+ spin_unlock(&se_cmd->t_state_lock);
+
+ return kref_get_unless_zero(&se_cmd->cmd_kref);
+}
+
void core_tmr_abort_task(
struct se_device *dev,
struct se_tmr_req *tmr,
@@ -130,34 +172,22 @@
if (tmr->ref_task_tag != ref_tag)
continue;
- if (!kref_get_unless_zero(&se_cmd->cmd_kref))
- continue;
-
printk("ABORT_TASK: Found referenced %s task_tag: %llu\n",
se_cmd->se_tfo->get_fabric_name(), ref_tag);
- spin_lock(&se_cmd->t_state_lock);
- if (se_cmd->transport_state & CMD_T_COMPLETE) {
- printk("ABORT_TASK: ref_tag: %llu already complete,"
- " skipping\n", ref_tag);
- spin_unlock(&se_cmd->t_state_lock);
+ if (!__target_check_io_state(se_cmd, se_sess, 0)) {
spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
-
target_put_sess_cmd(se_cmd);
-
goto out;
}
- se_cmd->transport_state |= CMD_T_ABORTED;
- spin_unlock(&se_cmd->t_state_lock);
-
list_del_init(&se_cmd->se_cmd_list);
spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
cancel_work_sync(&se_cmd->work);
transport_wait_for_tasks(se_cmd);
- target_put_sess_cmd(se_cmd);
transport_cmd_finish_abort(se_cmd, true);
+ target_put_sess_cmd(se_cmd);
printk("ABORT_TASK: Sending TMR_FUNCTION_COMPLETE for"
" ref_tag: %llu\n", ref_tag);
@@ -178,9 +208,11 @@
struct list_head *preempt_and_abort_list)
{
LIST_HEAD(drain_tmr_list);
+ struct se_session *sess;
struct se_tmr_req *tmr_p, *tmr_pp;
struct se_cmd *cmd;
unsigned long flags;
+ bool rc;
/*
* Release all pending and outgoing TMRs aside from the received
* LUN_RESET tmr..
@@ -206,17 +238,39 @@
if (target_check_cdb_and_preempt(preempt_and_abort_list, cmd))
continue;
+ sess = cmd->se_sess;
+ if (WARN_ON_ONCE(!sess))
+ continue;
+
+ spin_lock(&sess->sess_cmd_lock);
spin_lock(&cmd->t_state_lock);
- if (!(cmd->transport_state & CMD_T_ACTIVE)) {
+ if (!(cmd->transport_state & CMD_T_ACTIVE) ||
+ (cmd->transport_state & CMD_T_FABRIC_STOP)) {
spin_unlock(&cmd->t_state_lock);
+ spin_unlock(&sess->sess_cmd_lock);
continue;
}
if (cmd->t_state == TRANSPORT_ISTATE_PROCESSING) {
spin_unlock(&cmd->t_state_lock);
+ spin_unlock(&sess->sess_cmd_lock);
continue;
}
+ if (sess->sess_tearing_down || cmd->cmd_wait_set) {
+ spin_unlock(&cmd->t_state_lock);
+ spin_unlock(&sess->sess_cmd_lock);
+ continue;
+ }
+ cmd->transport_state |= CMD_T_ABORTED;
spin_unlock(&cmd->t_state_lock);
+ rc = kref_get_unless_zero(&cmd->cmd_kref);
+ if (!rc) {
+ printk("LUN_RESET TMR: non-zero kref_get_unless_zero\n");
+ spin_unlock(&sess->sess_cmd_lock);
+ continue;
+ }
+ spin_unlock(&sess->sess_cmd_lock);
+
list_move_tail(&tmr_p->tmr_list, &drain_tmr_list);
}
spin_unlock_irqrestore(&dev->se_tmr_lock, flags);
@@ -230,20 +284,26 @@
(preempt_and_abort_list) ? "Preempt" : "", tmr_p,
tmr_p->function, tmr_p->response, cmd->t_state);
+ cancel_work_sync(&cmd->work);
+ transport_wait_for_tasks(cmd);
+
transport_cmd_finish_abort(cmd, 1);
+ target_put_sess_cmd(cmd);
}
}
static void core_tmr_drain_state_list(
struct se_device *dev,
struct se_cmd *prout_cmd,
- struct se_node_acl *tmr_nacl,
+ struct se_session *tmr_sess,
int tas,
struct list_head *preempt_and_abort_list)
{
LIST_HEAD(drain_task_list);
+ struct se_session *sess;
struct se_cmd *cmd, *next;
unsigned long flags;
+ int rc;
/*
* Complete outstanding commands with TASK_ABORTED SAM status.
@@ -282,6 +342,16 @@
if (prout_cmd == cmd)
continue;
+ sess = cmd->se_sess;
+ if (WARN_ON_ONCE(!sess))
+ continue;
+
+ spin_lock(&sess->sess_cmd_lock);
+ rc = __target_check_io_state(cmd, tmr_sess, tas);
+ spin_unlock(&sess->sess_cmd_lock);
+ if (!rc)
+ continue;
+
list_move_tail(&cmd->state_list, &drain_task_list);
cmd->state_active = false;
}
@@ -289,7 +359,7 @@
while (!list_empty(&drain_task_list)) {
cmd = list_entry(drain_task_list.next, struct se_cmd, state_list);
- list_del(&cmd->state_list);
+ list_del_init(&cmd->state_list);
pr_debug("LUN_RESET: %s cmd: %p"
" ITT/CmdSN: 0x%08llx/0x%08x, i_state: %d, t_state: %d"
@@ -313,16 +383,11 @@
* loop above, but we do it down here given that
* cancel_work_sync may block.
*/
- if (cmd->t_state == TRANSPORT_COMPLETE)
- cancel_work_sync(&cmd->work);
+ cancel_work_sync(&cmd->work);
+ transport_wait_for_tasks(cmd);
- spin_lock_irqsave(&cmd->t_state_lock, flags);
- target_stop_cmd(cmd, &flags);
-
- cmd->transport_state |= CMD_T_ABORTED;
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
-
- core_tmr_handle_tas_abort(tmr_nacl, cmd, tas);
+ core_tmr_handle_tas_abort(cmd, tas);
+ target_put_sess_cmd(cmd);
}
}
@@ -334,6 +399,7 @@
{
struct se_node_acl *tmr_nacl = NULL;
struct se_portal_group *tmr_tpg = NULL;
+ struct se_session *tmr_sess = NULL;
int tas;
/*
* TASK_ABORTED status bit, this is configurable via ConfigFS
@@ -352,8 +418,9 @@
* or struct se_device passthrough..
*/
if (tmr && tmr->task_cmd && tmr->task_cmd->se_sess) {
- tmr_nacl = tmr->task_cmd->se_sess->se_node_acl;
- tmr_tpg = tmr->task_cmd->se_sess->se_tpg;
+ tmr_sess = tmr->task_cmd->se_sess;
+ tmr_nacl = tmr_sess->se_node_acl;
+ tmr_tpg = tmr_sess->se_tpg;
if (tmr_nacl && tmr_tpg) {
pr_debug("LUN_RESET: TMR caller fabric: %s"
" initiator port %s\n",
@@ -366,7 +433,7 @@
dev->transport->name, tas);
core_tmr_drain_tmr_list(dev, tmr, preempt_and_abort_list);
- core_tmr_drain_state_list(dev, prout_cmd, tmr_nacl, tas,
+ core_tmr_drain_state_list(dev, prout_cmd, tmr_sess, tas,
preempt_and_abort_list);
/*
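__target_check_io_state() above is an instance of a claim-then-reference pattern: mark the command aborted under its state lock, then take a reference with kref_get_unless_zero() so a command already dropping its last reference is skipped rather than resurrected. A generic sketch under assumed names (example_obj, EXAMPLE_DYING and friends are illustrative, not target-core symbols):

	#include <linux/kref.h>
	#include <linux/spinlock.h>

	struct example_obj {
		spinlock_t lock;
		unsigned int state;
		struct kref kref;
	};
	#define EXAMPLE_DYING	(1U << 0)
	#define EXAMPLE_CLAIMED	(1U << 1)

	static bool example_claim(struct example_obj *obj)
	{
		spin_lock(&obj->lock);
		if (obj->state & EXAMPLE_DYING) {
			spin_unlock(&obj->lock);
			return false;		/* too late, skip it */
		}
		obj->state |= EXAMPLE_CLAIMED;
		spin_unlock(&obj->lock);

		/* Fails once the refcount has already hit zero, so an
		 * object in its release path is never brought back. */
		return kref_get_unless_zero(&obj->kref);
	}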
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 9f3608e..867bc6d 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -534,9 +534,6 @@
}
EXPORT_SYMBOL(transport_deregister_session);
-/*
- * Called with cmd->t_state_lock held.
- */
static void target_remove_from_state_list(struct se_cmd *cmd)
{
struct se_device *dev = cmd->se_dev;
@@ -561,10 +558,6 @@
{
unsigned long flags;
- spin_lock_irqsave(&cmd->t_state_lock, flags);
- if (write_pending)
- cmd->t_state = TRANSPORT_WRITE_PENDING;
-
if (remove_from_lists) {
target_remove_from_state_list(cmd);
@@ -574,6 +567,10 @@
cmd->se_lun = NULL;
}
+ spin_lock_irqsave(&cmd->t_state_lock, flags);
+ if (write_pending)
+ cmd->t_state = TRANSPORT_WRITE_PENDING;
+
/*
* Determine if frontend context caller is requesting the stopping of
* this command for frontend exceptions.
@@ -627,6 +624,8 @@
void transport_cmd_finish_abort(struct se_cmd *cmd, int remove)
{
+ bool ack_kref = (cmd->se_cmd_flags & SCF_ACK_KREF);
+
if (cmd->se_cmd_flags & SCF_SE_LUN_CMD)
transport_lun_remove_cmd(cmd);
/*
@@ -638,7 +637,7 @@
if (transport_cmd_check_stop_to_fabric(cmd))
return;
- if (remove)
+ if (remove && ack_kref)
transport_put_cmd(cmd);
}
@@ -694,19 +693,10 @@
}
/*
- * See if we are waiting to complete for an exception condition.
- */
- if (cmd->transport_state & CMD_T_REQUEST_STOP) {
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- complete(&cmd->task_stop_comp);
- return;
- }
-
- /*
* Check for case where an explicit ABORT_TASK has been received
* and transport_wait_for_tasks() will be waiting for completion..
*/
- if (cmd->transport_state & CMD_T_ABORTED &&
+ if (cmd->transport_state & CMD_T_ABORTED ||
cmd->transport_state & CMD_T_STOP) {
spin_unlock_irqrestore(&cmd->t_state_lock, flags);
complete_all(&cmd->t_transport_stop_comp);
@@ -721,10 +711,10 @@
cmd->transport_state |= (CMD_T_COMPLETE | CMD_T_ACTIVE);
spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- if (cmd->cpuid == -1)
- queue_work(target_completion_wq, &cmd->work);
- else
+ if (cmd->se_cmd_flags & SCF_USE_CPUID)
queue_work_on(cmd->cpuid, target_completion_wq, &cmd->work);
+ else
+ queue_work(target_completion_wq, &cmd->work);
}
EXPORT_SYMBOL(target_complete_cmd);
@@ -1203,7 +1193,6 @@
INIT_LIST_HEAD(&cmd->state_list);
init_completion(&cmd->t_transport_stop_comp);
init_completion(&cmd->cmd_wait_comp);
- init_completion(&cmd->task_stop_comp);
spin_lock_init(&cmd->t_state_lock);
kref_init(&cmd->cmd_kref);
cmd->transport_state = CMD_T_DEV_ACTIVE;
@@ -1437,6 +1426,12 @@
*/
transport_init_se_cmd(se_cmd, se_tpg->se_tpg_tfo, se_sess,
data_length, data_dir, task_attr, sense);
+
+ if (flags & TARGET_SCF_USE_CPUID)
+ se_cmd->se_cmd_flags |= SCF_USE_CPUID;
+ else
+ se_cmd->cpuid = WORK_CPU_UNBOUND;
+
if (flags & TARGET_SCF_UNKNOWN_SIZE)
se_cmd->unknown_data_length = 1;
/*
@@ -1635,33 +1630,6 @@
EXPORT_SYMBOL(target_submit_tmr);
/*
- * If the cmd is active, request it to be stopped and sleep until it
- * has completed.
- */
-bool target_stop_cmd(struct se_cmd *cmd, unsigned long *flags)
- __releases(&cmd->t_state_lock)
- __acquires(&cmd->t_state_lock)
-{
- bool was_active = false;
-
- if (cmd->transport_state & CMD_T_BUSY) {
- cmd->transport_state |= CMD_T_REQUEST_STOP;
- spin_unlock_irqrestore(&cmd->t_state_lock, *flags);
-
- pr_debug("cmd %p waiting to complete\n", cmd);
- wait_for_completion(&cmd->task_stop_comp);
- pr_debug("cmd %p stopped successfully\n", cmd);
-
- spin_lock_irqsave(&cmd->t_state_lock, *flags);
- cmd->transport_state &= ~CMD_T_REQUEST_STOP;
- cmd->transport_state &= ~CMD_T_BUSY;
- was_active = true;
- }
-
- return was_active;
-}
-
-/*
* Handle SAM-esque emulation for generic transport request failures.
*/
void transport_generic_request_failure(struct se_cmd *cmd,
@@ -1859,19 +1827,21 @@
return true;
}
+static int __transport_check_aborted_status(struct se_cmd *, int);
+
void target_execute_cmd(struct se_cmd *cmd)
{
/*
- * If the received CDB has aleady been aborted stop processing it here.
- */
- if (transport_check_aborted_status(cmd, 1))
- return;
-
- /*
* Determine if frontend context caller is requesting the stopping of
* this command for frontend exceptions.
+ *
+ * If the received CDB has already been aborted, stop processing it here.
*/
spin_lock_irq(&cmd->t_state_lock);
+ if (__transport_check_aborted_status(cmd, 1)) {
+ spin_unlock_irq(&cmd->t_state_lock);
+ return;
+ }
if (cmd->transport_state & CMD_T_STOP) {
pr_debug("%s:%d CMD_T_STOP for ITT: 0x%08llx\n",
__func__, __LINE__, cmd->tag);
@@ -2222,28 +2192,6 @@
}
/**
- * transport_release_cmd - free a command
- * @cmd: command to free
- *
- * This routine unconditionally frees a command, and reference counting
- * or list removal must be done in the caller.
- */
-static int transport_release_cmd(struct se_cmd *cmd)
-{
- BUG_ON(!cmd->se_tfo);
-
- if (cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)
- core_tmr_release_req(cmd->se_tmr_req);
- if (cmd->t_task_cdb != cmd->__t_task_cdb)
- kfree(cmd->t_task_cdb);
- /*
- * If this cmd has been setup with target_get_sess_cmd(), drop
- * the kref and call ->release_cmd() in kref callback.
- */
- return target_put_sess_cmd(cmd);
-}
-
-/**
* transport_put_cmd - release a reference to a command
* @cmd: command to release
*
@@ -2251,8 +2199,12 @@
*/
static int transport_put_cmd(struct se_cmd *cmd)
{
- transport_free_pages(cmd);
- return transport_release_cmd(cmd);
+ BUG_ON(!cmd->se_tfo);
+ /*
+ * If this cmd has been setup with target_get_sess_cmd(), drop
+ * the kref and call ->release_cmd() in kref callback.
+ */
+ return target_put_sess_cmd(cmd);
}
void *transport_kmap_data_sg(struct se_cmd *cmd)
@@ -2450,34 +2402,58 @@
}
}
-int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks)
+static bool
+__transport_wait_for_tasks(struct se_cmd *, bool, bool *, bool *,
+ unsigned long *flags);
+
+static void target_wait_free_cmd(struct se_cmd *cmd, bool *aborted, bool *tas)
{
unsigned long flags;
+
+ spin_lock_irqsave(&cmd->t_state_lock, flags);
+ __transport_wait_for_tasks(cmd, true, aborted, tas, &flags);
+ spin_unlock_irqrestore(&cmd->t_state_lock, flags);
+}
+
+int transport_generic_free_cmd(struct se_cmd *cmd, int wait_for_tasks)
+{
int ret = 0;
+ bool aborted = false, tas = false;
if (!(cmd->se_cmd_flags & SCF_SE_LUN_CMD)) {
if (wait_for_tasks && (cmd->se_cmd_flags & SCF_SCSI_TMR_CDB))
- transport_wait_for_tasks(cmd);
+ target_wait_free_cmd(cmd, &aborted, &tas);
- ret = transport_release_cmd(cmd);
+ if (!aborted || tas)
+ ret = transport_put_cmd(cmd);
} else {
if (wait_for_tasks)
- transport_wait_for_tasks(cmd);
+ target_wait_free_cmd(cmd, &aborted, &tas);
/*
* Handle WRITE failure case where transport_generic_new_cmd()
* has already added se_cmd to state_list, but fabric has
* failed command before I/O submission.
*/
- if (cmd->state_active) {
- spin_lock_irqsave(&cmd->t_state_lock, flags);
+ if (cmd->state_active)
target_remove_from_state_list(cmd);
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- }
if (cmd->se_lun)
transport_lun_remove_cmd(cmd);
- ret = transport_put_cmd(cmd);
+ if (!aborted || tas)
+ ret = transport_put_cmd(cmd);
+ }
+ /*
+ * If the task has been internally aborted due to TMR ABORT_TASK
+ * or LUN_RESET, target_core_tmr.c is responsible for performing
+ * the remaining calls to target_put_sess_cmd(), and not the
+ * callers of this function.
+ */
+ if (aborted) {
+ pr_debug("Detected CMD_T_ABORTED for ITT: %llu\n", cmd->tag);
+ wait_for_completion(&cmd->cmd_wait_comp);
+ cmd->se_tfo->release_cmd(cmd);
+ ret = 1;
}
return ret;
}
@@ -2517,26 +2493,46 @@
}
EXPORT_SYMBOL(target_get_sess_cmd);
+static void target_free_cmd_mem(struct se_cmd *cmd)
+{
+ transport_free_pages(cmd);
+
+ if (cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)
+ core_tmr_release_req(cmd->se_tmr_req);
+ if (cmd->t_task_cdb != cmd->__t_task_cdb)
+ kfree(cmd->t_task_cdb);
+}
+
static void target_release_cmd_kref(struct kref *kref)
{
struct se_cmd *se_cmd = container_of(kref, struct se_cmd, cmd_kref);
struct se_session *se_sess = se_cmd->se_sess;
unsigned long flags;
+ bool fabric_stop;
spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
if (list_empty(&se_cmd->se_cmd_list)) {
spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
+ target_free_cmd_mem(se_cmd);
se_cmd->se_tfo->release_cmd(se_cmd);
return;
}
- if (se_sess->sess_tearing_down && se_cmd->cmd_wait_set) {
+
+ spin_lock(&se_cmd->t_state_lock);
+ fabric_stop = (se_cmd->transport_state & CMD_T_FABRIC_STOP);
+ spin_unlock(&se_cmd->t_state_lock);
+
+ if (se_cmd->cmd_wait_set || fabric_stop) {
+ list_del_init(&se_cmd->se_cmd_list);
spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
+ target_free_cmd_mem(se_cmd);
complete(&se_cmd->cmd_wait_comp);
return;
}
- list_del(&se_cmd->se_cmd_list);
+ list_del_init(&se_cmd->se_cmd_list);
spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
+ target_free_cmd_mem(se_cmd);
se_cmd->se_tfo->release_cmd(se_cmd);
}
@@ -2548,6 +2544,7 @@
struct se_session *se_sess = se_cmd->se_sess;
if (!se_sess) {
+ target_free_cmd_mem(se_cmd);
se_cmd->se_tfo->release_cmd(se_cmd);
return 1;
}
@@ -2564,6 +2561,7 @@
{
struct se_cmd *se_cmd;
unsigned long flags;
+ int rc;
spin_lock_irqsave(&se_sess->sess_cmd_lock, flags);
if (se_sess->sess_tearing_down) {
@@ -2573,8 +2571,15 @@
se_sess->sess_tearing_down = 1;
list_splice_init(&se_sess->sess_cmd_list, &se_sess->sess_wait_list);
- list_for_each_entry(se_cmd, &se_sess->sess_wait_list, se_cmd_list)
- se_cmd->cmd_wait_set = 1;
+ list_for_each_entry(se_cmd, &se_sess->sess_wait_list, se_cmd_list) {
+ rc = kref_get_unless_zero(&se_cmd->cmd_kref);
+ if (rc) {
+ se_cmd->cmd_wait_set = 1;
+ spin_lock(&se_cmd->t_state_lock);
+ se_cmd->transport_state |= CMD_T_FABRIC_STOP;
+ spin_unlock(&se_cmd->t_state_lock);
+ }
+ }
spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
}
@@ -2587,15 +2592,25 @@
{
struct se_cmd *se_cmd, *tmp_cmd;
unsigned long flags;
+ bool tas;
list_for_each_entry_safe(se_cmd, tmp_cmd,
&se_sess->sess_wait_list, se_cmd_list) {
- list_del(&se_cmd->se_cmd_list);
+ list_del_init(&se_cmd->se_cmd_list);
pr_debug("Waiting for se_cmd: %p t_state: %d, fabric state:"
" %d\n", se_cmd, se_cmd->t_state,
se_cmd->se_tfo->get_cmd_state(se_cmd));
+ spin_lock_irqsave(&se_cmd->t_state_lock, flags);
+ tas = (se_cmd->transport_state & CMD_T_TAS);
+ spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
+
+ if (!target_put_sess_cmd(se_cmd)) {
+ if (tas)
+ target_put_sess_cmd(se_cmd);
+ }
+
wait_for_completion(&se_cmd->cmd_wait_comp);
pr_debug("After cmd_wait_comp: se_cmd: %p t_state: %d"
" fabric state: %d\n", se_cmd, se_cmd->t_state,
@@ -2617,6 +2632,58 @@
wait_for_completion(&lun->lun_ref_comp);
}
+static bool
+__transport_wait_for_tasks(struct se_cmd *cmd, bool fabric_stop,
+ bool *aborted, bool *tas, unsigned long *flags)
+ __releases(&cmd->t_state_lock)
+ __acquires(&cmd->t_state_lock)
+{
+
+ assert_spin_locked(&cmd->t_state_lock);
+ WARN_ON_ONCE(!irqs_disabled());
+
+ if (fabric_stop)
+ cmd->transport_state |= CMD_T_FABRIC_STOP;
+
+ if (cmd->transport_state & CMD_T_ABORTED)
+ *aborted = true;
+
+ if (cmd->transport_state & CMD_T_TAS)
+ *tas = true;
+
+ if (!(cmd->se_cmd_flags & SCF_SE_LUN_CMD) &&
+ !(cmd->se_cmd_flags & SCF_SCSI_TMR_CDB))
+ return false;
+
+ if (!(cmd->se_cmd_flags & SCF_SUPPORTED_SAM_OPCODE) &&
+ !(cmd->se_cmd_flags & SCF_SCSI_TMR_CDB))
+ return false;
+
+ if (!(cmd->transport_state & CMD_T_ACTIVE))
+ return false;
+
+ if (fabric_stop && *aborted)
+ return false;
+
+ cmd->transport_state |= CMD_T_STOP;
+
+ pr_debug("wait_for_tasks: Stopping %p ITT: 0x%08llx i_state: %d,"
+ " t_state: %d, CMD_T_STOP\n", cmd, cmd->tag,
+ cmd->se_tfo->get_cmd_state(cmd), cmd->t_state);
+
+ spin_unlock_irqrestore(&cmd->t_state_lock, *flags);
+
+ wait_for_completion(&cmd->t_transport_stop_comp);
+
+ spin_lock_irqsave(&cmd->t_state_lock, *flags);
+ cmd->transport_state &= ~(CMD_T_ACTIVE | CMD_T_STOP);
+
+ pr_debug("wait_for_tasks: Stopped wait_for_completion(&cmd->"
+ "t_transport_stop_comp) for ITT: 0x%08llx\n", cmd->tag);
+
+ return true;
+}
+
/**
* transport_wait_for_tasks - wait for completion to occur
* @cmd: command to wait
@@ -2627,43 +2694,13 @@
bool transport_wait_for_tasks(struct se_cmd *cmd)
{
unsigned long flags;
+ bool ret, aborted = false, tas = false;
spin_lock_irqsave(&cmd->t_state_lock, flags);
- if (!(cmd->se_cmd_flags & SCF_SE_LUN_CMD) &&
- !(cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)) {
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- return false;
- }
-
- if (!(cmd->se_cmd_flags & SCF_SUPPORTED_SAM_OPCODE) &&
- !(cmd->se_cmd_flags & SCF_SCSI_TMR_CDB)) {
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- return false;
- }
-
- if (!(cmd->transport_state & CMD_T_ACTIVE)) {
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- return false;
- }
-
- cmd->transport_state |= CMD_T_STOP;
-
- pr_debug("wait_for_tasks: Stopping %p ITT: 0x%08llx i_state: %d, t_state: %d, CMD_T_STOP\n",
- cmd, cmd->tag, cmd->se_tfo->get_cmd_state(cmd), cmd->t_state);
-
+ ret = __transport_wait_for_tasks(cmd, false, &aborted, &tas, &flags);
spin_unlock_irqrestore(&cmd->t_state_lock, flags);
- wait_for_completion(&cmd->t_transport_stop_comp);
-
- spin_lock_irqsave(&cmd->t_state_lock, flags);
- cmd->transport_state &= ~(CMD_T_ACTIVE | CMD_T_STOP);
-
- pr_debug("wait_for_tasks: Stopped wait_for_completion(&cmd->t_transport_stop_comp) for ITT: 0x%08llx\n",
- cmd->tag);
-
- spin_unlock_irqrestore(&cmd->t_state_lock, flags);
-
- return true;
+ return ret;
}
EXPORT_SYMBOL(transport_wait_for_tasks);
@@ -2845,28 +2882,49 @@
}
EXPORT_SYMBOL(transport_send_check_condition_and_sense);
-int transport_check_aborted_status(struct se_cmd *cmd, int send_status)
+static int __transport_check_aborted_status(struct se_cmd *cmd, int send_status)
+ __releases(&cmd->t_state_lock)
+ __acquires(&cmd->t_state_lock)
{
+ assert_spin_locked(&cmd->t_state_lock);
+ WARN_ON_ONCE(!irqs_disabled());
+
if (!(cmd->transport_state & CMD_T_ABORTED))
return 0;
-
/*
* If cmd has been aborted but either no status is to be sent or it has
* already been sent, just return
*/
- if (!send_status || !(cmd->se_cmd_flags & SCF_SEND_DELAYED_TAS))
+ if (!send_status || !(cmd->se_cmd_flags & SCF_SEND_DELAYED_TAS)) {
+ if (send_status)
+ cmd->se_cmd_flags |= SCF_SEND_DELAYED_TAS;
return 1;
+ }
- pr_debug("Sending delayed SAM_STAT_TASK_ABORTED status for CDB: 0x%02x ITT: 0x%08llx\n",
- cmd->t_task_cdb[0], cmd->tag);
+ pr_debug("Sending delayed SAM_STAT_TASK_ABORTED status for CDB:"
+ " 0x%02x ITT: 0x%08llx\n", cmd->t_task_cdb[0], cmd->tag);
cmd->se_cmd_flags &= ~SCF_SEND_DELAYED_TAS;
cmd->scsi_status = SAM_STAT_TASK_ABORTED;
trace_target_cmd_complete(cmd);
+
+ spin_unlock_irq(&cmd->t_state_lock);
cmd->se_tfo->queue_status(cmd);
+ spin_lock_irq(&cmd->t_state_lock);
return 1;
}
+
+int transport_check_aborted_status(struct se_cmd *cmd, int send_status)
+{
+ int ret;
+
+ spin_lock_irq(&cmd->t_state_lock);
+ ret = __transport_check_aborted_status(cmd, send_status);
+ spin_unlock_irq(&cmd->t_state_lock);
+
+ return ret;
+}
EXPORT_SYMBOL(transport_check_aborted_status);
void transport_send_task_abort(struct se_cmd *cmd)
@@ -2888,11 +2946,17 @@
*/
if (cmd->data_direction == DMA_TO_DEVICE) {
if (cmd->se_tfo->write_pending_status(cmd) != 0) {
- cmd->transport_state |= CMD_T_ABORTED;
+ spin_lock_irqsave(&cmd->t_state_lock, flags);
+ if (cmd->se_cmd_flags & SCF_SEND_DELAYED_TAS) {
+ spin_unlock_irqrestore(&cmd->t_state_lock, flags);
+ goto send_abort;
+ }
cmd->se_cmd_flags |= SCF_SEND_DELAYED_TAS;
+ spin_unlock_irqrestore(&cmd->t_state_lock, flags);
return;
}
}
+send_abort:
cmd->scsi_status = SAM_STAT_TASK_ABORTED;
transport_lun_remove_cmd(cmd);
@@ -2909,8 +2973,17 @@
struct se_cmd *cmd = container_of(work, struct se_cmd, work);
struct se_device *dev = cmd->se_dev;
struct se_tmr_req *tmr = cmd->se_tmr_req;
+ unsigned long flags;
int ret;
+ spin_lock_irqsave(&cmd->t_state_lock, flags);
+ if (cmd->transport_state & CMD_T_ABORTED) {
+ tmr->response = TMR_FUNCTION_REJECTED;
+ spin_unlock_irqrestore(&cmd->t_state_lock, flags);
+ goto check_stop;
+ }
+ spin_unlock_irqrestore(&cmd->t_state_lock, flags);
+
switch (tmr->function) {
case TMR_ABORT_TASK:
core_tmr_abort_task(dev, tmr, cmd->se_sess);
@@ -2943,9 +3016,17 @@
break;
}
+ spin_lock_irqsave(&cmd->t_state_lock, flags);
+ if (cmd->transport_state & CMD_T_ABORTED) {
+ spin_unlock_irqrestore(&cmd->t_state_lock, flags);
+ goto check_stop;
+ }
cmd->t_state = TRANSPORT_ISTATE_PROCESSING;
+ spin_unlock_irqrestore(&cmd->t_state_lock, flags);
+
cmd->se_tfo->queue_tm_rsp(cmd);
+check_stop:
transport_cmd_check_stop_to_fabric(cmd);
}
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index dd600e5..94f5154 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -903,7 +903,7 @@
info->version = __stringify(TCMU_MAILBOX_VERSION);
info->mem[0].name = "tcm-user command & data buffer";
- info->mem[0].addr = (phys_addr_t) udev->mb_addr;
+ info->mem[0].addr = (phys_addr_t)(uintptr_t)udev->mb_addr;
info->mem[0].size = TCMU_RING_SIZE;
info->mem[0].memtype = UIO_MEM_VIRTUAL;
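The double cast above silences a real warning: on a 32-bit kernel with 64-bit physical addresses, phys_addr_t is wider than a pointer, and a direct pointer-to-phys_addr_t cast triggers "cast from pointer to integer of different size". Going through uintptr_t, the pointer-sized integer type, makes the widening a separate well-defined step. A standalone illustration (the phys_addr_t typedef here stands in for the kernel type):

	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t phys_addr_t;	/* may be wider than a pointer */

	int main(void)
	{
		static char ring[64];
		/* pointer -> uintptr_t (exact fit) -> wider integer,
		 * instead of one mismatched-width cast. */
		phys_addr_t addr = (phys_addr_t)(uintptr_t)&ring[0];

		printf("0x%llx\n", (unsigned long long)addr);
		return 0;
	}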
diff --git a/drivers/thermal/Kconfig b/drivers/thermal/Kconfig
index 8cc4ac6..7c92c09 100644
--- a/drivers/thermal/Kconfig
+++ b/drivers/thermal/Kconfig
@@ -195,7 +195,7 @@
passive trip is crossed.
config SPEAR_THERMAL
- bool "SPEAr thermal sensor driver"
+ tristate "SPEAr thermal sensor driver"
depends on PLAT_SPEAR || COMPILE_TEST
depends on OF
help
@@ -237,8 +237,8 @@
framework.
config DB8500_THERMAL
- bool "DB8500 thermal management"
- depends on ARCH_U8500
+ tristate "DB8500 thermal management"
+ depends on MFD_DB8500_PRCMU
default y
help
Adds DB8500 thermal management implementation according to the thermal
diff --git a/drivers/thermal/cpu_cooling.c b/drivers/thermal/cpu_cooling.c
index e3fbc5a..6ceac4f 100644
--- a/drivers/thermal/cpu_cooling.c
+++ b/drivers/thermal/cpu_cooling.c
@@ -377,26 +377,28 @@
* get_load() - get load for a cpu since last updated
* @cpufreq_device: &struct cpufreq_cooling_device for this cpu
* @cpu: cpu number
+ * @cpu_idx: index of the cpu in cpufreq_device->allowed_cpus
*
* Return: The average load of cpu @cpu in percentage since this
* function was last called.
*/
-static u32 get_load(struct cpufreq_cooling_device *cpufreq_device, int cpu)
+static u32 get_load(struct cpufreq_cooling_device *cpufreq_device, int cpu,
+ int cpu_idx)
{
u32 load;
u64 now, now_idle, delta_time, delta_idle;
now_idle = get_cpu_idle_time(cpu, &now, 0);
- delta_idle = now_idle - cpufreq_device->time_in_idle[cpu];
- delta_time = now - cpufreq_device->time_in_idle_timestamp[cpu];
+ delta_idle = now_idle - cpufreq_device->time_in_idle[cpu_idx];
+ delta_time = now - cpufreq_device->time_in_idle_timestamp[cpu_idx];
if (delta_time <= delta_idle)
load = 0;
else
load = div64_u64(100 * (delta_time - delta_idle), delta_time);
- cpufreq_device->time_in_idle[cpu] = now_idle;
- cpufreq_device->time_in_idle_timestamp[cpu] = now;
+ cpufreq_device->time_in_idle[cpu_idx] = now_idle;
+ cpufreq_device->time_in_idle_timestamp[cpu_idx] = now;
return load;
}
@@ -598,7 +600,7 @@
u32 load;
if (cpu_online(cpu))
- load = get_load(cpufreq_device, cpu);
+ load = get_load(cpufreq_device, cpu, i);
else
load = 0;
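The get_load() change above fixes an indexing bug: time_in_idle[] and time_in_idle_timestamp[] are sized to the cooling device's cpumask, one slot per tracked CPU, so they must be indexed by the CPU's position in that mask (cpu_idx), not by its global number, which can exceed the allocation on sparse masks. A standalone sketch of the distinction:

	#include <stdio.h>

	/* One compact slot per *tracked* CPU; global ids may be sparse. */
	static unsigned long long time_in_idle[2];

	int main(void)
	{
		int tracked_cpus[2] = { 4, 7 };	/* ids 4 and 7, slots 0 and 1 */
		int i;

		for (i = 0; i < 2; i++)
			time_in_idle[i] = 1000ULL * tracked_cpus[i];
		/* Indexing by the id (time_in_idle[7]) would run off the end. */
		printf("%llu %llu\n", time_in_idle[0], time_in_idle[1]);
		return 0;
	}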
diff --git a/drivers/thermal/of-thermal.c b/drivers/thermal/of-thermal.c
index be4eedc..9043f8f 100644
--- a/drivers/thermal/of-thermal.c
+++ b/drivers/thermal/of-thermal.c
@@ -475,14 +475,10 @@
sensor_np = of_node_get(dev->of_node);
- for_each_child_of_node(np, child) {
+ for_each_available_child_of_node(np, child) {
struct of_phandle_args sensor_specs;
int ret, id;
- /* Check whether child is enabled or not */
- if (!of_device_is_available(child))
- continue;
-
/* For now, thermal framework supports only 1 sensor per zone */
ret = of_parse_phandle_with_args(child, "thermal-sensors",
"#thermal-sensor-cells",
@@ -881,16 +877,12 @@
return 0; /* Run successfully on systems without thermal DT */
}
- for_each_child_of_node(np, child) {
+ for_each_available_child_of_node(np, child) {
struct thermal_zone_device *zone;
struct thermal_zone_params *tzp;
int i, mask = 0;
u32 prop;
- /* Check whether child is enabled or not */
- if (!of_device_is_available(child))
- continue;
-
tz = thermal_of_build_thermal_zone(child);
if (IS_ERR(tz)) {
pr_err("failed to build thermal zone %s: %ld\n",
@@ -968,13 +960,9 @@
return;
}
- for_each_child_of_node(np, child) {
+ for_each_available_child_of_node(np, child) {
struct thermal_zone_device *zone;
- /* Check whether child is enabled or not */
- if (!of_device_is_available(child))
- continue;
-
zone = thermal_zone_get_zone_by_name(child->name);
if (IS_ERR(zone))
continue;
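All three hunks in of-thermal.c above make the same substitution: for_each_available_child_of_node() already skips children whose status property is not "okay", so the open-coded of_device_is_available() check was redundant. A sketch of the resulting shape (illustrative function, not the driver's):

	#include <linux/of.h>
	#include <linux/printk.h>

	static void example_walk_zones(struct device_node *np)
	{
		struct device_node *child;

		/* Only children with status = "okay" (or none) appear. */
		for_each_available_child_of_node(np, child)
			pr_info("thermal zone candidate: %s\n", child->name);
	}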
diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
index 44b9c48..0e735ac 100644
--- a/drivers/thermal/rcar_thermal.c
+++ b/drivers/thermal/rcar_thermal.c
@@ -23,6 +23,7 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/module.h>
+#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/reboot.h>
@@ -75,8 +76,10 @@
#define rcar_has_irq_support(priv) ((priv)->common->base)
#define rcar_id_to_shift(priv) ((priv)->id * 8)
+#define USE_OF_THERMAL 1
static const struct of_device_id rcar_thermal_dt_ids[] = {
{ .compatible = "renesas,rcar-thermal", },
+ { .compatible = "renesas,rcar-gen2-thermal", .data = (void *)USE_OF_THERMAL },
{},
};
MODULE_DEVICE_TABLE(of, rcar_thermal_dt_ids);
@@ -200,9 +203,9 @@
return ret;
}
-static int rcar_thermal_get_temp(struct thermal_zone_device *zone, int *temp)
+static int rcar_thermal_get_current_temp(struct rcar_thermal_priv *priv,
+ int *temp)
{
- struct rcar_thermal_priv *priv = rcar_zone_to_priv(zone);
int tmp;
int ret;
@@ -226,6 +229,20 @@
return 0;
}
+static int rcar_thermal_of_get_temp(void *data, int *temp)
+{
+ struct rcar_thermal_priv *priv = data;
+
+ return rcar_thermal_get_current_temp(priv, temp);
+}
+
+static int rcar_thermal_get_temp(struct thermal_zone_device *zone, int *temp)
+{
+ struct rcar_thermal_priv *priv = rcar_zone_to_priv(zone);
+
+ return rcar_thermal_get_current_temp(priv, temp);
+}
+
static int rcar_thermal_get_trip_type(struct thermal_zone_device *zone,
int trip, enum thermal_trip_type *type)
{
@@ -282,6 +299,10 @@
return 0;
}
+static const struct thermal_zone_of_device_ops rcar_thermal_zone_of_ops = {
+ .get_temp = rcar_thermal_of_get_temp,
+};
+
static struct thermal_zone_device_ops rcar_thermal_zone_ops = {
.get_temp = rcar_thermal_get_temp,
.get_trip_type = rcar_thermal_get_trip_type,
@@ -318,14 +339,20 @@
priv = container_of(work, struct rcar_thermal_priv, work.work);
- rcar_thermal_get_temp(priv->zone, &cctemp);
+ ret = rcar_thermal_get_current_temp(priv, &cctemp);
+ if (ret < 0)
+ return;
+
ret = rcar_thermal_update_temp(priv);
if (ret < 0)
return;
rcar_thermal_irq_enable(priv);
- rcar_thermal_get_temp(priv->zone, &nctemp);
+ ret = rcar_thermal_get_current_temp(priv, &nctemp);
+ if (ret < 0)
+ return;
+
if (nctemp != cctemp)
thermal_zone_device_update(priv->zone);
}
@@ -403,6 +430,8 @@
struct rcar_thermal_priv *priv;
struct device *dev = &pdev->dev;
struct resource *res, *irq;
+ const struct of_device_id *of_id = of_match_device(rcar_thermal_dt_ids, dev);
+ unsigned long of_data = (unsigned long)of_id->data;
int mres = 0;
int i;
int ret = -ENODEV;
@@ -463,7 +492,13 @@
if (ret < 0)
goto error_unregister;
- priv->zone = thermal_zone_device_register("rcar_thermal",
+ if (of_data == USE_OF_THERMAL)
+ priv->zone = thermal_zone_of_sensor_register(
+ dev, i, priv,
+ &rcar_thermal_zone_of_ops);
+ else
+ priv->zone = thermal_zone_device_register(
+ "rcar_thermal",
1, 0, priv,
&rcar_thermal_zone_ops, NULL, 0,
idle);
diff --git a/drivers/thermal/spear_thermal.c b/drivers/thermal/spear_thermal.c
index 534dd91..81b35aa 100644
--- a/drivers/thermal/spear_thermal.c
+++ b/drivers/thermal/spear_thermal.c
@@ -54,8 +54,7 @@
.get_temp = thermal_get_temp,
};
-#ifdef CONFIG_PM
-static int spear_thermal_suspend(struct device *dev)
+static int __maybe_unused spear_thermal_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct thermal_zone_device *spear_thermal = platform_get_drvdata(pdev);
@@ -72,7 +71,7 @@
return 0;
}
-static int spear_thermal_resume(struct device *dev)
+static int __maybe_unused spear_thermal_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct thermal_zone_device *spear_thermal = platform_get_drvdata(pdev);
@@ -94,7 +93,6 @@
return 0;
}
-#endif
static SIMPLE_DEV_PM_OPS(spear_thermal_pm_ops, spear_thermal_suspend,
spear_thermal_resume);
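The spear_thermal conversion above shows the preferred alternative to #ifdef CONFIG_PM around suspend/resume handlers: SIMPLE_DEV_PM_OPS only references the callbacks when CONFIG_PM_SLEEP is set, so #ifdef guards either bit-rot or produce "defined but not used" warnings, whereas __maybe_unused keeps the functions compile-tested in every configuration and lets the compiler discard them when unreferenced. The generic pattern, sketched with placeholder names:

	#include <linux/pm.h>

	static int __maybe_unused example_suspend(struct device *dev)
	{
		return 0;	/* quiesce the hardware here */
	}

	static int __maybe_unused example_resume(struct device *dev)
	{
		return 0;	/* bring the hardware back here */
	}

	static SIMPLE_DEV_PM_OPS(example_pm_ops, example_suspend, example_resume);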
diff --git a/drivers/tty/pty.c b/drivers/tty/pty.c
index b311004..2348fa6 100644
--- a/drivers/tty/pty.c
+++ b/drivers/tty/pty.c
@@ -681,7 +681,14 @@
/* this is called once with whichever end is closed last */
static void pty_unix98_shutdown(struct tty_struct *tty)
{
- devpts_kill_index(tty->driver_data, tty->index);
+ struct inode *ptmx_inode;
+
+ if (tty->driver->subtype == PTY_TYPE_MASTER)
+ ptmx_inode = tty->driver_data;
+ else
+ ptmx_inode = tty->link->driver_data;
+ devpts_kill_index(ptmx_inode, tty->index);
+ devpts_del_ref(ptmx_inode);
}
static const struct tty_operations ptm_unix98_ops = {
@@ -773,6 +780,18 @@
set_bit(TTY_PTY_LOCK, &tty->flags); /* LOCK THE SLAVE */
tty->driver_data = inode;
+ /*
+ * In the case where all references to ptmx inode are dropped and we
+ * still have /dev/tty opened pointing to the master/slave pair (ptmx
+ * is closed/released before /dev/tty), we must make sure that the inode
+ * is still valid when we call the final pty_unix98_shutdown, thus we
+ * hold an additional reference to the ptmx inode. For the same /dev/tty
+ * last close case, we also need to make sure the super_block isn't
+ * destroyed (devpts instance unmounted), before /dev/tty is closed and
+ * on its release devpts_kill_index is called.
+ */
+ devpts_add_ref(inode);
+
tty_add_file(tty, filp);
slave_inode = devpts_pty_new(inode,
diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
index e71ec78..7cd6f9a 100644
--- a/drivers/tty/serial/8250/8250_pci.c
+++ b/drivers/tty/serial/8250/8250_pci.c
@@ -1941,6 +1941,7 @@
#define PCIE_VENDOR_ID_WCH 0x1c00
#define PCIE_DEVICE_ID_WCH_CH382_2S1P 0x3250
#define PCIE_DEVICE_ID_WCH_CH384_4S 0x3470
+#define PCIE_DEVICE_ID_WCH_CH382_2S 0x3253
#define PCI_VENDOR_ID_PERICOM 0x12D8
#define PCI_DEVICE_ID_PERICOM_PI7C9X7951 0x7951
@@ -2637,6 +2638,14 @@
.subdevice = PCI_ANY_ID,
.setup = pci_wch_ch353_setup,
},
+ /* WCH CH382 2S card (16850 clone) */
+ {
+ .vendor = PCIE_VENDOR_ID_WCH,
+ .device = PCIE_DEVICE_ID_WCH_CH382_2S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch38x_setup,
+ },
/* WCH CH382 2S1P card (16850 clone) */
{
.vendor = PCIE_VENDOR_ID_WCH,
@@ -2955,6 +2964,7 @@
pbn_fintek_4,
pbn_fintek_8,
pbn_fintek_12,
+ pbn_wch382_2,
pbn_wch384_4,
pbn_pericom_PI7C9X7951,
pbn_pericom_PI7C9X7952,
@@ -3775,6 +3785,13 @@
.base_baud = 115200,
.first_offset = 0x40,
},
+ [pbn_wch382_2] = {
+ .flags = FL_BASE0,
+ .num_ports = 2,
+ .base_baud = 115200,
+ .uart_offset = 8,
+ .first_offset = 0xC0,
+ },
[pbn_wch384_4] = {
.flags = FL_BASE0,
.num_ports = 4,
@@ -5574,6 +5591,10 @@
PCI_ANY_ID, PCI_ANY_ID,
0, 0, pbn_b0_bt_2_115200 },
+ { PCIE_VENDOR_ID_WCH, PCIE_DEVICE_ID_WCH_CH382_2S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_wch382_2 },
+
{ PCIE_VENDOR_ID_WCH, PCIE_DEVICE_ID_WCH_CH384_4S,
PCI_ANY_ID, PCI_ANY_ID,
0, 0, pbn_wch384_4 },
diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
index b645f92..fa49eb1 100644
--- a/drivers/tty/serial/omap-serial.c
+++ b/drivers/tty/serial/omap-serial.c
@@ -1165,7 +1165,7 @@
#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
-static void wait_for_xmitr(struct uart_omap_port *up)
+static void __maybe_unused wait_for_xmitr(struct uart_omap_port *up)
{
unsigned int status, tmout = 10000;
@@ -1343,7 +1343,7 @@
/* Enable or disable the rs485 support */
static int
-serial_omap_config_rs485(struct uart_port *port, struct serial_rs485 *rs485conf)
+serial_omap_config_rs485(struct uart_port *port, struct serial_rs485 *rs485)
{
struct uart_omap_port *up = to_uart_omap_port(port);
unsigned int mode;
@@ -1356,8 +1356,12 @@
up->ier = 0;
serial_out(up, UART_IER, 0);
+ /* Clamp the delays to [0, 100ms] */
+ rs485->delay_rts_before_send = min(rs485->delay_rts_before_send, 100U);
+ rs485->delay_rts_after_send = min(rs485->delay_rts_after_send, 100U);
+
/* store new config */
- port->rs485 = *rs485conf;
+ port->rs485 = *rs485;
/*
* Just as a precaution, only allow rs485
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 5cec01c..a7eacef 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -2066,13 +2066,12 @@
if (tty) {
mutex_unlock(&tty_mutex);
retval = tty_lock_interruptible(tty);
+ tty_kref_put(tty); /* drop kref from tty_driver_lookup_tty() */
if (retval) {
if (retval == -EINTR)
retval = -ERESTARTSYS;
goto err_unref;
}
- /* safe to drop the kref from tty_driver_lookup_tty() */
- tty_kref_put(tty);
retval = tty_reopen(tty);
if (retval < 0) {
tty_unlock(tty);
diff --git a/drivers/tty/tty_mutex.c b/drivers/tty/tty_mutex.c
index d2f3c4c..dfa9ec0 100644
--- a/drivers/tty/tty_mutex.c
+++ b/drivers/tty/tty_mutex.c
@@ -21,10 +21,15 @@
int tty_lock_interruptible(struct tty_struct *tty)
{
+ int ret;
+
if (WARN(tty->magic != TTY_MAGIC, "L Bad %p\n", tty))
return -EIO;
tty_kref_get(tty);
- return mutex_lock_interruptible(&tty->legacy_mutex);
+ ret = mutex_lock_interruptible(&tty->legacy_mutex);
+ if (ret)
+ tty_kref_put(tty);
+ return ret;
}
void __lockfunc tty_unlock(struct tty_struct *tty)
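
The tty_mutex change pairs with the tty_io.c hunk above: tty_lock_interruptible() now pins the tty with its own reference and drops that reference if the lock attempt fails, which is what lets the open path put the lookup reference unconditionally right after the call. A minimal userspace analogue of the pattern, as an illustrative sketch with pthreads and C11 atomics rather than kernel code:

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct obj {
	atomic_int refcount;
	pthread_mutex_t lock;
};

static void obj_get(struct obj *o)
{
	atomic_fetch_add(&o->refcount, 1);
}

static void obj_put(struct obj *o)
{
	/* the last reference frees the object */
	if (atomic_fetch_sub(&o->refcount, 1) == 1) {
		pthread_mutex_destroy(&o->lock);
		free(o);
	}
}

static int obj_lock(struct obj *o)
{
	int ret;

	obj_get(o);			/* pin the object across the lock */
	ret = pthread_mutex_trylock(&o->lock);
	if (ret)
		obj_put(o);		/* lock failed: drop our pin again */
	return ret;
}

static void obj_unlock(struct obj *o)
{
	pthread_mutex_unlock(&o->lock);
	obj_put(o);			/* lock released: drop the pin */
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	atomic_init(&o->refcount, 1);	/* the "lookup" reference */
	pthread_mutex_init(&o->lock, NULL);
	if (obj_lock(o) == 0) {
		obj_put(o);	/* safe even while locked: the lock holds its own ref */
		obj_unlock(o);	/* final put and free happen here */
	}
	return 0;
}
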
diff --git a/drivers/usb/chipidea/ci_hdrc_pci.c b/drivers/usb/chipidea/ci_hdrc_pci.c
index b59195e..b635ab6 100644
--- a/drivers/usb/chipidea/ci_hdrc_pci.c
+++ b/drivers/usb/chipidea/ci_hdrc_pci.c
@@ -85,8 +85,8 @@
/* register a nop PHY */
ci->phy = usb_phy_generic_register();
- if (!ci->phy)
- return -ENOMEM;
+ if (IS_ERR(ci->phy))
+ return PTR_ERR(ci->phy);
memset(res, 0, sizeof(res));
res[0].start = pci_resource_start(pdev, 0);
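
The fix above works because usb_phy_generic_register() reports failure through an ERR_PTR-encoded pointer rather than NULL, so the old !ci->phy test could never observe an error. A userspace sketch of the convention, with the macros re-implemented for illustration (the real ones live in include/linux/err.h):

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	/* errno values occupy the topmost 4095 addresses */
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static void *fake_register(void)
{
	return ERR_PTR(-ENOMEM);	/* simulate a failed registration */
}

int main(void)
{
	void *phy = fake_register();

	if (!phy)
		puts("NULL check: never reached");
	if (IS_ERR(phy))
		printf("IS_ERR check: error %ld\n", PTR_ERR(phy));
	return 0;
}
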
diff --git a/drivers/usb/chipidea/debug.c b/drivers/usb/chipidea/debug.c
index a4f7db2..df47110 100644
--- a/drivers/usb/chipidea/debug.c
+++ b/drivers/usb/chipidea/debug.c
@@ -100,6 +100,9 @@
if (sscanf(buf, "%u", &mode) != 1)
return -EINVAL;
+ if (mode > 255)
+ return -EBADRQC;
+
pm_runtime_get_sync(ci->dev);
spin_lock_irqsave(&ci->lock, flags);
ret = hw_port_test_set(ci, mode);
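
The added bounds check keeps out-of-range values from ever reaching the port-test hardware. A sketch of the same parse-validate-use flow, where set_port_test() is a hypothetical stand-in for hw_port_test_set():

#include <stdio.h>

/* hypothetical stand-in for the driver's hw_port_test_set() */
static int set_port_test(unsigned int mode)
{
	printf("programming test mode %u\n", mode);
	return 0;
}

static int store_port_test(const char *buf)
{
	unsigned int mode;

	if (sscanf(buf, "%u", &mode) != 1)
		return -1;	/* the driver returns -EINVAL here */
	if (mode > 255)
		return -1;	/* the driver returns -EBADRQC: the field is 8 bits */
	return set_port_test(mode);
}

int main(void)
{
	return store_port_test("300");	/* rejected before touching hardware */
}
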
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 350dcd9..51b43691 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -5401,6 +5401,7 @@
}
bos = udev->bos;
+ udev->bos = NULL;
for (i = 0; i < SET_CONFIG_TRIES; ++i) {
@@ -5493,11 +5494,8 @@
usb_set_usb2_hardware_lpm(udev, 1);
usb_unlocked_enable_lpm(udev);
usb_enable_ltm(udev);
- /* release the new BOS descriptor allocated by hub_port_init() */
- if (udev->bos != bos) {
- usb_release_bos_descriptor(udev);
- udev->bos = bos;
- }
+ usb_release_bos_descriptor(udev);
+ udev->bos = bos;
return 0;
re_enumerate:
diff --git a/drivers/usb/dwc2/Kconfig b/drivers/usb/dwc2/Kconfig
index fd95ba6..f0decc0 100644
--- a/drivers/usb/dwc2/Kconfig
+++ b/drivers/usb/dwc2/Kconfig
@@ -1,5 +1,6 @@
config USB_DWC2
tristate "DesignWare USB2 DRD Core Support"
+ depends on HAS_DMA
depends on USB || USB_GADGET
help
Say Y here if your system has a Dual Role Hi-Speed USB
diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
index e991d55..46c4ba7 100644
--- a/drivers/usb/dwc2/core.c
+++ b/drivers/usb/dwc2/core.c
@@ -619,6 +619,12 @@
__func__, hsotg->dr_mode);
break;
}
+
+ /*
+	 * NOTE: This is required for some Rockchip SoC based
+	 * platforms.
+ */
+ msleep(50);
}
/*
diff --git a/drivers/usb/dwc2/hcd_ddma.c b/drivers/usb/dwc2/hcd_ddma.c
index 36606fc..a41274a 100644
--- a/drivers/usb/dwc2/hcd_ddma.c
+++ b/drivers/usb/dwc2/hcd_ddma.c
@@ -1174,14 +1174,11 @@
failed = dwc2_update_non_isoc_urb_state_ddma(hsotg, chan, qtd, dma_desc,
halt_status, n_bytes,
xfer_done);
- if (*xfer_done && urb->status != -EINPROGRESS)
- failed = 1;
-
- if (failed) {
+ if (failed || (*xfer_done && urb->status != -EINPROGRESS)) {
dwc2_host_complete(hsotg, qtd, urb->status);
dwc2_hcd_qtd_unlink_and_free(hsotg, qtd, qh);
- dev_vdbg(hsotg->dev, "failed=%1x xfer_done=%1x status=%08x\n",
- failed, *xfer_done, urb->status);
+ dev_vdbg(hsotg->dev, "failed=%1x xfer_done=%1x\n",
+ failed, *xfer_done);
return failed;
}
@@ -1236,21 +1233,23 @@
list_for_each_safe(qtd_item, qtd_tmp, &qh->qtd_list) {
int i;
+ int qtd_desc_count;
qtd = list_entry(qtd_item, struct dwc2_qtd, qtd_list_entry);
xfer_done = 0;
+ qtd_desc_count = qtd->n_desc;
- for (i = 0; i < qtd->n_desc; i++) {
+ for (i = 0; i < qtd_desc_count; i++) {
if (dwc2_process_non_isoc_desc(hsotg, chan, chnum, qtd,
desc_num, halt_status,
- &xfer_done)) {
- qtd = NULL;
- break;
- }
+ &xfer_done))
+ goto stop_scan;
+
desc_num++;
}
}
+stop_scan:
if (qh->ep_type != USB_ENDPOINT_XFER_CONTROL) {
/*
* Resetting the data toggle for bulk and interrupt endpoints
@@ -1258,7 +1257,7 @@
*/
if (halt_status == DWC2_HC_XFER_STALL)
qh->data_toggle = DWC2_HC_PID_DATA0;
- else if (qtd)
+ else
dwc2_hcd_save_data_toggle(hsotg, chan, chnum, qtd);
}
diff --git a/drivers/usb/dwc2/hcd_intr.c b/drivers/usb/dwc2/hcd_intr.c
index f825380..cadba8b 100644
--- a/drivers/usb/dwc2/hcd_intr.c
+++ b/drivers/usb/dwc2/hcd_intr.c
@@ -525,11 +525,19 @@
u32 pid = (hctsiz & TSIZ_SC_MC_PID_MASK) >> TSIZ_SC_MC_PID_SHIFT;
if (chan->ep_type != USB_ENDPOINT_XFER_CONTROL) {
+ if (WARN(!chan || !chan->qh,
+ "chan->qh must be specified for non-control eps\n"))
+ return;
+
if (pid == TSIZ_SC_MC_PID_DATA0)
chan->qh->data_toggle = DWC2_HC_PID_DATA0;
else
chan->qh->data_toggle = DWC2_HC_PID_DATA1;
} else {
+ if (WARN(!qtd,
+ "qtd must be specified for control eps\n"))
+ return;
+
if (pid == TSIZ_SC_MC_PID_DATA0)
qtd->data_toggle = DWC2_HC_PID_DATA0;
else
diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
index 2913068..e4f8b90 100644
--- a/drivers/usb/dwc3/core.h
+++ b/drivers/usb/dwc3/core.h
@@ -856,7 +856,6 @@
unsigned pullups_connected:1;
unsigned resize_fifos:1;
unsigned setup_packet_pending:1;
- unsigned start_config_issued:1;
unsigned three_stage_setup:1;
unsigned usb3_lpm_capable:1;
diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
index 3a9354a..8d6b75c 100644
--- a/drivers/usb/dwc3/ep0.c
+++ b/drivers/usb/dwc3/ep0.c
@@ -555,7 +555,6 @@
int ret;
u32 reg;
- dwc->start_config_issued = false;
cfg = le16_to_cpu(ctrl->wValue);
switch (state) {
@@ -737,10 +736,6 @@
dwc3_trace(trace_dwc3_ep0, "USB_REQ_SET_ISOCH_DELAY");
ret = dwc3_ep0_set_isoch_delay(dwc, ctrl);
break;
- case USB_REQ_SET_INTERFACE:
- dwc3_trace(trace_dwc3_ep0, "USB_REQ_SET_INTERFACE");
- dwc->start_config_issued = false;
- /* Fall through */
default:
dwc3_trace(trace_dwc3_ep0, "Forwarding to gadget driver");
ret = dwc3_ep0_delegate_req(dwc, ctrl);
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 7d1dd82..2363bad 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -385,24 +385,66 @@
dep->trb_pool_dma = 0;
}
+static int dwc3_gadget_set_xfer_resource(struct dwc3 *dwc, struct dwc3_ep *dep);
+
+/**
+ * dwc3_gadget_start_config - Configure EP resources
+ * @dwc: pointer to our controller context structure
+ * @dep: endpoint that is being enabled
+ *
+ * The assignment of transfer resources cannot perfectly follow the
+ * data book due to the fact that the controller driver does not have
+ * all knowledge of the configuration in advance. It is given this
+ * information piecemeal by the composite gadget framework after every
+ * SET_CONFIGURATION and SET_INTERFACE. Trying to follow the databook
+ * programming model in this scenario can cause errors. For two
+ * reasons:
+ *
+ * 1) The databook says to do DEPSTARTCFG for every SET_CONFIGURATION
+ * and SET_INTERFACE (8.1.5). This is incorrect in the scenario of
+ * multiple interfaces.
+ *
+ * 2) The databook does not mention doing more DEPXFERCFG for new
+ * endpoint on alt setting (8.1.6).
+ *
+ * The following simplified method is used instead:
+ *
+ * All hardware endpoints can be assigned a transfer resource and this
+ * setting will stay persistent until either a core reset or
+ * hibernation. So whenever we do a DEPSTARTCFG(0) we can go ahead and
+ * do DEPXFERCFG for every hardware endpoint as well. We are
+ * guaranteed that there are as many transfer resources as endpoints.
+ *
+ * This function is called for each endpoint when it is being enabled
+ * but is triggered only when called for EP0-out, which always happens
+ * first, and which should only happen in one of the above conditions.
+ */
static int dwc3_gadget_start_config(struct dwc3 *dwc, struct dwc3_ep *dep)
{
struct dwc3_gadget_ep_cmd_params params;
u32 cmd;
+ int i;
+ int ret;
+
+ if (dep->number)
+ return 0;
memset(¶ms, 0x00, sizeof(params));
+ cmd = DWC3_DEPCMD_DEPSTARTCFG;
- if (dep->number != 1) {
- cmd = DWC3_DEPCMD_DEPSTARTCFG;
- /* XferRscIdx == 0 for ep0 and 2 for the remaining */
- if (dep->number > 1) {
- if (dwc->start_config_issued)
- return 0;
- dwc->start_config_issued = true;
- cmd |= DWC3_DEPCMD_PARAM(2);
- }
+ ret = dwc3_send_gadget_ep_cmd(dwc, 0, cmd, ¶ms);
+ if (ret)
+ return ret;
- return dwc3_send_gadget_ep_cmd(dwc, 0, cmd, ¶ms);
+ for (i = 0; i < DWC3_ENDPOINTS_NUM; i++) {
+ struct dwc3_ep *dep = dwc->eps[i];
+
+ if (!dep)
+ continue;
+
+ ret = dwc3_gadget_set_xfer_resource(dwc, dep);
+ if (ret)
+ return ret;
}
return 0;
@@ -516,10 +558,6 @@
struct dwc3_trb *trb_st_hw;
struct dwc3_trb *trb_link;
- ret = dwc3_gadget_set_xfer_resource(dwc, dep);
- if (ret)
- return ret;
-
dep->endpoint.desc = desc;
dep->comp_desc = comp_desc;
dep->type = usb_endpoint_type(desc);
@@ -1636,8 +1674,6 @@
}
dwc3_writel(dwc->regs, DWC3_DCFG, reg);
- dwc->start_config_issued = false;
-
/* Start with SuperSpeed Default */
dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);
@@ -2237,7 +2273,6 @@
dwc3_writel(dwc->regs, DWC3_DCTL, reg);
dwc3_disconnect_gadget(dwc);
- dwc->start_config_issued = false;
dwc->gadget.speed = USB_SPEED_UNKNOWN;
dwc->setup_packet_pending = false;
@@ -2288,7 +2323,6 @@
dwc3_stop_active_transfers(dwc);
dwc3_clear_stall_all_ep(dwc);
- dwc->start_config_issued = false;
/* Reset device address to zero */
reg = dwc3_readl(dwc->regs, DWC3_DCFG);
diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
index 7e179f8..87fb0fd 100644
--- a/drivers/usb/gadget/legacy/inode.c
+++ b/drivers/usb/gadget/legacy/inode.c
@@ -130,7 +130,8 @@
setup_can_stall : 1,
setup_out_ready : 1,
setup_out_error : 1,
- setup_abort : 1;
+ setup_abort : 1,
+ gadget_registered : 1;
unsigned setup_wLength;
/* the rest is basically write-once */
@@ -1179,7 +1180,8 @@
/* closing ep0 === shutdown all */
- usb_gadget_unregister_driver (&gadgetfs_driver);
+ if (dev->gadget_registered)
+ usb_gadget_unregister_driver (&gadgetfs_driver);
/* at this point "good" hardware has disconnected the
* device from USB; the host won't see it any more.
@@ -1847,6 +1849,7 @@
* kick in after the ep0 descriptor is closed.
*/
value = len;
+ dev->gadget_registered = true;
}
return value;
diff --git a/drivers/usb/gadget/udc/fsl_qe_udc.c b/drivers/usb/gadget/udc/fsl_qe_udc.c
index 53c0692..93d28cb 100644
--- a/drivers/usb/gadget/udc/fsl_qe_udc.c
+++ b/drivers/usb/gadget/udc/fsl_qe_udc.c
@@ -2340,7 +2340,7 @@
{
struct qe_udc *udc;
struct device_node *np = ofdev->dev.of_node;
- unsigned int tmp_addr = 0;
+ unsigned long tmp_addr = 0;
struct usb_device_para __iomem *usbpram;
unsigned int i;
u64 size;
diff --git a/drivers/usb/gadget/udc/net2280.h b/drivers/usb/gadget/udc/net2280.h
index 4dff60d..0d32052 100644
--- a/drivers/usb/gadget/udc/net2280.h
+++ b/drivers/usb/gadget/udc/net2280.h
@@ -369,9 +369,20 @@
static const u32 ep_enhanced[9] = { 0x10, 0x60, 0x30, 0x80,
0x50, 0x20, 0x70, 0x40, 0x90 };
- if (ep->dev->enhanced_mode)
+ if (ep->dev->enhanced_mode) {
reg = ep_enhanced[ep->num];
- else{
+ switch (ep->dev->gadget.speed) {
+ case USB_SPEED_SUPER:
+ reg += 2;
+ break;
+ case USB_SPEED_FULL:
+ reg += 1;
+ break;
+ case USB_SPEED_HIGH:
+ default:
+ break;
+ }
+ } else {
reg = (ep->num + 1) * 0x10;
if (ep->dev->gadget.speed != USB_SPEED_HIGH)
reg += 1;
diff --git a/drivers/usb/gadget/udc/udc-core.c b/drivers/usb/gadget/udc/udc-core.c
index fd73a3ea..b86a6f0 100644
--- a/drivers/usb/gadget/udc/udc-core.c
+++ b/drivers/usb/gadget/udc/udc-core.c
@@ -413,9 +413,10 @@
if (!driver->udc_name || strcmp(driver->udc_name,
dev_name(&udc->dev)) == 0) {
ret = udc_bind_to_driver(udc, driver);
+ if (ret != -EPROBE_DEFER)
+ list_del(&driver->pending);
if (ret)
goto err4;
- list_del(&driver->pending);
break;
}
}
diff --git a/drivers/usb/musb/musb_host.c b/drivers/usb/musb/musb_host.c
index 795a45b..58487a4 100644
--- a/drivers/usb/musb/musb_host.c
+++ b/drivers/usb/musb/musb_host.c
@@ -662,7 +662,7 @@
csr &= ~(MUSB_TXCSR_AUTOSET | MUSB_TXCSR_DMAMODE);
csr |= MUSB_TXCSR_DMAENAB; /* against programmer's guide */
}
- channel->desired_mode = mode;
+ channel->desired_mode = *mode;
musb_writew(epio, MUSB_TXCSR, csr);
return 0;
@@ -2003,10 +2003,8 @@
qh->offset,
urb->transfer_buffer_length);
- done = musb_rx_dma_in_inventra_cppi41(c, hw_ep, qh,
- urb, xfer_len,
- iso_err);
- if (done)
+ if (musb_rx_dma_in_inventra_cppi41(c, hw_ep, qh, urb,
+ xfer_len, iso_err))
goto finish;
else
dev_err(musb->controller, "error: rx_dma failed\n");
diff --git a/drivers/usb/phy/phy-msm-usb.c b/drivers/usb/phy/phy-msm-usb.c
index 970a30e..72b387d 100644
--- a/drivers/usb/phy/phy-msm-usb.c
+++ b/drivers/usb/phy/phy-msm-usb.c
@@ -757,14 +757,8 @@
otg->host = host;
dev_dbg(otg->usb_phy->dev, "host driver registered w/ tranceiver\n");
- /*
- * Kick the state machine work, if peripheral is not supported
- * or peripheral is already registered with us.
- */
- if (motg->pdata->mode == USB_DR_MODE_HOST || otg->gadget) {
- pm_runtime_get_sync(otg->usb_phy->dev);
- schedule_work(&motg->sm_work);
- }
+ pm_runtime_get_sync(otg->usb_phy->dev);
+ schedule_work(&motg->sm_work);
return 0;
}
@@ -827,14 +821,8 @@
dev_dbg(otg->usb_phy->dev,
"peripheral driver registered w/ tranceiver\n");
- /*
- * Kick the state machine work, if host is not supported
- * or host is already registered with us.
- */
- if (motg->pdata->mode == USB_DR_MODE_PERIPHERAL || otg->host) {
- pm_runtime_get_sync(otg->usb_phy->dev);
- schedule_work(&motg->sm_work);
- }
+ pm_runtime_get_sync(otg->usb_phy->dev);
+ schedule_work(&motg->sm_work);
return 0;
}
diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
index 987813b..7c319e7 100644
--- a/drivers/usb/serial/cp210x.c
+++ b/drivers/usb/serial/cp210x.c
@@ -163,6 +163,8 @@
{ USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
{ USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
{ USB_DEVICE(0x18EF, 0xE025) }, /* ELV Marble Sound Board 1 */
+ { USB_DEVICE(0x1901, 0x0190) }, /* GE B850 CP2105 Recorder interface */
+ { USB_DEVICE(0x1901, 0x0193) }, /* GE B650 CP2104 PMC interface */
{ USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
{ USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
{ USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
index db86e51..8849439a 100644
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -315,6 +315,7 @@
#define TOSHIBA_PRODUCT_G450 0x0d45
#define ALINK_VENDOR_ID 0x1e0e
+#define SIMCOM_PRODUCT_SIM7100E 0x9001 /* Yes, ALINK_VENDOR_ID */
#define ALINK_PRODUCT_PH300 0x9100
#define ALINK_PRODUCT_3GU 0x9200
@@ -607,6 +608,10 @@
.reserved = BIT(3) | BIT(4),
};
+static const struct option_blacklist_info simcom_sim7100e_blacklist = {
+ .reserved = BIT(5) | BIT(6),
+};
+
static const struct option_blacklist_info telit_le910_blacklist = {
.sendsetup = BIT(0),
.reserved = BIT(1) | BIT(2),
@@ -1122,6 +1127,8 @@
{ USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC650) },
{ USB_DEVICE(KYOCERA_VENDOR_ID, KYOCERA_PRODUCT_KPC680) },
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6000)}, /* ZTE AC8700 */
+ { USB_DEVICE_AND_INTERFACE_INFO(QUALCOMM_VENDOR_ID, 0x6001, 0xff, 0xff, 0xff), /* 4G LTE usb-modem U901 */
+ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
@@ -1645,6 +1652,8 @@
{ USB_DEVICE(ALINK_VENDOR_ID, 0x9000) },
{ USB_DEVICE(ALINK_VENDOR_ID, ALINK_PRODUCT_PH300) },
{ USB_DEVICE_AND_INTERFACE_INFO(ALINK_VENDOR_ID, ALINK_PRODUCT_3GU, 0xff, 0xff, 0xff) },
+ { USB_DEVICE(ALINK_VENDOR_ID, SIMCOM_PRODUCT_SIM7100E),
+ .driver_info = (kernel_ulong_t)&simcom_sim7100e_blacklist },
{ USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_X060S_X200),
.driver_info = (kernel_ulong_t)&alcatel_x200_blacklist
},
diff --git a/drivers/video/fbdev/da8xx-fb.c b/drivers/video/fbdev/da8xx-fb.c
index 0081725..6b2a06d 100644
--- a/drivers/video/fbdev/da8xx-fb.c
+++ b/drivers/video/fbdev/da8xx-fb.c
@@ -152,7 +152,7 @@
struct da8xx_fb_par {
struct device *dev;
- resource_size_t p_palette_base;
+ dma_addr_t p_palette_base;
unsigned char *v_palette_base;
dma_addr_t vram_phys;
unsigned long vram_size;
@@ -1428,7 +1428,7 @@
par->vram_virt = dma_alloc_coherent(NULL,
par->vram_size,
- (resource_size_t *) &par->vram_phys,
+ &par->vram_phys,
GFP_KERNEL | GFP_DMA);
if (!par->vram_virt) {
dev_err(&device->dev,
@@ -1448,7 +1448,7 @@
/* allocate palette buffer */
par->v_palette_base = dma_zalloc_coherent(NULL, PALETTE_SIZE,
- (resource_size_t *)&par->p_palette_base,
+ &par->p_palette_base,
GFP_KERNEL | GFP_DMA);
if (!par->v_palette_base) {
dev_err(&device->dev,
diff --git a/drivers/video/fbdev/exynos/s6e8ax0.c b/drivers/video/fbdev/exynos/s6e8ax0.c
index 95873f2..de2f3e7 100644
--- a/drivers/video/fbdev/exynos/s6e8ax0.c
+++ b/drivers/video/fbdev/exynos/s6e8ax0.c
@@ -829,8 +829,7 @@
return 0;
}
-#ifdef CONFIG_PM
-static int s6e8ax0_suspend(struct mipi_dsim_lcd_device *dsim_dev)
+static int __maybe_unused s6e8ax0_suspend(struct mipi_dsim_lcd_device *dsim_dev)
{
struct s6e8ax0 *lcd = dev_get_drvdata(&dsim_dev->dev);
@@ -843,7 +842,7 @@
return 0;
}
-static int s6e8ax0_resume(struct mipi_dsim_lcd_device *dsim_dev)
+static int __maybe_unused s6e8ax0_resume(struct mipi_dsim_lcd_device *dsim_dev)
{
struct s6e8ax0 *lcd = dev_get_drvdata(&dsim_dev->dev);
@@ -855,10 +854,6 @@
return 0;
}
-#else
-#define s6e8ax0_suspend NULL
-#define s6e8ax0_resume NULL
-#endif
static struct mipi_dsim_lcd_driver s6e8ax0_dsim_ddi_driver = {
.name = "s6e8ax0",
@@ -867,8 +862,8 @@
.power_on = s6e8ax0_power_on,
.set_sequence = s6e8ax0_set_sequence,
.probe = s6e8ax0_probe,
- .suspend = s6e8ax0_suspend,
- .resume = s6e8ax0_resume,
+ .suspend = IS_ENABLED(CONFIG_PM) ? s6e8ax0_suspend : NULL,
+ .resume = IS_ENABLED(CONFIG_PM) ? s6e8ax0_resume : NULL,
};
static int s6e8ax0_init(void)
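
The s6e8ax0 conversion swaps an #ifdef CONFIG_PM block for __maybe_unused functions selected via IS_ENABLED(), so both callbacks stay visible to the compiler even in !CONFIG_PM builds and cannot silently bit-rot. A userspace sketch of the idea, with ENABLE_PM standing in for the kernel config symbol:

#include <stdio.h>

#define ENABLE_PM 0	/* stand-in for CONFIG_PM; flip to 1 to install the hook */

/* always compiled and type-checked; the attribute silences the
 * "defined but not used" warning when the hook is stubbed out */
static int __attribute__((unused)) dev_suspend(void)
{
	return 0;
}

/* the ternary folds at compile time, so the dead branch is dropped
 * by the optimizer while the function itself still gets built */
static int (*const suspend_hook)(void) = ENABLE_PM ? dev_suspend : NULL;

int main(void)
{
	printf("suspend hook %s\n", suspend_hook ? "installed" : "stubbed out");
	return 0;
}
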
diff --git a/drivers/video/fbdev/imxfb.c b/drivers/video/fbdev/imxfb.c
index cee8860..bb2f1e8 100644
--- a/drivers/video/fbdev/imxfb.c
+++ b/drivers/video/fbdev/imxfb.c
@@ -902,6 +902,21 @@
goto failed_getclock;
}
+ /*
+ * The LCDC controller does not have an enable bit. The
+ * controller starts directly when the clocks are enabled.
+ * If the clocks are enabled when the controller is not yet
+ * programmed with proper register values (enabled at the
+ * bootloader, for example) then it just goes into some undefined
+ * state.
+ * To avoid this issue, let's enable and disable LCDC IPG clock
+ * so that we force some kind of 'reset' to the LCDC block.
+ */
+ ret = clk_prepare_enable(fbi->clk_ipg);
+ if (ret)
+ goto failed_getclock;
+ clk_disable_unprepare(fbi->clk_ipg);
+
fbi->clk_ahb = devm_clk_get(&pdev->dev, "ahb");
if (IS_ERR(fbi->clk_ahb)) {
ret = PTR_ERR(fbi->clk_ahb);
diff --git a/drivers/video/fbdev/mmp/hw/mmp_ctrl.c b/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
index de54a47..b6f83d5 100644
--- a/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
+++ b/drivers/video/fbdev/mmp/hw/mmp_ctrl.c
@@ -503,8 +503,7 @@
ctrl->reg_base = devm_ioremap_nocache(ctrl->dev,
res->start, resource_size(res));
if (ctrl->reg_base == NULL) {
- dev_err(ctrl->dev, "%s: res %x - %x map failed\n", __func__,
- res->start, res->end);
+ dev_err(ctrl->dev, "%s: res %pR map failed\n", __func__, res);
ret = -ENOMEM;
goto failed;
}
diff --git a/drivers/video/fbdev/ocfb.c b/drivers/video/fbdev/ocfb.c
index c9293ae..a970edc2 100644
--- a/drivers/video/fbdev/ocfb.c
+++ b/drivers/video/fbdev/ocfb.c
@@ -123,11 +123,11 @@
/* Horizontal timings */
ocfb_writereg(fbdev, OCFB_HTIM, (var->hsync_len - 1) << 24 |
- (var->right_margin - 1) << 16 | (var->xres - 1));
+ (var->left_margin - 1) << 16 | (var->xres - 1));
/* Vertical timings */
ocfb_writereg(fbdev, OCFB_VTIM, (var->vsync_len - 1) << 24 |
- (var->lower_margin - 1) << 16 | (var->yres - 1));
+ (var->upper_margin - 1) << 16 | (var->yres - 1));
/* Total length of frame */
hlen = var->left_margin + var->right_margin + var->hsync_len +
diff --git a/drivers/vme/bridges/vme_ca91cx42.c b/drivers/vme/bridges/vme_ca91cx42.c
index b79a74a..5fbeab3 100644
--- a/drivers/vme/bridges/vme_ca91cx42.c
+++ b/drivers/vme/bridges/vme_ca91cx42.c
@@ -202,7 +202,7 @@
bridge = ca91cx42_bridge->driver_priv;
/* Need pdev */
- pdev = container_of(ca91cx42_bridge->parent, struct pci_dev, dev);
+ pdev = to_pci_dev(ca91cx42_bridge->parent);
INIT_LIST_HEAD(&ca91cx42_bridge->vme_error_handlers);
@@ -293,8 +293,7 @@
iowrite32(tmp, bridge->base + LINT_EN);
if ((state == 0) && (sync != 0)) {
- pdev = container_of(ca91cx42_bridge->parent, struct pci_dev,
- dev);
+ pdev = to_pci_dev(ca91cx42_bridge->parent);
synchronize_irq(pdev->irq);
}
@@ -518,7 +517,7 @@
dev_err(ca91cx42_bridge->parent, "Dev entry NULL\n");
return -EINVAL;
}
- pdev = container_of(ca91cx42_bridge->parent, struct pci_dev, dev);
+ pdev = to_pci_dev(ca91cx42_bridge->parent);
existing_size = (unsigned long long)(image->bus_resource.end -
image->bus_resource.start);
@@ -1519,7 +1518,7 @@
struct pci_dev *pdev;
/* Find pci_dev container of dev */
- pdev = container_of(parent, struct pci_dev, dev);
+ pdev = to_pci_dev(parent);
return pci_alloc_consistent(pdev, size, dma);
}
@@ -1530,7 +1529,7 @@
struct pci_dev *pdev;
/* Find pci_dev container of dev */
- pdev = container_of(parent, struct pci_dev, dev);
+ pdev = to_pci_dev(parent);
pci_free_consistent(pdev, size, vaddr, dma);
}
diff --git a/drivers/w1/masters/omap_hdq.c b/drivers/w1/masters/omap_hdq.c
index 0e2f43b..a2eec97 100644
--- a/drivers/w1/masters/omap_hdq.c
+++ b/drivers/w1/masters/omap_hdq.c
@@ -618,7 +618,6 @@
hdq_disable_interrupt(hdq_data, OMAP_HDQ_CTRL_STATUS,
~OMAP_HDQ_CTRL_STATUS_INTERRUPTMASK);
- hdq_data->hdq_usecount = 0;
/* Write followed by a read, release the module */
if (hdq_data->init_trans) {
diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
index c9a7ff6..89a7847 100644
--- a/drivers/w1/w1.c
+++ b/drivers/w1/w1.c
@@ -1147,7 +1147,6 @@
jremain = 1;
}
- try_to_freeze();
__set_current_state(TASK_INTERRUPTIBLE);
/* hold list_mutex until after interruptible to prevent losing
diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
index 0f6d851..86c2392 100644
--- a/drivers/watchdog/Kconfig
+++ b/drivers/watchdog/Kconfig
@@ -1214,6 +1214,21 @@
To compile this driver as a module, choose M here: the
module will be called sbc_epx_c3.
+config INTEL_MEI_WDT
+ tristate "Intel MEI iAMT Watchdog"
+ depends on INTEL_MEI && X86
+ select WATCHDOG_CORE
+ ---help---
+ A device driver for the Intel MEI iAMT watchdog.
+
+ The Intel AMT Watchdog is an OS Health (Hang/Crash) watchdog.
+ Whenever the OS hangs or crashes, iAMT will send an event
+	  to any subscriber to this event. The watchdog doesn't reset
+	  the platform.
+
+ To compile this driver as a module, choose M here:
+ the module will be called mei_wdt.
+
# M32R Architecture
# M68K Architecture
diff --git a/drivers/watchdog/Makefile b/drivers/watchdog/Makefile
index f566753..efc4f78 100644
--- a/drivers/watchdog/Makefile
+++ b/drivers/watchdog/Makefile
@@ -126,6 +126,7 @@
obj-$(CONFIG_SBC_EPX_C3_WATCHDOG) += sbc_epx_c3.o
obj-$(CONFIG_INTEL_SCU_WATCHDOG) += intel_scu_watchdog.o
obj-$(CONFIG_INTEL_MID_WATCHDOG) += intel-mid_wdt.o
+obj-$(CONFIG_INTEL_MEI_WDT) += mei_wdt.o
# M32R Architecture
diff --git a/drivers/watchdog/mei_wdt.c b/drivers/watchdog/mei_wdt.c
new file mode 100644
index 0000000..630bd18
--- /dev/null
+++ b/drivers/watchdog/mei_wdt.c
@@ -0,0 +1,724 @@
+/*
+ * Intel Management Engine Interface (Intel MEI) Linux driver
+ * Copyright (c) 2015, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/debugfs.h>
+#include <linux/completion.h>
+#include <linux/watchdog.h>
+
+#include <linux/uuid.h>
+#include <linux/mei_cl_bus.h>
+
+/*
+ * iAMT Watchdog Device
+ */
+#define INTEL_AMT_WATCHDOG_ID "iamt_wdt"
+
+#define MEI_WDT_DEFAULT_TIMEOUT 120 /* seconds */
+#define MEI_WDT_MIN_TIMEOUT 120 /* seconds */
+#define MEI_WDT_MAX_TIMEOUT 65535 /* seconds */
+
+/* Commands */
+#define MEI_MANAGEMENT_CONTROL 0x02
+
+/* MEI Management Control version number */
+#define MEI_MC_VERSION_NUMBER 0x10
+
+/* Sub Commands */
+#define MEI_MC_START_WD_TIMER_REQ 0x13
+#define MEI_MC_START_WD_TIMER_RES 0x83
+#define MEI_WDT_STATUS_SUCCESS 0
+#define MEI_WDT_WDSTATE_NOT_REQUIRED 0x1
+#define MEI_MC_STOP_WD_TIMER_REQ 0x14
+
+/**
+ * enum mei_wdt_state - internal watchdog state
+ *
+ * @MEI_WDT_PROBE: wd in probing stage
+ * @MEI_WDT_IDLE: wd is idle and not opened
+ * @MEI_WDT_START: wd was opened, start was called
+ * @MEI_WDT_RUNNING: wd is expecting keep alive pings
+ * @MEI_WDT_STOPPING: wd is stopping and will move to IDLE
+ * @MEI_WDT_NOT_REQUIRED: wd device is not required
+ */
+enum mei_wdt_state {
+ MEI_WDT_PROBE,
+ MEI_WDT_IDLE,
+ MEI_WDT_START,
+ MEI_WDT_RUNNING,
+ MEI_WDT_STOPPING,
+ MEI_WDT_NOT_REQUIRED,
+};
+
+static const char *mei_wdt_state_str(enum mei_wdt_state state)
+{
+ switch (state) {
+ case MEI_WDT_PROBE:
+ return "PROBE";
+ case MEI_WDT_IDLE:
+ return "IDLE";
+ case MEI_WDT_START:
+ return "START";
+ case MEI_WDT_RUNNING:
+ return "RUNNING";
+ case MEI_WDT_STOPPING:
+ return "STOPPING";
+ case MEI_WDT_NOT_REQUIRED:
+ return "NOT_REQUIRED";
+ default:
+ return "unknown";
+ }
+}
+
+/**
+ * struct mei_wdt - mei watchdog driver
+ * @wdd: watchdog device
+ *
+ * @cldev: mei watchdog client device
+ * @state: watchdog internal state
+ * @resp_required: whether a ping requires a response
+ * @response: ping response completion
+ * @unregister: unregister worker
+ * @reg_lock: watchdog device registration lock
+ * @timeout: watchdog current timeout
+ *
+ * @dbgfs_dir: debugfs dir entry
+ */
+struct mei_wdt {
+ struct watchdog_device wdd;
+
+ struct mei_cl_device *cldev;
+ enum mei_wdt_state state;
+ bool resp_required;
+ struct completion response;
+ struct work_struct unregister;
+ struct mutex reg_lock;
+ u16 timeout;
+
+#if IS_ENABLED(CONFIG_DEBUG_FS)
+ struct dentry *dbgfs_dir;
+#endif /* CONFIG_DEBUG_FS */
+};
+
+/*
+ * struct mei_mc_hdr - Management Control Command Header
+ *
+ * @command: Management Control (0x2)
+ * @bytecount: Number of bytes in the message beyond this byte
+ * @subcommand: Management Control Subcommand
+ * @versionnumber: Management Control Version (0x10)
+ */
+struct mei_mc_hdr {
+ u8 command;
+ u8 bytecount;
+ u8 subcommand;
+ u8 versionnumber;
+};
+
+/**
+ * struct mei_wdt_start_request - watchdog start/ping request
+ *
+ * @hdr: Management Control Command Header
+ * @timeout: timeout value
+ * @reserved: reserved (legacy)
+ */
+struct mei_wdt_start_request {
+ struct mei_mc_hdr hdr;
+ u16 timeout;
+ u8 reserved[17];
+} __packed;
+
+/**
+ * struct mei_wdt_start_response - watchdog start/ping response
+ *
+ * @hdr: Management Control Command Header
+ * @status: operation status
+ * @wdstate: watchdog status bit mask
+ */
+struct mei_wdt_start_response {
+ struct mei_mc_hdr hdr;
+ u8 status;
+ u8 wdstate;
+} __packed;
+
+/**
+ * struct mei_wdt_stop_request - watchdog stop
+ *
+ * @hdr: Management Control Command Header
+ */
+struct mei_wdt_stop_request {
+ struct mei_mc_hdr hdr;
+} __packed;
+
+/**
+ * mei_wdt_ping - send wd start/ping command
+ *
+ * @wdt: mei watchdog device
+ *
+ * Return: 0 on success,
+ * negative errno code on failure
+ */
+static int mei_wdt_ping(struct mei_wdt *wdt)
+{
+ struct mei_wdt_start_request req;
+ const size_t req_len = sizeof(req);
+ int ret;
+
+ memset(&req, 0, req_len);
+ req.hdr.command = MEI_MANAGEMENT_CONTROL;
+ req.hdr.bytecount = req_len - offsetof(struct mei_mc_hdr, subcommand);
+ req.hdr.subcommand = MEI_MC_START_WD_TIMER_REQ;
+ req.hdr.versionnumber = MEI_MC_VERSION_NUMBER;
+ req.timeout = wdt->timeout;
+
+ ret = mei_cldev_send(wdt->cldev, (u8 *)&req, req_len);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+/**
+ * mei_wdt_stop - send wd stop command
+ *
+ * @wdt: mei watchdog device
+ *
+ * Return: 0 on success,
+ * negative errno code on failure
+ */
+static int mei_wdt_stop(struct mei_wdt *wdt)
+{
+ struct mei_wdt_stop_request req;
+ const size_t req_len = sizeof(req);
+ int ret;
+
+ memset(&req, 0, req_len);
+ req.hdr.command = MEI_MANAGEMENT_CONTROL;
+ req.hdr.bytecount = req_len - offsetof(struct mei_mc_hdr, subcommand);
+ req.hdr.subcommand = MEI_MC_STOP_WD_TIMER_REQ;
+ req.hdr.versionnumber = MEI_MC_VERSION_NUMBER;
+
+ ret = mei_cldev_send(wdt->cldev, (u8 *)&req, req_len);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+/**
+ * mei_wdt_ops_start - wd start command from the watchdog core.
+ *
+ * @wdd: watchdog device
+ *
+ * Return: always 0
+ */
+static int mei_wdt_ops_start(struct watchdog_device *wdd)
+{
+ struct mei_wdt *wdt = watchdog_get_drvdata(wdd);
+
+ wdt->state = MEI_WDT_START;
+ wdd->timeout = wdt->timeout;
+ return 0;
+}
+
+/**
+ * mei_wdt_ops_stop - wd stop command from the watchdog core.
+ *
+ * @wdd: watchdog device
+ *
+ * Return: 0 if success, negative errno code for failure
+ */
+static int mei_wdt_ops_stop(struct watchdog_device *wdd)
+{
+ struct mei_wdt *wdt = watchdog_get_drvdata(wdd);
+ int ret;
+
+ if (wdt->state != MEI_WDT_RUNNING)
+ return 0;
+
+ wdt->state = MEI_WDT_STOPPING;
+
+ ret = mei_wdt_stop(wdt);
+ if (ret)
+ return ret;
+
+ wdt->state = MEI_WDT_IDLE;
+
+ return 0;
+}
+
+/**
+ * mei_wdt_ops_ping - wd ping command from the watchdog core.
+ *
+ * @wdd: watchdog device
+ *
+ * Return: 0 if success, negative errno code on failure
+ */
+static int mei_wdt_ops_ping(struct watchdog_device *wdd)
+{
+ struct mei_wdt *wdt = watchdog_get_drvdata(wdd);
+ int ret;
+
+ if (wdt->state != MEI_WDT_START && wdt->state != MEI_WDT_RUNNING)
+ return 0;
+
+ if (wdt->resp_required)
+ init_completion(&wdt->response);
+
+ wdt->state = MEI_WDT_RUNNING;
+ ret = mei_wdt_ping(wdt);
+ if (ret)
+ return ret;
+
+ if (wdt->resp_required)
+ ret = wait_for_completion_killable(&wdt->response);
+
+ return ret;
+}
+
+/**
+ * mei_wdt_ops_set_timeout - wd set timeout command from the watchdog core.
+ *
+ * @wdd: watchdog device
+ * @timeout: timeout value to set
+ *
+ * Return: 0 if success, negative errno code for failure
+ */
+static int mei_wdt_ops_set_timeout(struct watchdog_device *wdd,
+ unsigned int timeout)
+{
+
+ struct mei_wdt *wdt = watchdog_get_drvdata(wdd);
+
+ /* valid value is already checked by the caller */
+ wdt->timeout = timeout;
+ wdd->timeout = timeout;
+
+ return 0;
+}
+
+static const struct watchdog_ops wd_ops = {
+ .owner = THIS_MODULE,
+ .start = mei_wdt_ops_start,
+ .stop = mei_wdt_ops_stop,
+ .ping = mei_wdt_ops_ping,
+ .set_timeout = mei_wdt_ops_set_timeout,
+};
+
+/* not const as the firmware_version field needs to be retrieved */
+static struct watchdog_info wd_info = {
+ .identity = INTEL_AMT_WATCHDOG_ID,
+ .options = WDIOF_KEEPALIVEPING |
+ WDIOF_SETTIMEOUT |
+ WDIOF_ALARMONLY,
+};
+
+/**
+ * __mei_wdt_is_registered - check if wdt is registered
+ *
+ * @wdt: mei watchdog device
+ *
+ * Return: true if the wdt is registered with the watchdog subsystem
+ * Locking: should be called under wdt->reg_lock
+ */
+static inline bool __mei_wdt_is_registered(struct mei_wdt *wdt)
+{
+ return !!watchdog_get_drvdata(&wdt->wdd);
+}
+
+/**
+ * mei_wdt_unregister - unregister from the watchdog subsystem
+ *
+ * @wdt: mei watchdog device
+ */
+static void mei_wdt_unregister(struct mei_wdt *wdt)
+{
+ mutex_lock(&wdt->reg_lock);
+
+ if (__mei_wdt_is_registered(wdt)) {
+ watchdog_unregister_device(&wdt->wdd);
+ watchdog_set_drvdata(&wdt->wdd, NULL);
+ memset(&wdt->wdd, 0, sizeof(wdt->wdd));
+ }
+
+ mutex_unlock(&wdt->reg_lock);
+}
+
+/**
+ * mei_wdt_register - register with the watchdog subsystem
+ *
+ * @wdt: mei watchdog device
+ *
+ * Return: 0 if success, negative errno code for failure
+ */
+static int mei_wdt_register(struct mei_wdt *wdt)
+{
+ struct device *dev;
+ int ret;
+
+ if (!wdt || !wdt->cldev)
+ return -EINVAL;
+
+ dev = &wdt->cldev->dev;
+
+ mutex_lock(&wdt->reg_lock);
+
+ if (__mei_wdt_is_registered(wdt)) {
+ ret = 0;
+ goto out;
+ }
+
+ wdt->wdd.info = &wd_info;
+ wdt->wdd.ops = &wd_ops;
+ wdt->wdd.parent = dev;
+ wdt->wdd.timeout = MEI_WDT_DEFAULT_TIMEOUT;
+ wdt->wdd.min_timeout = MEI_WDT_MIN_TIMEOUT;
+ wdt->wdd.max_timeout = MEI_WDT_MAX_TIMEOUT;
+
+ watchdog_set_drvdata(&wdt->wdd, wdt);
+ ret = watchdog_register_device(&wdt->wdd);
+ if (ret) {
+ dev_err(dev, "unable to register watchdog device = %d.\n", ret);
+ watchdog_set_drvdata(&wdt->wdd, NULL);
+ }
+
+ wdt->state = MEI_WDT_IDLE;
+
+out:
+ mutex_unlock(&wdt->reg_lock);
+ return ret;
+}
+
+static void mei_wdt_unregister_work(struct work_struct *work)
+{
+ struct mei_wdt *wdt = container_of(work, struct mei_wdt, unregister);
+
+ mei_wdt_unregister(wdt);
+}
+
+/**
+ * mei_wdt_event_rx - callback for data receive
+ *
+ * @cldev: bus device
+ */
+static void mei_wdt_event_rx(struct mei_cl_device *cldev)
+{
+ struct mei_wdt *wdt = mei_cldev_get_drvdata(cldev);
+ struct mei_wdt_start_response res;
+ const size_t res_len = sizeof(res);
+ int ret;
+
+ ret = mei_cldev_recv(wdt->cldev, (u8 *)&res, res_len);
+ if (ret < 0) {
+ dev_err(&cldev->dev, "failure in recv %d\n", ret);
+ return;
+ }
+
+ /* Empty response can be sent on stop */
+ if (ret == 0)
+ return;
+
+ if (ret < sizeof(struct mei_mc_hdr)) {
+ dev_err(&cldev->dev, "recv small data %d\n", ret);
+ return;
+ }
+
+ if (res.hdr.command != MEI_MANAGEMENT_CONTROL ||
+ res.hdr.versionnumber != MEI_MC_VERSION_NUMBER) {
+ dev_err(&cldev->dev, "wrong command received\n");
+ return;
+ }
+
+ if (res.hdr.subcommand != MEI_MC_START_WD_TIMER_RES) {
+ dev_warn(&cldev->dev, "unsupported command %d :%s[%d]\n",
+ res.hdr.subcommand,
+ mei_wdt_state_str(wdt->state),
+ wdt->state);
+ return;
+ }
+
+ /* Run the unregistration in a worker as this can be
+ * run only after ping completion, otherwise the flow will
+ * deadlock on the watchdog core mutex.
+ */
+ if (wdt->state == MEI_WDT_RUNNING) {
+ if (res.wdstate & MEI_WDT_WDSTATE_NOT_REQUIRED) {
+ wdt->state = MEI_WDT_NOT_REQUIRED;
+ schedule_work(&wdt->unregister);
+ }
+ goto out;
+ }
+
+ if (wdt->state == MEI_WDT_PROBE) {
+ if (res.wdstate & MEI_WDT_WDSTATE_NOT_REQUIRED) {
+ wdt->state = MEI_WDT_NOT_REQUIRED;
+ } else {
+ /* stop the watchdog and register watchdog device */
+ mei_wdt_stop(wdt);
+ mei_wdt_register(wdt);
+ }
+ return;
+ }
+
+ dev_warn(&cldev->dev, "not in correct state %s[%d]\n",
+ mei_wdt_state_str(wdt->state), wdt->state);
+
+out:
+ if (!completion_done(&wdt->response))
+ complete(&wdt->response);
+}
+
+/*
+ * mei_wdt_notify_event - callback for event notification
+ *
+ * @cldev: bus device
+ */
+static void mei_wdt_notify_event(struct mei_cl_device *cldev)
+{
+ struct mei_wdt *wdt = mei_cldev_get_drvdata(cldev);
+
+ if (wdt->state != MEI_WDT_NOT_REQUIRED)
+ return;
+
+ mei_wdt_register(wdt);
+}
+
+/**
+ * mei_wdt_event - callback for event receive
+ *
+ * @cldev: bus device
+ * @events: event mask
+ * @context: callback context
+ */
+static void mei_wdt_event(struct mei_cl_device *cldev,
+ u32 events, void *context)
+{
+ if (events & BIT(MEI_CL_EVENT_RX))
+ mei_wdt_event_rx(cldev);
+
+ if (events & BIT(MEI_CL_EVENT_NOTIF))
+ mei_wdt_notify_event(cldev);
+}
+
+#if IS_ENABLED(CONFIG_DEBUG_FS)
+
+static ssize_t mei_dbgfs_read_activation(struct file *file, char __user *ubuf,
+ size_t cnt, loff_t *ppos)
+{
+ struct mei_wdt *wdt = file->private_data;
+ const size_t bufsz = 32;
+ char buf[32];
+ ssize_t pos;
+
+ mutex_lock(&wdt->reg_lock);
+ pos = scnprintf(buf, bufsz, "%s\n",
+ __mei_wdt_is_registered(wdt) ? "activated" : "deactivated");
+ mutex_unlock(&wdt->reg_lock);
+
+ return simple_read_from_buffer(ubuf, cnt, ppos, buf, pos);
+}
+
+static const struct file_operations dbgfs_fops_activation = {
+ .open = simple_open,
+ .read = mei_dbgfs_read_activation,
+ .llseek = generic_file_llseek,
+};
+
+static ssize_t mei_dbgfs_read_state(struct file *file, char __user *ubuf,
+ size_t cnt, loff_t *ppos)
+{
+ struct mei_wdt *wdt = file->private_data;
+ const size_t bufsz = 32;
+	char buf[32];
+ ssize_t pos;
+
+ pos = scnprintf(buf, bufsz, "state: %s\n",
+ mei_wdt_state_str(wdt->state));
+
+ return simple_read_from_buffer(ubuf, cnt, ppos, buf, pos);
+}
+
+static const struct file_operations dbgfs_fops_state = {
+ .open = simple_open,
+ .read = mei_dbgfs_read_state,
+ .llseek = generic_file_llseek,
+};
+
+static void dbgfs_unregister(struct mei_wdt *wdt)
+{
+ debugfs_remove_recursive(wdt->dbgfs_dir);
+ wdt->dbgfs_dir = NULL;
+}
+
+static int dbgfs_register(struct mei_wdt *wdt)
+{
+ struct dentry *dir, *f;
+
+ dir = debugfs_create_dir(KBUILD_MODNAME, NULL);
+ if (!dir)
+ return -ENOMEM;
+
+ wdt->dbgfs_dir = dir;
+ f = debugfs_create_file("state", S_IRUSR, dir, wdt, &dbgfs_fops_state);
+ if (!f)
+ goto err;
+
+ f = debugfs_create_file("activation", S_IRUSR,
+ dir, wdt, &dbgfs_fops_activation);
+ if (!f)
+ goto err;
+
+ return 0;
+err:
+ dbgfs_unregister(wdt);
+ return -ENODEV;
+}
+
+#else
+
+static inline void dbgfs_unregister(struct mei_wdt *wdt) {}
+
+static inline int dbgfs_register(struct mei_wdt *wdt)
+{
+ return 0;
+}
+#endif /* CONFIG_DEBUG_FS */
+
+static int mei_wdt_probe(struct mei_cl_device *cldev,
+ const struct mei_cl_device_id *id)
+{
+ struct mei_wdt *wdt;
+ int ret;
+
+ wdt = kzalloc(sizeof(struct mei_wdt), GFP_KERNEL);
+ if (!wdt)
+ return -ENOMEM;
+
+ wdt->timeout = MEI_WDT_DEFAULT_TIMEOUT;
+ wdt->state = MEI_WDT_PROBE;
+ wdt->cldev = cldev;
+ wdt->resp_required = mei_cldev_ver(cldev) > 0x1;
+ mutex_init(&wdt->reg_lock);
+ init_completion(&wdt->response);
+ INIT_WORK(&wdt->unregister, mei_wdt_unregister_work);
+
+ mei_cldev_set_drvdata(cldev, wdt);
+
+ ret = mei_cldev_enable(cldev);
+ if (ret < 0) {
+ dev_err(&cldev->dev, "Could not enable cl device\n");
+ goto err_out;
+ }
+
+ ret = mei_cldev_register_event_cb(wdt->cldev,
+ BIT(MEI_CL_EVENT_RX) |
+ BIT(MEI_CL_EVENT_NOTIF),
+ mei_wdt_event, NULL);
+
+	/* on legacy devices notification is not supported;
+	 * this doesn't fail the registration for the RX event
+	 */
+ if (ret && ret != -EOPNOTSUPP) {
+ dev_err(&cldev->dev, "Could not register event ret=%d\n", ret);
+ goto err_disable;
+ }
+
+ wd_info.firmware_version = mei_cldev_ver(cldev);
+
+ if (wdt->resp_required)
+ ret = mei_wdt_ping(wdt);
+ else
+ ret = mei_wdt_register(wdt);
+
+ if (ret)
+ goto err_disable;
+
+ if (dbgfs_register(wdt))
+ dev_warn(&cldev->dev, "cannot register debugfs\n");
+
+ return 0;
+
+err_disable:
+ mei_cldev_disable(cldev);
+
+err_out:
+ kfree(wdt);
+
+ return ret;
+}
+
+static int mei_wdt_remove(struct mei_cl_device *cldev)
+{
+ struct mei_wdt *wdt = mei_cldev_get_drvdata(cldev);
+
+	/* Unblock a waiting caller in case of a fw initiated or unexpected reset */
+ if (!completion_done(&wdt->response))
+ complete(&wdt->response);
+
+ cancel_work_sync(&wdt->unregister);
+
+ mei_wdt_unregister(wdt);
+
+ mei_cldev_disable(cldev);
+
+ dbgfs_unregister(wdt);
+
+ kfree(wdt);
+
+ return 0;
+}
+
+#define MEI_UUID_WD UUID_LE(0x05B79A6F, 0x4628, 0x4D7F, \
+ 0x89, 0x9D, 0xA9, 0x15, 0x14, 0xCB, 0x32, 0xAB)
+
+static struct mei_cl_device_id mei_wdt_tbl[] = {
+ { .uuid = MEI_UUID_WD, .version = MEI_CL_VERSION_ANY },
+ /* required last entry */
+ { }
+};
+MODULE_DEVICE_TABLE(mei, mei_wdt_tbl);
+
+static struct mei_cl_driver mei_wdt_driver = {
+ .id_table = mei_wdt_tbl,
+ .name = KBUILD_MODNAME,
+
+ .probe = mei_wdt_probe,
+ .remove = mei_wdt_remove,
+};
+
+static int __init mei_wdt_init(void)
+{
+ int ret;
+
+ ret = mei_cldev_driver_register(&mei_wdt_driver);
+ if (ret) {
+ pr_err(KBUILD_MODNAME ": module registration failed\n");
+ return ret;
+ }
+ return 0;
+}
+
+static void __exit mei_wdt_exit(void)
+{
+ mei_cldev_driver_unregister(&mei_wdt_driver);
+}
+
+module_init(mei_wdt_init);
+module_exit(mei_wdt_exit);
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Device driver for Intel MEI iAMT watchdog");
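
In the new driver's wire format the header's bytecount field counts the bytes that follow it, which is why mei_wdt_ping() and mei_wdt_stop() compute it as req_len - offsetof(struct mei_mc_hdr, subcommand). A compile-time sanity check of those sizes, assuming the layout matches the declarations in the file above (a standalone sketch, not part of the driver):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct mei_mc_hdr {
	uint8_t command;
	uint8_t bytecount;
	uint8_t subcommand;
	uint8_t versionnumber;
};

struct mei_wdt_start_request {
	struct mei_mc_hdr hdr;
	uint16_t timeout;
	uint8_t reserved[17];
} __attribute__((packed));

_Static_assert(sizeof(struct mei_mc_hdr) == 4, "header is 4 bytes");
_Static_assert(sizeof(struct mei_wdt_start_request) == 23,
	       "start request is 23 bytes on the wire");
/* bytecount counts everything after itself: 23 - 2 == 21 */
_Static_assert(sizeof(struct mei_wdt_start_request) -
	       offsetof(struct mei_mc_hdr, subcommand) == 21,
	       "bytecount covers the subcommand onwards");

int main(void)
{
	printf("start request: %zu bytes, bytecount %zu\n",
	       sizeof(struct mei_wdt_start_request),
	       sizeof(struct mei_wdt_start_request) -
	       offsetof(struct mei_mc_hdr, subcommand));
	return 0;
}
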
diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
index 73dafdc..fb02214 100644
--- a/drivers/xen/xen-pciback/pciback_ops.c
+++ b/drivers/xen/xen-pciback/pciback_ops.c
@@ -227,8 +227,9 @@
/*
* PCI_COMMAND_MEMORY must be enabled, otherwise we may not be able
* to access the BARs where the MSI-X entries reside.
+	 * But for VF devices it is the PF that needs to be checked.
*/
- pci_read_config_word(dev, PCI_COMMAND, &cmd);
+ pci_read_config_word(pci_physfn(dev), PCI_COMMAND, &cmd);
if (dev->msi_enabled || !(cmd & PCI_COMMAND_MEMORY))
return -ENXIO;
@@ -332,6 +333,9 @@
struct xen_pcibk_dev_data *dev_data = NULL;
struct xen_pci_op *op = &pdev->op;
int test_intx = 0;
+#ifdef CONFIG_PCI_MSI
+ unsigned int nr = 0;
+#endif
*op = pdev->sh_info->op;
barrier();
@@ -360,6 +364,7 @@
op->err = xen_pcibk_disable_msi(pdev, dev, op);
break;
case XEN_PCI_OP_enable_msix:
+ nr = op->value;
op->err = xen_pcibk_enable_msix(pdev, dev, op);
break;
case XEN_PCI_OP_disable_msix:
@@ -382,7 +387,7 @@
if (op->cmd == XEN_PCI_OP_enable_msix && op->err == 0) {
unsigned int i;
- for (i = 0; i < op->value; i++)
+ for (i = 0; i < nr; i++)
pdev->sh_info->op.msix_entries[i].vector =
op->msix_entries[i].vector;
}
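
The new nr variable in the pciback hunk matters because *op aliases memory shared with the frontend: op->value could be rewritten by the guest after xen_pcibk_enable_msix() validated it, so the later loop bound must come from a private snapshot. A userspace sketch of the snapshot-then-use pattern:

#include <stdio.h>

#define MAX_ENTRIES 8

struct shared_op {
	volatile unsigned int value;	/* writable by the untrusted peer */
	int vectors[MAX_ENTRIES];
};

static int handle_enable(struct shared_op *op)
{
	unsigned int nr = op->value;	/* snapshot before validating */
	unsigned int i;

	if (nr > MAX_ENTRIES)
		return -1;
	/* op->value may change under us from here on; nr cannot */
	for (i = 0; i < nr; i++)
		op->vectors[i] = (int)i;
	return 0;
}

int main(void)
{
	struct shared_op op = { .value = 4 };

	return handle_enable(&op);
}
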
diff --git a/drivers/xen/xen-scsiback.c b/drivers/xen/xen-scsiback.c
index ad4eb10..c46ee18 100644
--- a/drivers/xen/xen-scsiback.c
+++ b/drivers/xen/xen-scsiback.c
@@ -849,15 +849,31 @@
}
/*
+ Check for a translation entry being present
+*/
+static struct v2p_entry *scsiback_chk_translation_entry(
+ struct vscsibk_info *info, struct ids_tuple *v)
+{
+ struct list_head *head = &(info->v2p_entry_lists);
+ struct v2p_entry *entry;
+
+ list_for_each_entry(entry, head, l)
+ if ((entry->v.chn == v->chn) &&
+ (entry->v.tgt == v->tgt) &&
+ (entry->v.lun == v->lun))
+ return entry;
+
+ return NULL;
+}
+
+/*
Add a new translation entry
*/
static int scsiback_add_translation_entry(struct vscsibk_info *info,
char *phy, struct ids_tuple *v)
{
int err = 0;
- struct v2p_entry *entry;
struct v2p_entry *new;
- struct list_head *head = &(info->v2p_entry_lists);
unsigned long flags;
char *lunp;
unsigned long long unpacked_lun;
@@ -917,15 +933,10 @@
spin_lock_irqsave(&info->v2p_lock, flags);
/* Check double assignment to identical virtual ID */
- list_for_each_entry(entry, head, l) {
- if ((entry->v.chn == v->chn) &&
- (entry->v.tgt == v->tgt) &&
- (entry->v.lun == v->lun)) {
- pr_warn("Virtual ID is already used. Assignment was not performed.\n");
- err = -EEXIST;
- goto out;
- }
-
+ if (scsiback_chk_translation_entry(info, v)) {
+ pr_warn("Virtual ID is already used. Assignment was not performed.\n");
+ err = -EEXIST;
+ goto out;
}
/* Create a new translation entry and add to the list */
@@ -933,18 +944,18 @@
new->v = *v;
new->tpg = tpg;
new->lun = unpacked_lun;
- list_add_tail(&new->l, head);
+ list_add_tail(&new->l, &info->v2p_entry_lists);
out:
spin_unlock_irqrestore(&info->v2p_lock, flags);
out_free:
- mutex_lock(&tpg->tv_tpg_mutex);
- tpg->tv_tpg_fe_count--;
- mutex_unlock(&tpg->tv_tpg_mutex);
-
- if (err)
+ if (err) {
+ mutex_lock(&tpg->tv_tpg_mutex);
+ tpg->tv_tpg_fe_count--;
+ mutex_unlock(&tpg->tv_tpg_mutex);
kfree(new);
+ }
return err;
}
@@ -956,39 +967,40 @@
}
/*
- Delete the translation entry specfied
+ Delete the translation entry specified
*/
static int scsiback_del_translation_entry(struct vscsibk_info *info,
struct ids_tuple *v)
{
struct v2p_entry *entry;
- struct list_head *head = &(info->v2p_entry_lists);
unsigned long flags;
+ int ret = 0;
spin_lock_irqsave(&info->v2p_lock, flags);
/* Find out the translation entry specified */
- list_for_each_entry(entry, head, l) {
- if ((entry->v.chn == v->chn) &&
- (entry->v.tgt == v->tgt) &&
- (entry->v.lun == v->lun)) {
- goto found;
- }
- }
+ entry = scsiback_chk_translation_entry(info, v);
+ if (entry)
+ __scsiback_del_translation_entry(entry);
+ else
+ ret = -ENOENT;
spin_unlock_irqrestore(&info->v2p_lock, flags);
- return 1;
-
-found:
- /* Delete the translation entry specfied */
- __scsiback_del_translation_entry(entry);
-
- spin_unlock_irqrestore(&info->v2p_lock, flags);
- return 0;
+ return ret;
}
static void scsiback_do_add_lun(struct vscsibk_info *info, const char *state,
char *phy, struct ids_tuple *vir, int try)
{
+ struct v2p_entry *entry;
+ unsigned long flags;
+
+ if (try) {
+ spin_lock_irqsave(&info->v2p_lock, flags);
+ entry = scsiback_chk_translation_entry(info, vir);
+ spin_unlock_irqrestore(&info->v2p_lock, flags);
+ if (entry)
+ return;
+ }
if (!scsiback_add_translation_entry(info, phy, vir)) {
if (xenbus_printf(XBT_NIL, info->dev->nodename, state,
"%d", XenbusStateInitialised)) {
diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
index 9433e46..912b64e 100644
--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
+++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
@@ -188,6 +188,8 @@
if (len == 0)
return 0;
+ if (len > XENSTORE_PAYLOAD_MAX)
+ return -EINVAL;
rb = kmalloc(sizeof(*rb) + len, GFP_KERNEL);
if (rb == NULL)
diff --git a/fs/affs/file.c b/fs/affs/file.c
index 0548c53..22fc7c8 100644
--- a/fs/affs/file.c
+++ b/fs/affs/file.c
@@ -511,8 +511,6 @@
pr_debug("%s(%lu, %ld, 0, %d)\n", __func__, inode->i_ino,
page->index, to);
BUG_ON(to > PAGE_CACHE_SIZE);
- kmap(page);
- data = page_address(page);
bsize = AFFS_SB(sb)->s_data_blksize;
tmp = page->index << PAGE_CACHE_SHIFT;
bidx = tmp / bsize;
@@ -524,14 +522,15 @@
return PTR_ERR(bh);
tmp = min(bsize - boff, to - pos);
BUG_ON(pos + tmp > to || tmp > bsize);
+ data = kmap_atomic(page);
memcpy(data + pos, AFFS_DATA(bh) + boff, tmp);
+ kunmap_atomic(data);
affs_brelse(bh);
bidx++;
pos += tmp;
boff = 0;
}
flush_dcache_page(page);
- kunmap(page);
return 0;
}
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 051ea48..7d914c6 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -653,7 +653,7 @@
if ((current->flags & PF_RANDOMIZE) &&
!(current->personality & ADDR_NO_RANDOMIZE)) {
- random_variable = (unsigned long) get_random_int();
+ random_variable = get_random_long();
random_variable &= STACK_RND_MASK;
random_variable <<= PAGE_SHIFT;
}
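
get_random_int() yields only 32 bits, so once cast to unsigned long the high bits of the random stack offset were always zero on architectures whose STACK_RND_MASK is wider than 32 bits; get_random_long() restores them. A small demonstration, where the mask and shift are illustrative assumptions rather than any particular architecture's values:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define STACK_RND_MASK	0x7ffffffffULL	/* hypothetical mask wider than 32 bits */

int main(void)
{
	uint32_t narrow = UINT32_MAX;		/* best case from a 32-bit draw */
	uint64_t wide = UINT64_MAX;		/* best case from a 64-bit draw */

	uint64_t a = ((uint64_t)narrow & STACK_RND_MASK) << PAGE_SHIFT;
	uint64_t b = (wide & STACK_RND_MASK) << PAGE_SHIFT;

	/* the 32-bit draw can never populate the mask's top bits */
	printf("32-bit draw reaches up to 0x%llx\n", (unsigned long long)a);
	printf("64-bit draw reaches up to 0x%llx\n", (unsigned long long)b);
	return 0;
}
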
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 39b3a17..826b164 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -1201,7 +1201,11 @@
bdev->bd_disk = disk;
bdev->bd_queue = disk->queue;
bdev->bd_contains = bdev;
- bdev->bd_inode->i_flags = disk->fops->direct_access ? S_DAX : 0;
+ if (IS_ENABLED(CONFIG_BLK_DEV_DAX) && disk->fops->direct_access)
+ bdev->bd_inode->i_flags = S_DAX;
+ else
+ bdev->bd_inode->i_flags = 0;
+
if (!partno) {
ret = -ENXIO;
bdev->bd_part = disk_get_part(disk, partno);
@@ -1693,13 +1697,24 @@
return try_to_free_buffers(page);
}
+static int blkdev_writepages(struct address_space *mapping,
+ struct writeback_control *wbc)
+{
+ if (dax_mapping(mapping)) {
+ struct block_device *bdev = I_BDEV(mapping->host);
+
+ return dax_writeback_mapping_range(mapping, bdev, wbc);
+ }
+ return generic_writepages(mapping, wbc);
+}
+
static const struct address_space_operations def_blk_aops = {
.readpage = blkdev_readpage,
.readpages = blkdev_readpages,
.writepage = blkdev_writepage,
.write_begin = blkdev_write_begin,
.write_end = blkdev_write_end,
- .writepages = generic_writepages,
+ .writepages = blkdev_writepages,
.releasepage = blkdev_releasepage,
.direct_IO = blkdev_direct_IO,
.is_dirty_writeback = buffer_check_dirty_writeback,
diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index b90cd37..f6dac40 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -1406,7 +1406,8 @@
read_extent_buffer(eb, dest + bytes_left,
name_off, name_len);
if (eb != eb_in) {
- btrfs_tree_read_unlock_blocking(eb);
+ if (!path->skip_locking)
+ btrfs_tree_read_unlock_blocking(eb);
free_extent_buffer(eb);
}
ret = btrfs_find_item(fs_root, path, parent, 0,
@@ -1426,9 +1427,10 @@
eb = path->nodes[0];
/* make sure we can use eb after releasing the path */
if (eb != eb_in) {
- atomic_inc(&eb->refs);
- btrfs_tree_read_lock(eb);
- btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK);
+ if (!path->skip_locking)
+ btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK);
+ path->nodes[0] = NULL;
+ path->locks[0] = 0;
}
btrfs_release_path(path);
iref = btrfs_item_ptr(eb, slot, struct btrfs_inode_ref);
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index c473c42..3346cd8 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -637,11 +637,7 @@
faili = nr_pages - 1;
cb->nr_pages = nr_pages;
- /* In the parent-locked case, we only locked the range we are
- * interested in. In all other cases, we can opportunistically
- * cache decompressed data that goes beyond the requested range. */
- if (!(bio_flags & EXTENT_BIO_PARENT_LOCKED))
- add_ra_bio_pages(inode, em_start + em_len, cb);
+ add_ra_bio_pages(inode, em_start + em_len, cb);
/* include any pages we added in add_ra-bio_pages */
uncompressed_len = bio->bi_vcnt * PAGE_CACHE_SIZE;
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 0be47e4..b57daa8 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -1689,7 +1689,7 @@
*
*/
int btrfs_readdir_delayed_dir_index(struct dir_context *ctx,
- struct list_head *ins_list)
+ struct list_head *ins_list, bool *emitted)
{
struct btrfs_dir_item *di;
struct btrfs_delayed_item *curr, *next;
@@ -1733,6 +1733,7 @@
if (over)
return 1;
+ *emitted = true;
}
return 0;
}
diff --git a/fs/btrfs/delayed-inode.h b/fs/btrfs/delayed-inode.h
index f70119f..0167853 100644
--- a/fs/btrfs/delayed-inode.h
+++ b/fs/btrfs/delayed-inode.h
@@ -144,7 +144,7 @@
int btrfs_should_delete_dir_index(struct list_head *del_list,
u64 index);
int btrfs_readdir_delayed_dir_index(struct dir_context *ctx,
- struct list_head *ins_list);
+ struct list_head *ins_list, bool *emitted);
/* for init */
int __init btrfs_delayed_inode_init(void);
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 2e7c97a..392592d 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2897,12 +2897,11 @@
struct block_device *bdev;
int ret;
int nr = 0;
- int parent_locked = *bio_flags & EXTENT_BIO_PARENT_LOCKED;
size_t pg_offset = 0;
size_t iosize;
size_t disk_io_size;
size_t blocksize = inode->i_sb->s_blocksize;
- unsigned long this_bio_flag = *bio_flags & EXTENT_BIO_PARENT_LOCKED;
+ unsigned long this_bio_flag = 0;
set_page_extent_mapped(page);
@@ -2942,18 +2941,16 @@
kunmap_atomic(userpage);
set_extent_uptodate(tree, cur, cur + iosize - 1,
&cached, GFP_NOFS);
- if (!parent_locked)
- unlock_extent_cached(tree, cur,
- cur + iosize - 1,
- &cached, GFP_NOFS);
+ unlock_extent_cached(tree, cur,
+ cur + iosize - 1,
+ &cached, GFP_NOFS);
break;
}
em = __get_extent_map(inode, page, pg_offset, cur,
end - cur + 1, get_extent, em_cached);
if (IS_ERR_OR_NULL(em)) {
SetPageError(page);
- if (!parent_locked)
- unlock_extent(tree, cur, end);
+ unlock_extent(tree, cur, end);
break;
}
extent_offset = cur - em->start;
@@ -3038,12 +3035,9 @@
set_extent_uptodate(tree, cur, cur + iosize - 1,
&cached, GFP_NOFS);
- if (parent_locked)
- free_extent_state(cached);
- else
- unlock_extent_cached(tree, cur,
- cur + iosize - 1,
- &cached, GFP_NOFS);
+ unlock_extent_cached(tree, cur,
+ cur + iosize - 1,
+ &cached, GFP_NOFS);
cur = cur + iosize;
pg_offset += iosize;
continue;
@@ -3052,8 +3046,7 @@
if (test_range_bit(tree, cur, cur_end,
EXTENT_UPTODATE, 1, NULL)) {
check_page_uptodate(tree, page);
- if (!parent_locked)
- unlock_extent(tree, cur, cur + iosize - 1);
+ unlock_extent(tree, cur, cur + iosize - 1);
cur = cur + iosize;
pg_offset += iosize;
continue;
@@ -3063,8 +3056,7 @@
*/
if (block_start == EXTENT_MAP_INLINE) {
SetPageError(page);
- if (!parent_locked)
- unlock_extent(tree, cur, cur + iosize - 1);
+ unlock_extent(tree, cur, cur + iosize - 1);
cur = cur + iosize;
pg_offset += iosize;
continue;
@@ -3083,8 +3075,7 @@
*bio_flags = this_bio_flag;
} else {
SetPageError(page);
- if (!parent_locked)
- unlock_extent(tree, cur, cur + iosize - 1);
+ unlock_extent(tree, cur, cur + iosize - 1);
}
cur = cur + iosize;
pg_offset += iosize;
@@ -3213,20 +3204,6 @@
return ret;
}
-int extent_read_full_page_nolock(struct extent_io_tree *tree, struct page *page,
- get_extent_t *get_extent, int mirror_num)
-{
- struct bio *bio = NULL;
- unsigned long bio_flags = EXTENT_BIO_PARENT_LOCKED;
- int ret;
-
- ret = __do_readpage(tree, page, get_extent, NULL, &bio, mirror_num,
- &bio_flags, READ, NULL);
- if (bio)
- ret = submit_one_bio(READ, bio, mirror_num, bio_flags);
- return ret;
-}
-
static noinline void update_nr_written(struct page *page,
struct writeback_control *wbc,
unsigned long nr_written)
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index 0377413..880d529 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -29,7 +29,6 @@
*/
#define EXTENT_BIO_COMPRESSED 1
#define EXTENT_BIO_TREE_LOG 2
-#define EXTENT_BIO_PARENT_LOCKED 4
#define EXTENT_BIO_FLAG_SHIFT 16
/* these are bit numbers for test/set bit */
@@ -210,8 +209,6 @@
int try_lock_extent(struct extent_io_tree *tree, u64 start, u64 end);
int extent_read_full_page(struct extent_io_tree *tree, struct page *page,
get_extent_t *get_extent, int mirror_num);
-int extent_read_full_page_nolock(struct extent_io_tree *tree, struct page *page,
- get_extent_t *get_extent, int mirror_num);
int __init extent_io_init(void);
void extent_io_exit(void);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 5f06eb1..d96f5cf 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -5717,6 +5717,7 @@
char *name_ptr;
int name_len;
int is_curr = 0; /* ctx->pos points to the current index? */
+ bool emitted;
/* FIXME, use a real flag for deciding about the key type */
if (root->fs_info->tree_root == root)
@@ -5745,6 +5746,7 @@
if (ret < 0)
goto err;
+ emitted = false;
while (1) {
leaf = path->nodes[0];
slot = path->slots[0];
@@ -5824,6 +5826,7 @@
if (over)
goto nopos;
+ emitted = true;
di_len = btrfs_dir_name_len(leaf, di) +
btrfs_dir_data_len(leaf, di) + sizeof(*di);
di_cur += di_len;
@@ -5836,11 +5839,20 @@
if (key_type == BTRFS_DIR_INDEX_KEY) {
if (is_curr)
ctx->pos++;
- ret = btrfs_readdir_delayed_dir_index(ctx, &ins_list);
+ ret = btrfs_readdir_delayed_dir_index(ctx, &ins_list, &emitted);
if (ret)
goto nopos;
}
+ /*
+ * If we haven't emitted any dir entry, we must not touch ctx->pos as
+	 * it was set to the termination value in a previous call. We assume
+ * that "." and ".." were emitted if we reach this point and set the
+ * termination value as well for an empty directory.
+ */
+ if (ctx->pos > 2 && !emitted)
+ goto nopos;
+
/* Reached end of directory/root. Bump pos past the last item. */
ctx->pos++;
@@ -7974,6 +7986,7 @@
kfree(dip);
+ dio_bio->bi_error = bio->bi_error;
dio_end_io(dio_bio, bio->bi_error);
if (io_bio->end_io)
@@ -8028,6 +8041,7 @@
kfree(dip);
+ dio_bio->bi_error = bio->bi_error;
dio_end_io(dio_bio, bio->bi_error);
bio_put(bio);
}
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 952172c..48aee98 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2794,24 +2794,29 @@
static struct page *extent_same_get_page(struct inode *inode, pgoff_t index)
{
struct page *page;
- struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;
page = grab_cache_page(inode->i_mapping, index);
if (!page)
- return NULL;
+ return ERR_PTR(-ENOMEM);
if (!PageUptodate(page)) {
- if (extent_read_full_page_nolock(tree, page, btrfs_get_extent,
- 0))
- return NULL;
+ int ret;
+
+ ret = btrfs_readpage(NULL, page);
+ if (ret)
+ return ERR_PTR(ret);
lock_page(page);
if (!PageUptodate(page)) {
unlock_page(page);
page_cache_release(page);
- return NULL;
+ return ERR_PTR(-EIO);
+ }
+ if (page->mapping != inode->i_mapping) {
+ unlock_page(page);
+ page_cache_release(page);
+ return ERR_PTR(-EAGAIN);
}
}
- unlock_page(page);
return page;
}
@@ -2823,17 +2828,31 @@
pgoff_t index = off >> PAGE_CACHE_SHIFT;
for (i = 0; i < num_pages; i++) {
+again:
pages[i] = extent_same_get_page(inode, index + i);
- if (!pages[i])
- return -ENOMEM;
+ if (IS_ERR(pages[i])) {
+ int err = PTR_ERR(pages[i]);
+
+ if (err == -EAGAIN)
+ goto again;
+ pages[i] = NULL;
+ return err;
+ }
}
return 0;
}
-static inline void lock_extent_range(struct inode *inode, u64 off, u64 len)
+static int lock_extent_range(struct inode *inode, u64 off, u64 len,
+ bool retry_range_locking)
{
- /* do any pending delalloc/csum calc on src, one way or
- another, and lock file content */
+ /*
+ * Do any pending delalloc/csum calculations on inode, one way or
+ * another, and lock file content.
+ * The locking order is:
+ *
+ * 1) pages
+ * 2) range in the inode's io tree
+ */
while (1) {
struct btrfs_ordered_extent *ordered;
lock_extent(&BTRFS_I(inode)->io_tree, off, off + len - 1);
@@ -2851,8 +2870,11 @@
unlock_extent(&BTRFS_I(inode)->io_tree, off, off + len - 1);
if (ordered)
btrfs_put_ordered_extent(ordered);
+ if (!retry_range_locking)
+ return -EAGAIN;
btrfs_wait_ordered_range(inode, off, len);
}
+ return 0;
}
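A condensed sketch of the new caller contract (the full version appears in the extent-same path further below). On -EAGAIN the io-tree range has already been unlocked; the real callers also drop their page references before waiting:

	again:
		ret = lock_extent_range(inode, off, len, false);
		if (ret == -EAGAIN) {
			/* flush delalloc and ordered io, then retry the lock */
			btrfs_wait_ordered_range(inode, off, len);
			goto again;
		}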
static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2)
@@ -2877,15 +2899,24 @@
unlock_extent(&BTRFS_I(inode2)->io_tree, loff2, loff2 + len - 1);
}
-static void btrfs_double_extent_lock(struct inode *inode1, u64 loff1,
- struct inode *inode2, u64 loff2, u64 len)
+static int btrfs_double_extent_lock(struct inode *inode1, u64 loff1,
+ struct inode *inode2, u64 loff2, u64 len,
+ bool retry_range_locking)
{
+ int ret;
+
if (inode1 < inode2) {
swap(inode1, inode2);
swap(loff1, loff2);
}
- lock_extent_range(inode1, loff1, len);
- lock_extent_range(inode2, loff2, len);
+ ret = lock_extent_range(inode1, loff1, len, retry_range_locking);
+ if (ret)
+ return ret;
+ ret = lock_extent_range(inode2, loff2, len, retry_range_locking);
+ if (ret)
+ unlock_extent(&BTRFS_I(inode1)->io_tree, loff1,
+ loff1 + len - 1);
+ return ret;
}
struct cmp_pages {
@@ -2901,11 +2932,15 @@
for (i = 0; i < cmp->num_pages; i++) {
pg = cmp->src_pages[i];
- if (pg)
+ if (pg) {
+ unlock_page(pg);
page_cache_release(pg);
+ }
pg = cmp->dst_pages[i];
- if (pg)
+ if (pg) {
+ unlock_page(pg);
page_cache_release(pg);
+ }
}
kfree(cmp->src_pages);
kfree(cmp->dst_pages);
@@ -2966,6 +3001,8 @@
src_page = cmp->src_pages[i];
dst_page = cmp->dst_pages[i];
+ ASSERT(PageLocked(src_page));
+ ASSERT(PageLocked(dst_page));
addr = kmap_atomic(src_page);
dst_addr = kmap_atomic(dst_page);
@@ -3078,14 +3115,46 @@
goto out_unlock;
}
+again:
ret = btrfs_cmp_data_prepare(src, loff, dst, dst_loff, olen, &cmp);
if (ret)
goto out_unlock;
if (same_inode)
- lock_extent_range(src, same_lock_start, same_lock_len);
+ ret = lock_extent_range(src, same_lock_start, same_lock_len,
+ false);
else
- btrfs_double_extent_lock(src, loff, dst, dst_loff, len);
+ ret = btrfs_double_extent_lock(src, loff, dst, dst_loff, len,
+ false);
+ /*
+ * If one of the inodes has dirty pages in the respective range or
+ * ordered extents, we need to flush delalloc and wait for all ordered
+ * extents in the range. We must unlock the pages and the ranges in the
+ * io trees to avoid deadlocks when flushing delalloc (requires locking
+ * pages) and when waiting for ordered extents to complete (they require
+ * range locking).
+ */
+ if (ret == -EAGAIN) {
+ /*
+ * Ranges in the io trees already unlocked. Now unlock all
+ * pages before waiting for all IO to complete.
+ */
+ btrfs_cmp_data_free(&cmp);
+ if (same_inode) {
+ btrfs_wait_ordered_range(src, same_lock_start,
+ same_lock_len);
+ } else {
+ btrfs_wait_ordered_range(src, loff, len);
+ btrfs_wait_ordered_range(dst, dst_loff, len);
+ }
+ goto again;
+ }
+ ASSERT(ret == 0);
+ if (WARN_ON(ret)) {
+ /* ranges in the io trees already unlocked */
+ btrfs_cmp_data_free(&cmp);
+ return ret;
+ }
/* pass original length for comparison so we stay within i_size */
ret = btrfs_cmp_data(src, loff, dst, dst_loff, olen, &cmp);
@@ -3795,9 +3864,15 @@
u64 lock_start = min_t(u64, off, destoff);
u64 lock_len = max_t(u64, off, destoff) + len - lock_start;
- lock_extent_range(src, lock_start, lock_len);
+ ret = lock_extent_range(src, lock_start, lock_len, true);
} else {
- btrfs_double_extent_lock(src, off, inode, destoff, len);
+ ret = btrfs_double_extent_lock(src, off, inode, destoff, len,
+ true);
+ }
+ ASSERT(ret == 0);
+ if (WARN_ON(ret)) {
+ /* ranges in the io trees already unlocked */
+ goto out_unlock;
}
ret = btrfs_clone(src, inode, off, olen, len, destoff, 0);
diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
index 7dc886c..e956cba 100644
--- a/fs/cifs/cifs_dfs_ref.c
+++ b/fs/cifs/cifs_dfs_ref.c
@@ -175,7 +175,7 @@
* string to the length of the original string to allow for worst case.
*/
md_len = strlen(sb_mountdata) + INET6_ADDRSTRLEN;
- mountdata = kzalloc(md_len + 1, GFP_KERNEL);
+ mountdata = kzalloc(md_len + sizeof("ip=") + 1, GFP_KERNEL);
if (mountdata == NULL) {
rc = -ENOMEM;
goto compose_mount_options_err;
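The arithmetic behind the fix, spelled out: sizeof("ip=") is 4 because it counts the trailing NUL, so the allocation now leaves room for the appended option plus a worst-case IPv6 address:

	/* original options + "ip=" prefix + INET6_ADDRSTRLEN address + NUL */
	md_len = strlen(sb_mountdata) + INET6_ADDRSTRLEN;
	mountdata = kzalloc(md_len + sizeof("ip=") + 1, GFP_KERNEL);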
diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
index afa09fc..e682b36 100644
--- a/fs/cifs/cifsencrypt.c
+++ b/fs/cifs/cifsencrypt.c
@@ -714,7 +714,7 @@
ses->auth_key.response = kmalloc(baselen + tilen, GFP_KERNEL);
if (!ses->auth_key.response) {
- rc = ENOMEM;
+ rc = -ENOMEM;
ses->auth_key.len = 0;
goto setup_ntlmv2_rsp_ret;
}
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 4fbd92d..a763cd3 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -2999,8 +2999,7 @@
if (ses_init_buf) {
ses_init_buf->trailer.session_req.called_len = 32;
- if (server->server_RFC1001_name &&
- server->server_RFC1001_name[0] != 0)
+ if (server->server_RFC1001_name[0] != 0)
rfc1002mangle(ses_init_buf->trailer.
session_req.called_name,
server->server_RFC1001_name,
diff --git a/fs/compat_ioctl.c b/fs/compat_ioctl.c
index 6402eaf..bd01b92 100644
--- a/fs/compat_ioctl.c
+++ b/fs/compat_ioctl.c
@@ -1040,28 +1040,6 @@
/* PPPOX */
COMPATIBLE_IOCTL(PPPOEIOCSFWD)
COMPATIBLE_IOCTL(PPPOEIOCDFWD)
-/* ppdev */
-COMPATIBLE_IOCTL(PPSETMODE)
-COMPATIBLE_IOCTL(PPRSTATUS)
-COMPATIBLE_IOCTL(PPRCONTROL)
-COMPATIBLE_IOCTL(PPWCONTROL)
-COMPATIBLE_IOCTL(PPFCONTROL)
-COMPATIBLE_IOCTL(PPRDATA)
-COMPATIBLE_IOCTL(PPWDATA)
-COMPATIBLE_IOCTL(PPCLAIM)
-COMPATIBLE_IOCTL(PPRELEASE)
-COMPATIBLE_IOCTL(PPYIELD)
-COMPATIBLE_IOCTL(PPEXCL)
-COMPATIBLE_IOCTL(PPDATADIR)
-COMPATIBLE_IOCTL(PPNEGOT)
-COMPATIBLE_IOCTL(PPWCTLONIRQ)
-COMPATIBLE_IOCTL(PPCLRIRQ)
-COMPATIBLE_IOCTL(PPSETPHASE)
-COMPATIBLE_IOCTL(PPGETMODES)
-COMPATIBLE_IOCTL(PPGETMODE)
-COMPATIBLE_IOCTL(PPGETPHASE)
-COMPATIBLE_IOCTL(PPGETFLAGS)
-COMPATIBLE_IOCTL(PPSETFLAGS)
/* Big A */
/* sparc only */
/* Big Q for sound/OSS */
diff --git a/fs/dax.c b/fs/dax.c
index fc2e314..7111724 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -79,15 +79,14 @@
}
/*
- * dax_clear_blocks() is called from within transaction context from XFS,
+ * dax_clear_sectors() is called from within transaction context from XFS,
* and hence this means the stack from this point must follow GFP_NOFS
* semantics for all operations.
*/
-int dax_clear_blocks(struct inode *inode, sector_t block, long _size)
+int dax_clear_sectors(struct block_device *bdev, sector_t _sector, long _size)
{
- struct block_device *bdev = inode->i_sb->s_bdev;
struct blk_dax_ctl dax = {
- .sector = block << (inode->i_blkbits - 9),
+ .sector = _sector,
.size = _size,
};
@@ -109,7 +108,7 @@
wmb_pmem();
return 0;
}
-EXPORT_SYMBOL_GPL(dax_clear_blocks);
+EXPORT_SYMBOL_GPL(dax_clear_sectors);
/* the clear_pmem() calls are ordered by a wmb_pmem() in the caller */
static void dax_new_buf(void __pmem *addr, unsigned size, unsigned first,
@@ -485,11 +484,10 @@
* end]. This is required by data integrity operations to ensure file data is
* on persistent storage prior to completion of the operation.
*/
-int dax_writeback_mapping_range(struct address_space *mapping, loff_t start,
- loff_t end)
+int dax_writeback_mapping_range(struct address_space *mapping,
+ struct block_device *bdev, struct writeback_control *wbc)
{
struct inode *inode = mapping->host;
- struct block_device *bdev = inode->i_sb->s_bdev;
pgoff_t start_index, end_index, pmd_index;
pgoff_t indices[PAGEVEC_SIZE];
struct pagevec pvec;
@@ -500,8 +498,11 @@
if (WARN_ON_ONCE(inode->i_blkbits != PAGE_SHIFT))
return -EIO;
- start_index = start >> PAGE_CACHE_SHIFT;
- end_index = end >> PAGE_CACHE_SHIFT;
+ if (!mapping->nrexceptional || wbc->sync_mode != WB_SYNC_ALL)
+ return 0;
+
+ start_index = wbc->range_start >> PAGE_CACHE_SHIFT;
+ end_index = wbc->range_end >> PAGE_CACHE_SHIFT;
pmd_index = DAX_PMD_INDEX(start_index);
rcu_read_lock();
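With the new signature the caller supplies the block device and the writeback_control; a minimal sketch of a filesystem ->writepages hook using it, mirroring the ext2/ext4/xfs hunks below:

	static int fs_writepages(struct address_space *mapping,
				 struct writeback_control *wbc)
	{
		if (dax_mapping(mapping))
			return dax_writeback_mapping_range(mapping,
					mapping->host->i_sb->s_bdev, wbc);
		return generic_writepages(mapping, wbc);
	}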
diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
index 1f107fd..655f21f 100644
--- a/fs/devpts/inode.c
+++ b/fs/devpts/inode.c
@@ -575,6 +575,26 @@
mutex_unlock(&allocated_ptys_lock);
}
+/*
+ * pty code needs to hold extra references in case of last /dev/tty close
+ */
+
+void devpts_add_ref(struct inode *ptmx_inode)
+{
+ struct super_block *sb = pts_sb_from_inode(ptmx_inode);
+
+ atomic_inc(&sb->s_active);
+ ihold(ptmx_inode);
+}
+
+void devpts_del_ref(struct inode *ptmx_inode)
+{
+ struct super_block *sb = pts_sb_from_inode(ptmx_inode);
+
+ iput(ptmx_inode);
+ deactivate_super(sb);
+}
+
/**
* devpts_pty_new -- create a new inode in /dev/pts/
* @ptmx_inode: inode of the master
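A sketch of the intended pairing; the tty-side call sites are not part of this hunk, so their placement here is an assumption:

	/* when the pty master is set up, pin the superblock and the inode */
	devpts_add_ref(ptmx_inode);

	/* ... the slave or /dev/tty may now outlive the ptmx open ... */

	/* at final teardown, drop both references */
	devpts_del_ref(ptmx_inode);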
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 1b2f7ff..d6a9012 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -472,8 +472,8 @@
dio->io_error = -EIO;
if (dio->is_async && dio->rw == READ && dio->should_dirty) {
- bio_check_pages_dirty(bio); /* transfers ownership */
err = bio->bi_error;
+ bio_check_pages_dirty(bio); /* transfers ownership */
} else {
bio_for_each_segment_all(bvec, bio, i) {
struct page *page = bvec->bv_page;
diff --git a/fs/efivarfs/file.c b/fs/efivarfs/file.c
index c424e48..d48e0d2 100644
--- a/fs/efivarfs/file.c
+++ b/fs/efivarfs/file.c
@@ -10,6 +10,7 @@
#include <linux/efi.h>
#include <linux/fs.h>
#include <linux/slab.h>
+#include <linux/mount.h>
#include "internal.h"
@@ -103,9 +104,78 @@
return size;
}
+static int
+efivarfs_ioc_getxflags(struct file *file, void __user *arg)
+{
+ struct inode *inode = file->f_mapping->host;
+ unsigned int i_flags;
+ unsigned int flags = 0;
+
+ i_flags = inode->i_flags;
+ if (i_flags & S_IMMUTABLE)
+ flags |= FS_IMMUTABLE_FL;
+
+ if (copy_to_user(arg, &flags, sizeof(flags)))
+ return -EFAULT;
+ return 0;
+}
+
+static int
+efivarfs_ioc_setxflags(struct file *file, void __user *arg)
+{
+ struct inode *inode = file->f_mapping->host;
+ unsigned int flags;
+ unsigned int i_flags = 0;
+ int error;
+
+ if (!inode_owner_or_capable(inode))
+ return -EACCES;
+
+ if (copy_from_user(&flags, arg, sizeof(flags)))
+ return -EFAULT;
+
+ if (flags & ~FS_IMMUTABLE_FL)
+ return -EOPNOTSUPP;
+
+ if (!capable(CAP_LINUX_IMMUTABLE))
+ return -EPERM;
+
+ if (flags & FS_IMMUTABLE_FL)
+ i_flags |= S_IMMUTABLE;
+
+
+ error = mnt_want_write_file(file);
+ if (error)
+ return error;
+
+ inode_lock(inode);
+ inode_set_flags(inode, i_flags, S_IMMUTABLE);
+ inode_unlock(inode);
+
+ mnt_drop_write_file(file);
+
+ return 0;
+}
+
+long
+efivarfs_file_ioctl(struct file *file, unsigned int cmd, unsigned long p)
+{
+ void __user *arg = (void __user *)p;
+
+ switch (cmd) {
+ case FS_IOC_GETFLAGS:
+ return efivarfs_ioc_getxflags(file, arg);
+ case FS_IOC_SETFLAGS:
+ return efivarfs_ioc_setxflags(file, arg);
+ }
+
+ return -ENOTTY;
+}
+
const struct file_operations efivarfs_file_operations = {
.open = simple_open,
.read = efivarfs_file_read,
.write = efivarfs_file_write,
.llseek = no_llseek,
+ .unlocked_ioctl = efivarfs_file_ioctl,
};
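These are the standard FS_IOC_GETFLAGS/FS_IOC_SETFLAGS ioctls, so existing chattr-style tooling works; a userspace sketch (hypothetical path) of clearing the immutable bit before deleting a variable:

	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>

	int make_mutable(const char *path)	/* an efivarfs file */
	{
		unsigned int flags;
		int fd = open(path, O_RDONLY);

		if (fd < 0)
			return -1;
		if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0) {
			flags &= ~FS_IMMUTABLE_FL;
			ioctl(fd, FS_IOC_SETFLAGS, &flags);
		}
		close(fd);
		return 0;
	}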
diff --git a/fs/efivarfs/inode.c b/fs/efivarfs/inode.c
index 3381b9d..e2ab6d0 100644
--- a/fs/efivarfs/inode.c
+++ b/fs/efivarfs/inode.c
@@ -15,7 +15,8 @@
#include "internal.h"
struct inode *efivarfs_get_inode(struct super_block *sb,
- const struct inode *dir, int mode, dev_t dev)
+ const struct inode *dir, int mode,
+ dev_t dev, bool is_removable)
{
struct inode *inode = new_inode(sb);
@@ -23,6 +24,7 @@
inode->i_ino = get_next_ino();
inode->i_mode = mode;
inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ inode->i_flags = is_removable ? 0 : S_IMMUTABLE;
switch (mode & S_IFMT) {
case S_IFREG:
inode->i_fop = &efivarfs_file_operations;
@@ -102,22 +104,17 @@
static int efivarfs_create(struct inode *dir, struct dentry *dentry,
umode_t mode, bool excl)
{
- struct inode *inode;
+ struct inode *inode = NULL;
struct efivar_entry *var;
int namelen, i = 0, err = 0;
+ bool is_removable = false;
if (!efivarfs_valid_name(dentry->d_name.name, dentry->d_name.len))
return -EINVAL;
- inode = efivarfs_get_inode(dir->i_sb, dir, mode, 0);
- if (!inode)
- return -ENOMEM;
-
var = kzalloc(sizeof(struct efivar_entry), GFP_KERNEL);
- if (!var) {
- err = -ENOMEM;
- goto out;
- }
+ if (!var)
+ return -ENOMEM;
/* length of the variable name itself: remove GUID and separator */
namelen = dentry->d_name.len - EFI_VARIABLE_GUID_LEN - 1;
@@ -125,6 +122,16 @@
efivarfs_hex_to_guid(dentry->d_name.name + namelen + 1,
&var->var.VendorGuid);
+ if (efivar_variable_is_removable(var->var.VendorGuid,
+ dentry->d_name.name, namelen))
+ is_removable = true;
+
+ inode = efivarfs_get_inode(dir->i_sb, dir, mode, 0, is_removable);
+ if (!inode) {
+ err = -ENOMEM;
+ goto out;
+ }
+
for (i = 0; i < namelen; i++)
var->var.VariableName[i] = dentry->d_name.name[i];
@@ -138,7 +145,8 @@
out:
if (err) {
kfree(var);
- iput(inode);
+ if (inode)
+ iput(inode);
}
return err;
}
diff --git a/fs/efivarfs/internal.h b/fs/efivarfs/internal.h
index b5ff16a..b450518 100644
--- a/fs/efivarfs/internal.h
+++ b/fs/efivarfs/internal.h
@@ -15,7 +15,8 @@
extern const struct inode_operations efivarfs_dir_inode_operations;
extern bool efivarfs_valid_name(const char *str, int len);
extern struct inode *efivarfs_get_inode(struct super_block *sb,
- const struct inode *dir, int mode, dev_t dev);
+ const struct inode *dir, int mode, dev_t dev,
+ bool is_removable);
extern struct list_head efivarfs_list;
diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
index b8a564f..dd029d1 100644
--- a/fs/efivarfs/super.c
+++ b/fs/efivarfs/super.c
@@ -118,8 +118,9 @@
struct dentry *dentry, *root = sb->s_root;
unsigned long size = 0;
char *name;
- int len, i;
+ int len;
int err = -ENOMEM;
+ bool is_removable = false;
entry = kzalloc(sizeof(*entry), GFP_KERNEL);
if (!entry)
@@ -128,15 +129,17 @@
memcpy(entry->var.VariableName, name16, name_size);
memcpy(&(entry->var.VendorGuid), &vendor, sizeof(efi_guid_t));
- len = ucs2_strlen(entry->var.VariableName);
+ len = ucs2_utf8size(entry->var.VariableName);
/* name, plus '-', plus GUID, plus NUL */
name = kmalloc(len + 1 + EFI_VARIABLE_GUID_LEN + 1, GFP_KERNEL);
if (!name)
goto fail;
- for (i = 0; i < len; i++)
- name[i] = entry->var.VariableName[i] & 0xFF;
+ ucs2_as_utf8(name, entry->var.VariableName, len);
+
+ if (efivar_variable_is_removable(entry->var.VendorGuid, name, len))
+ is_removable = true;
name[len] = '-';
@@ -144,7 +147,8 @@
name[len + EFI_VARIABLE_GUID_LEN+1] = '\0';
- inode = efivarfs_get_inode(sb, d_inode(root), S_IFREG | 0644, 0);
+ inode = efivarfs_get_inode(sb, d_inode(root), S_IFREG | 0644, 0,
+ is_removable);
if (!inode)
goto fail_name;
@@ -200,7 +204,7 @@
sb->s_d_op = &efivarfs_d_ops;
sb->s_time_gran = 1;
- inode = efivarfs_get_inode(sb, NULL, S_IFDIR | 0755, 0);
+ inode = efivarfs_get_inode(sb, NULL, S_IFDIR | 0755, 0, true);
if (!inode)
return -ENOMEM;
inode->i_op = &efivarfs_dir_inode_operations;
diff --git a/fs/ext2/file.c b/fs/ext2/file.c
index 2c88d68..c1400b1 100644
--- a/fs/ext2/file.c
+++ b/fs/ext2/file.c
@@ -80,23 +80,6 @@
return ret;
}
-static int ext2_dax_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
-{
- struct inode *inode = file_inode(vma->vm_file);
- struct ext2_inode_info *ei = EXT2_I(inode);
- int ret;
-
- sb_start_pagefault(inode->i_sb);
- file_update_time(vma->vm_file);
- down_read(&ei->dax_sem);
-
- ret = __dax_mkwrite(vma, vmf, ext2_get_block, NULL);
-
- up_read(&ei->dax_sem);
- sb_end_pagefault(inode->i_sb);
- return ret;
-}
-
static int ext2_dax_pfn_mkwrite(struct vm_area_struct *vma,
struct vm_fault *vmf)
{
@@ -124,7 +107,7 @@
static const struct vm_operations_struct ext2_dax_vm_ops = {
.fault = ext2_dax_fault,
.pmd_fault = ext2_dax_pmd_fault,
- .page_mkwrite = ext2_dax_mkwrite,
+ .page_mkwrite = ext2_dax_fault,
.pfn_mkwrite = ext2_dax_pfn_mkwrite,
};
diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
index 338eefd..6bd58e6 100644
--- a/fs/ext2/inode.c
+++ b/fs/ext2/inode.c
@@ -737,8 +737,10 @@
* so that it's not found by another thread before it's
* initialised
*/
- err = dax_clear_blocks(inode, le32_to_cpu(chain[depth-1].key),
- 1 << inode->i_blkbits);
+ err = dax_clear_sectors(inode->i_sb->s_bdev,
+ le32_to_cpu(chain[depth-1].key) <<
+ (inode->i_blkbits - 9),
+ 1 << inode->i_blkbits);
if (err) {
mutex_unlock(&ei->truncate_mutex);
goto cleanup;
@@ -874,6 +876,14 @@
static int
ext2_writepages(struct address_space *mapping, struct writeback_control *wbc)
{
+#ifdef CONFIG_FS_DAX
+ if (dax_mapping(mapping)) {
+ return dax_writeback_mapping_range(mapping,
+ mapping->host->i_sb->s_bdev,
+ wbc);
+ }
+#endif
+
return mpage_writepages(mapping, wbc, ext2_get_block);
}
@@ -1296,7 +1306,7 @@
inode->i_flags |= S_NOATIME;
if (flags & EXT2_DIRSYNC_FL)
inode->i_flags |= S_DIRSYNC;
- if (test_opt(inode->i_sb, DAX))
+ if (test_opt(inode->i_sb, DAX) && S_ISREG(inode->i_mode))
inode->i_flags |= S_DAX;
}
diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
index ec0668a..fe1f50f 100644
--- a/fs/ext4/balloc.c
+++ b/fs/ext4/balloc.c
@@ -191,7 +191,6 @@
/* If checksum is bad mark all blocks used to prevent allocation
* essentially implementing a per-group read-only flag. */
if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) {
- ext4_error(sb, "Checksum bad for group %u", block_group);
grp = ext4_get_group_info(sb, block_group);
if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp))
percpu_counter_sub(&sbi->s_freeclusters_counter,
@@ -442,14 +441,16 @@
}
ext4_lock_group(sb, block_group);
if (desc->bg_flags & cpu_to_le16(EXT4_BG_BLOCK_UNINIT)) {
-
err = ext4_init_block_bitmap(sb, bh, block_group, desc);
set_bitmap_uptodate(bh);
set_buffer_uptodate(bh);
ext4_unlock_group(sb, block_group);
unlock_buffer(bh);
- if (err)
+ if (err) {
+ ext4_error(sb, "Failed to init block bitmap for group "
+ "%u: %d", block_group, err);
goto out;
+ }
goto verify;
}
ext4_unlock_group(sb, block_group);
diff --git a/fs/ext4/crypto.c b/fs/ext4/crypto.c
index c802120..38f7562 100644
--- a/fs/ext4/crypto.c
+++ b/fs/ext4/crypto.c
@@ -467,3 +467,59 @@
return size;
return 0;
}
+
+/*
+ * Validate dentries for encrypted directories to make sure we aren't
+ * potentially caching stale data after a key has been added or
+ * removed.
+ */
+static int ext4_d_revalidate(struct dentry *dentry, unsigned int flags)
+{
+ struct inode *dir = d_inode(dentry->d_parent);
+ struct ext4_crypt_info *ci = EXT4_I(dir)->i_crypt_info;
+ int dir_has_key, cached_with_key;
+
+ if (!ext4_encrypted_inode(dir))
+ return 0;
+
+ if (ci && ci->ci_keyring_key &&
+ (ci->ci_keyring_key->flags & ((1 << KEY_FLAG_INVALIDATED) |
+ (1 << KEY_FLAG_REVOKED) |
+ (1 << KEY_FLAG_DEAD))))
+ ci = NULL;
+
+ /* this should eventually be a flag in d_flags */
+ cached_with_key = dentry->d_fsdata != NULL;
+ dir_has_key = (ci != NULL);
+
+ /*
+ * If the dentry was cached without the key, and it is a
+ * negative dentry, it might be a valid name. We can't check
+ * if the key has since been made available due to locking
+ * reasons, so we fail the validation and let ext4_lookup() do
+ * this check.
+ *
+ * We also fail the validation if the dentry was created with
+ * the key present, but we no longer have the key, or vice versa.
+ */
+ if ((!cached_with_key && d_is_negative(dentry)) ||
+ (!cached_with_key && dir_has_key) ||
+ (cached_with_key && !dir_has_key)) {
+#if 0 /* Revalidation debug */
+ char buf[80];
+ char *cp = simple_dname(dentry, buf, sizeof(buf));
+
+ if (IS_ERR(cp))
+ cp = (char *) "???";
+ pr_err("revalidate: %s %p %d %d %d\n", cp, dentry->d_fsdata,
+ cached_with_key, d_is_negative(dentry),
+ dir_has_key);
+#endif
+ return 0;
+ }
+ return 1;
+}
+
+const struct dentry_operations ext4_encrypted_d_ops = {
+ .d_revalidate = ext4_d_revalidate,
+};
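The hook only takes effect once it is attached at lookup time; the wiring, taken from the fs/ext4/namei.c hunk later in this diff, looks like:

	/* in ext4_lookup(), for an encrypted parent directory */
	dentry->d_fsdata = NULL;
	if (ext4_encryption_info(dir))
		dentry->d_fsdata = (void *) 1;	/* key present at creation */
	d_set_d_op(dentry, &ext4_encrypted_d_ops);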
diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
index 1d1bca7..33f5e2a 100644
--- a/fs/ext4/dir.c
+++ b/fs/ext4/dir.c
@@ -111,6 +111,12 @@
int dir_has_error = 0;
struct ext4_str fname_crypto_str = {.name = NULL, .len = 0};
+ if (ext4_encrypted_inode(inode)) {
+ err = ext4_get_encryption_info(inode);
+ if (err && err != -ENOKEY)
+ return err;
+ }
+
if (is_dx_dir(inode)) {
err = ext4_dx_readdir(file, ctx);
if (err != ERR_BAD_DX_DIR) {
@@ -157,8 +163,11 @@
index, 1);
file->f_ra.prev_pos = (loff_t)index << PAGE_CACHE_SHIFT;
bh = ext4_bread(NULL, inode, map.m_lblk, 0);
- if (IS_ERR(bh))
- return PTR_ERR(bh);
+ if (IS_ERR(bh)) {
+ err = PTR_ERR(bh);
+ bh = NULL;
+ goto errout;
+ }
}
if (!bh) {
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 0662b28..157b458 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -2302,6 +2302,7 @@
int ext4_decrypt(struct page *page);
int ext4_encrypted_zeroout(struct inode *inode, ext4_lblk_t lblk,
ext4_fsblk_t pblk, ext4_lblk_t len);
+extern const struct dentry_operations ext4_encrypted_d_ops;
#ifdef CONFIG_EXT4_FS_ENCRYPTION
int ext4_init_crypto(void);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 0ffabaf..3753ceb 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -3928,7 +3928,7 @@
convert_initialized_extent(handle_t *handle, struct inode *inode,
struct ext4_map_blocks *map,
struct ext4_ext_path **ppath, int flags,
- unsigned int allocated, ext4_fsblk_t newblock)
+ unsigned int allocated)
{
struct ext4_ext_path *path = *ppath;
struct ext4_extent *ex;
@@ -4347,7 +4347,7 @@
(flags & EXT4_GET_BLOCKS_CONVERT_UNWRITTEN)) {
allocated = convert_initialized_extent(
handle, inode, map, &path,
- flags, allocated, newblock);
+ flags, allocated);
goto out2;
} else if (!ext4_ext_is_unwritten(ex))
goto out;
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 1126436..4cd318f 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -262,23 +262,8 @@
return result;
}
-static int ext4_dax_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf)
-{
- int err;
- struct inode *inode = file_inode(vma->vm_file);
-
- sb_start_pagefault(inode->i_sb);
- file_update_time(vma->vm_file);
- down_read(&EXT4_I(inode)->i_mmap_sem);
- err = __dax_mkwrite(vma, vmf, ext4_dax_mmap_get_block, NULL);
- up_read(&EXT4_I(inode)->i_mmap_sem);
- sb_end_pagefault(inode->i_sb);
-
- return err;
-}
-
/*
- * Handle write fault for VM_MIXEDMAP mappings. Similarly to ext4_dax_mkwrite()
+ * Handle write fault for VM_MIXEDMAP mappings. Similarly to ext4_dax_fault()
* handler we check for races against truncate. Note that since we cycle through
* i_mmap_sem, we are sure that also any hole punching that began before we
* were called is finished by now and so if it included part of the file we
@@ -311,7 +296,7 @@
static const struct vm_operations_struct ext4_dax_vm_ops = {
.fault = ext4_dax_fault,
.pmd_fault = ext4_dax_pmd_fault,
- .page_mkwrite = ext4_dax_mkwrite,
+ .page_mkwrite = ext4_dax_fault,
.pfn_mkwrite = ext4_dax_pfn_mkwrite,
};
#else
@@ -350,6 +335,7 @@
struct super_block *sb = inode->i_sb;
struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
struct vfsmount *mnt = filp->f_path.mnt;
+ struct inode *dir = filp->f_path.dentry->d_parent->d_inode;
struct path path;
char buf[64], *cp;
int ret;
@@ -393,6 +379,14 @@
if (ext4_encryption_info(inode) == NULL)
return -ENOKEY;
}
+ if (ext4_encrypted_inode(dir) &&
+ !ext4_is_child_context_consistent_with_parent(dir, inode)) {
+ ext4_warning(inode->i_sb,
+ "Inconsistent encryption contexts: %lu/%lu\n",
+ (unsigned long) dir->i_ino,
+ (unsigned long) inode->i_ino);
+ return -EPERM;
+ }
/*
* Set up the jbd2_inode if we are opening the inode for
* writing and the journal is present
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index 3fcfd50..acc0ad5 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -76,7 +76,6 @@
/* If checksum is bad mark all blocks and inodes used to prevent
* allocation, essentially implementing a per-group read-only flag. */
if (!ext4_group_desc_csum_verify(sb, block_group, gdp)) {
- ext4_error(sb, "Checksum bad for group %u", block_group);
grp = ext4_get_group_info(sb, block_group);
if (!EXT4_MB_GRP_BBITMAP_CORRUPT(grp))
percpu_counter_sub(&sbi->s_freeclusters_counter,
@@ -191,8 +190,11 @@
set_buffer_verified(bh);
ext4_unlock_group(sb, block_group);
unlock_buffer(bh);
- if (err)
+ if (err) {
+ ext4_error(sb, "Failed to init inode bitmap for group "
+ "%u: %d", block_group, err);
goto out;
+ }
return bh;
}
ext4_unlock_group(sb, block_group);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 83bc8bf..aee960b 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -686,6 +686,34 @@
return retval;
}
+/*
+ * Update EXT4_MAP_FLAGS in bh->b_state. For buffer heads attached to pages
+ * we have to be careful as someone else may be manipulating b_state as well.
+ */
+static void ext4_update_bh_state(struct buffer_head *bh, unsigned long flags)
+{
+ unsigned long old_state;
+ unsigned long new_state;
+
+ flags &= EXT4_MAP_FLAGS;
+
+ /* Dummy buffer_head? Set non-atomically. */
+ if (!bh->b_page) {
+ bh->b_state = (bh->b_state & ~EXT4_MAP_FLAGS) | flags;
+ return;
+ }
+ /*
+ * Someone else may be modifying b_state. Be careful! This is ugly but
+ * once we get rid of using bh as a container for mapping information
+ * to pass to / from get_block functions, this can go away.
+ */
+ do {
+ old_state = READ_ONCE(bh->b_state);
+ new_state = (old_state & ~EXT4_MAP_FLAGS) | flags;
+ } while (unlikely(
+ cmpxchg(&bh->b_state, old_state, new_state) != old_state));
+}
+
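The loop above is the standard lock-free read-modify-write idiom; stripped of the ext4 specifics it reduces to this generic sketch:

	unsigned long old, new;

	do {
		old = READ_ONCE(*word);			/* snapshot */
		new = (old & ~MASK) | bits;		/* recompute from snapshot */
	} while (cmpxchg(word, old, new) != old);	/* retry if we raced */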
/* Maximum number of blocks we map for direct IO at once. */
#define DIO_MAX_BLOCKS 4096
@@ -722,7 +750,7 @@
ext4_io_end_t *io_end = ext4_inode_aio(inode);
map_bh(bh, inode->i_sb, map.m_pblk);
- bh->b_state = (bh->b_state & ~EXT4_MAP_FLAGS) | map.m_flags;
+ ext4_update_bh_state(bh, map.m_flags);
if (io_end && io_end->flag & EXT4_IO_END_UNWRITTEN)
set_buffer_defer_completion(bh);
bh->b_size = inode->i_sb->s_blocksize * map.m_len;
@@ -1685,7 +1713,7 @@
return ret;
map_bh(bh, inode->i_sb, map.m_pblk);
- bh->b_state = (bh->b_state & ~EXT4_MAP_FLAGS) | map.m_flags;
+ ext4_update_bh_state(bh, map.m_flags);
if (buffer_unwritten(bh)) {
/* A delayed write to unwritten bh should be marked
@@ -2450,6 +2478,10 @@
trace_ext4_writepages(inode, wbc);
+ if (dax_mapping(mapping))
+ return dax_writeback_mapping_range(mapping, inode->i_sb->s_bdev,
+ wbc);
+
/*
* No pages to write? This is mainly a kludge to avoid starting
* a transaction for special inodes like journal inode on last iput()
@@ -3253,29 +3285,29 @@
* case, we allocate an io_end structure to hook to the iocb.
*/
iocb->private = NULL;
- ext4_inode_aio_set(inode, NULL);
- if (!is_sync_kiocb(iocb)) {
- io_end = ext4_init_io_end(inode, GFP_NOFS);
- if (!io_end) {
- ret = -ENOMEM;
- goto retake_lock;
- }
- /*
- * Grab reference for DIO. Will be dropped in ext4_end_io_dio()
- */
- iocb->private = ext4_get_io_end(io_end);
- /*
- * we save the io structure for current async direct
- * IO, so that later ext4_map_blocks() could flag the
- * io structure whether there is a unwritten extents
- * needs to be converted when IO is completed.
- */
- ext4_inode_aio_set(inode, io_end);
- }
-
if (overwrite) {
get_block_func = ext4_get_block_overwrite;
} else {
+ ext4_inode_aio_set(inode, NULL);
+ if (!is_sync_kiocb(iocb)) {
+ io_end = ext4_init_io_end(inode, GFP_NOFS);
+ if (!io_end) {
+ ret = -ENOMEM;
+ goto retake_lock;
+ }
+ /*
+ * Grab reference for DIO. Will be dropped in
+ * ext4_end_io_dio()
+ */
+ iocb->private = ext4_get_io_end(io_end);
+ /*
+ * we save the io structure for the current async direct
+ * IO, so that later ext4_map_blocks() can flag the
+ * io structure when there are unwritten extents that
+ * need to be converted once the IO completes.
+ */
+ ext4_inode_aio_set(inode, io_end);
+ }
get_block_func = ext4_get_block_write;
dio_flags = DIO_LOCKING;
}
@@ -4127,7 +4159,7 @@
new_fl |= S_NOATIME;
if (flags & EXT4_DIRSYNC_FL)
new_fl |= S_DIRSYNC;
- if (test_opt(inode->i_sb, DAX))
+ if (test_opt(inode->i_sb, DAX) && S_ISREG(inode->i_mode))
new_fl |= S_DAX;
inode_set_flags(inode, new_fl,
S_SYNC|S_APPEND|S_IMMUTABLE|S_NOATIME|S_DIRSYNC|S_DAX);
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index 0f6c369..eae5917 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -208,7 +208,7 @@
{
struct ext4_inode_info *ei = EXT4_I(inode);
handle_t *handle = NULL;
- int err = EPERM, migrate = 0;
+ int err = -EPERM, migrate = 0;
struct ext4_iloc iloc;
unsigned int oldflags, mask, i;
unsigned int jflag;
@@ -583,6 +583,11 @@
"Online defrag not supported with bigalloc");
err = -EOPNOTSUPP;
goto mext_out;
+ } else if (IS_DAX(inode)) {
+ ext4_msg(sb, KERN_ERR,
+ "Online defrag not supported with DAX");
+ err = -EOPNOTSUPP;
+ goto mext_out;
}
err = mnt_want_write_file(filp);
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 61eaf74..4424b7b 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2285,7 +2285,7 @@
if (group == 0)
seq_puts(seq, "#group: free frags first ["
" 2^0 2^1 2^2 2^3 2^4 2^5 2^6 "
- " 2^7 2^8 2^9 2^10 2^11 2^12 2^13 ]");
+ " 2^7 2^8 2^9 2^10 2^11 2^12 2^13 ]\n");
i = (sb->s_blocksize_bits + 2) * sizeof(sg.info.bb_counters[0]) +
sizeof(struct ext4_group_info);
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index fb6f117..e032a04 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -265,11 +265,12 @@
ext4_lblk_t orig_blk_offset, donor_blk_offset;
unsigned long blocksize = orig_inode->i_sb->s_blocksize;
unsigned int tmp_data_size, data_size, replaced_size;
- int err2, jblocks, retries = 0;
+ int i, err2, jblocks, retries = 0;
int replaced_count = 0;
int from = data_offset_in_page << orig_inode->i_blkbits;
int blocks_per_page = PAGE_CACHE_SIZE >> orig_inode->i_blkbits;
struct super_block *sb = orig_inode->i_sb;
+ struct buffer_head *bh = NULL;
/*
* It needs twice the amount of ordinary journal buffers because
@@ -380,8 +381,16 @@
}
/* Perform all necessary steps similar write_begin()/write_end()
* but keeping in mind that i_size will not change */
- *err = __block_write_begin(pagep[0], from, replaced_size,
- ext4_get_block);
+ if (!page_has_buffers(pagep[0]))
+ create_empty_buffers(pagep[0], 1 << orig_inode->i_blkbits, 0);
+ bh = page_buffers(pagep[0]);
+ for (i = 0; i < data_offset_in_page; i++)
+ bh = bh->b_this_page;
+ for (i = 0; i < block_len_in_page; i++) {
+ *err = ext4_get_block(orig_inode, orig_blk_offset + i, bh, 0);
+ if (*err < 0)
+ break;
+ }
if (!*err)
*err = block_commit_write(pagep[0], from, from + replaced_size);
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
index 06574dd..48e4b89 100644
--- a/fs/ext4/namei.c
+++ b/fs/ext4/namei.c
@@ -1558,6 +1558,24 @@
struct ext4_dir_entry_2 *de;
struct buffer_head *bh;
+ if (ext4_encrypted_inode(dir)) {
+ int res = ext4_get_encryption_info(dir);
+
+ /*
+ * This should be a properly defined flag for
+ * dentry->d_flags when we uplift this to the VFS.
+ * d_fsdata is set to (void *) 1 if the dentry is
+ * created while the directory was encrypted and we
+ * don't have access to the key.
+ */
+ dentry->d_fsdata = NULL;
+ if (ext4_encryption_info(dir))
+ dentry->d_fsdata = (void *) 1;
+ d_set_d_op(dentry, &ext4_encrypted_d_ops);
+ if (res && res != -ENOKEY)
+ return ERR_PTR(res);
+ }
+
if (dentry->d_name.len > EXT4_NAME_LEN)
return ERR_PTR(-ENAMETOOLONG);
@@ -1585,11 +1603,15 @@
return ERR_PTR(-EFSCORRUPTED);
}
if (!IS_ERR(inode) && ext4_encrypted_inode(dir) &&
- (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode) ||
- S_ISLNK(inode->i_mode)) &&
+ (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)) &&
!ext4_is_child_context_consistent_with_parent(dir,
inode)) {
+ int nokey = ext4_encrypted_inode(inode) &&
+ !ext4_encryption_info(inode);
+
iput(inode);
+ if (nokey)
+ return ERR_PTR(-ENOKEY);
ext4_warning(inode->i_sb,
"Inconsistent encryption contexts: %lu/%lu\n",
(unsigned long) dir->i_ino,
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index ad62d7a..34038e3 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -198,7 +198,7 @@
if (flex_gd == NULL)
goto out3;
- if (flexbg_size >= UINT_MAX / sizeof(struct ext4_new_flex_group_data))
+ if (flexbg_size >= UINT_MAX / sizeof(struct ext4_new_group_data))
goto out2;
flex_gd->count = flexbg_size;
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 6915c95..1f76d89 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -317,6 +317,7 @@
struct inode_switch_wbs_context *isw =
container_of(work, struct inode_switch_wbs_context, work);
struct inode *inode = isw->inode;
+ struct super_block *sb = inode->i_sb;
struct address_space *mapping = inode->i_mapping;
struct bdi_writeback *old_wb = inode->i_wb;
struct bdi_writeback *new_wb = isw->new_wb;
@@ -423,6 +424,7 @@
wb_put(new_wb);
iput(inode);
+ deactivate_super(sb);
kfree(isw);
}
@@ -469,11 +471,14 @@
/* while holding I_WB_SWITCH, no one else can update the association */
spin_lock(&inode->i_lock);
+
if (inode->i_state & (I_WB_SWITCH | I_FREEING) ||
- inode_to_wb(inode) == isw->new_wb) {
- spin_unlock(&inode->i_lock);
- goto out_free;
- }
+ inode_to_wb(inode) == isw->new_wb)
+ goto out_unlock;
+
+ if (!atomic_inc_not_zero(&inode->i_sb->s_active))
+ goto out_unlock;
+
inode->i_state |= I_WB_SWITCH;
spin_unlock(&inode->i_lock);
@@ -489,6 +494,8 @@
call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
return;
+out_unlock:
+ spin_unlock(&inode->i_lock);
out_free:
if (isw->new_wb)
wb_put(isw->new_wb);
diff --git a/fs/hpfs/namei.c b/fs/hpfs/namei.c
index 506765a..bb8d67e 100644
--- a/fs/hpfs/namei.c
+++ b/fs/hpfs/namei.c
@@ -376,12 +376,11 @@
struct inode *inode = d_inode(dentry);
dnode_secno dno;
int r;
- int rep = 0;
int err;
hpfs_lock(dir->i_sb);
hpfs_adjust_length(name, &len);
-again:
+
err = -ENOENT;
de = map_dirent(dir, hpfs_i(dir)->i_dno, name, len, &dno, &qbh);
if (!de)
@@ -401,33 +400,9 @@
hpfs_error(dir->i_sb, "there was error when removing dirent");
err = -EFSERROR;
break;
- case 2: /* no space for deleting, try to truncate file */
-
+ case 2: /* no space for deleting */
err = -ENOSPC;
- if (rep++)
- break;
-
- dentry_unhash(dentry);
- if (!d_unhashed(dentry)) {
- hpfs_unlock(dir->i_sb);
- return -ENOSPC;
- }
- if (generic_permission(inode, MAY_WRITE) ||
- !S_ISREG(inode->i_mode) ||
- get_write_access(inode)) {
- d_rehash(dentry);
- } else {
- struct iattr newattrs;
- /*pr_info("truncating file before delete.\n");*/
- newattrs.ia_size = 0;
- newattrs.ia_valid = ATTR_SIZE | ATTR_CTIME;
- err = notify_change(dentry, &newattrs, NULL);
- put_write_access(inode);
- if (!err)
- goto again;
- }
- hpfs_unlock(dir->i_sb);
- return -ENOSPC;
+ break;
default:
drop_nlink(inode);
err = 0;
diff --git a/fs/inode.c b/fs/inode.c
index 9f62db3..69b8b52 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -154,6 +154,12 @@
inode->i_rdev = 0;
inode->dirtied_when = 0;
+#ifdef CONFIG_CGROUP_WRITEBACK
+ inode->i_wb_frn_winner = 0;
+ inode->i_wb_frn_avg_time = 0;
+ inode->i_wb_frn_history = 0;
+#endif
+
if (security_inode_alloc(inode))
goto out;
spin_lock_init(&inode->i_lock);
diff --git a/fs/namei.c b/fs/namei.c
index f624d13..9c590e0 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -1712,6 +1712,11 @@
return 0;
if (!follow)
return 0;
+ /* make sure that d_is_symlink above matches inode */
+ if (nd->flags & LOOKUP_RCU) {
+ if (read_seqcount_retry(&link->dentry->d_seq, seq))
+ return -ECHILD;
+ }
return pick_link(nd, link, inode, seq);
}
@@ -1743,11 +1748,11 @@
if (err < 0)
return err;
- inode = d_backing_inode(path.dentry);
seq = 0; /* we are already out of RCU mode */
err = -ENOENT;
if (d_is_negative(path.dentry))
goto out_path_put;
+ inode = d_backing_inode(path.dentry);
}
if (flags & WALK_PUT)
@@ -3192,12 +3197,12 @@
return error;
BUG_ON(nd->flags & LOOKUP_RCU);
- inode = d_backing_inode(path.dentry);
seq = 0; /* out of RCU mode, so the value doesn't matter */
if (unlikely(d_is_negative(path.dentry))) {
path_to_nameidata(&path, nd);
return -ENOENT;
}
+ inode = d_backing_inode(path.dentry);
finish_lookup:
if (nd->depth)
put_link(nd);
@@ -3206,11 +3211,6 @@
if (unlikely(error))
return error;
- if (unlikely(d_is_symlink(path.dentry)) && !(open_flag & O_PATH)) {
- path_to_nameidata(&path, nd);
- return -ELOOP;
- }
-
if ((nd->flags & LOOKUP_RCU) || nd->path.mnt != path.mnt) {
path_to_nameidata(&path, nd);
} else {
@@ -3229,6 +3229,10 @@
return error;
}
audit_inode(nd->name, nd->path.dentry, 0);
+ if (unlikely(d_is_symlink(nd->path.dentry)) && !(open_flag & O_PATH)) {
+ error = -ELOOP;
+ goto out;
+ }
error = -EISDIR;
if ((open_flag & O_CREAT) && d_is_dir(nd->path.dentry))
goto out;
@@ -3273,6 +3277,10 @@
goto exit_fput;
}
out:
+ if (unlikely(error > 0)) {
+ WARN_ON(1);
+ error = -EINVAL;
+ }
if (got_write)
mnt_drop_write(nd->path.mnt);
path_put(&save_parent);
diff --git a/fs/nfs/blocklayout/extent_tree.c b/fs/nfs/blocklayout/extent_tree.c
index c59a59c..35ab51c 100644
--- a/fs/nfs/blocklayout/extent_tree.c
+++ b/fs/nfs/blocklayout/extent_tree.c
@@ -476,6 +476,7 @@
for (i = 0; i < nr_pages; i++)
put_page(arg->layoutupdate_pages[i]);
+ vfree(arg->start_p);
kfree(arg->layoutupdate_pages);
} else {
put_page(arg->layoutupdate_page);
@@ -559,10 +560,15 @@
if (unlikely(arg->layoutupdate_pages != &arg->layoutupdate_page)) {
void *p = start_p, *end = p + arg->layoutupdate_len;
+ struct page *page = NULL;
int i = 0;
- for ( ; p < end; p += PAGE_SIZE)
- arg->layoutupdate_pages[i++] = vmalloc_to_page(p);
+ arg->start_p = start_p;
+ for ( ; p < end; p += PAGE_SIZE) {
+ page = vmalloc_to_page(p);
+ arg->layoutupdate_pages[i++] = page;
+ get_page(page);
+ }
}
dprintk("%s found %zu ranges\n", __func__, count);
diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
index bd25dc7..dff8346 100644
--- a/fs/nfs/nfs42proc.c
+++ b/fs/nfs/nfs42proc.c
@@ -16,29 +16,8 @@
#define NFSDBG_FACILITY NFSDBG_PROC
-static int nfs42_set_rw_stateid(nfs4_stateid *dst, struct file *file,
- fmode_t fmode)
-{
- struct nfs_open_context *open;
- struct nfs_lock_context *lock;
- int ret;
-
- open = get_nfs_open_context(nfs_file_open_context(file));
- lock = nfs_get_lock_context(open);
- if (IS_ERR(lock)) {
- put_nfs_open_context(open);
- return PTR_ERR(lock);
- }
-
- ret = nfs4_set_rw_stateid(dst, open, lock, fmode);
-
- nfs_put_lock_context(lock);
- put_nfs_open_context(open);
- return ret;
-}
-
static int _nfs42_proc_fallocate(struct rpc_message *msg, struct file *filep,
- loff_t offset, loff_t len)
+ struct nfs_lock_context *lock, loff_t offset, loff_t len)
{
struct inode *inode = file_inode(filep);
struct nfs_server *server = NFS_SERVER(inode);
@@ -56,7 +35,8 @@
msg->rpc_argp = &args;
msg->rpc_resp = &res;
- status = nfs42_set_rw_stateid(&args.falloc_stateid, filep, FMODE_WRITE);
+ status = nfs4_set_rw_stateid(&args.falloc_stateid, lock->open_context,
+ lock, FMODE_WRITE);
if (status)
return status;
@@ -78,15 +58,26 @@
{
struct nfs_server *server = NFS_SERVER(file_inode(filep));
struct nfs4_exception exception = { };
+ struct nfs_lock_context *lock;
int err;
+ lock = nfs_get_lock_context(nfs_file_open_context(filep));
+ if (IS_ERR(lock))
+ return PTR_ERR(lock);
+
+ exception.inode = file_inode(filep);
+ exception.state = lock->open_context->state;
+
do {
- err = _nfs42_proc_fallocate(msg, filep, offset, len);
- if (err == -ENOTSUPP)
- return -EOPNOTSUPP;
+ err = _nfs42_proc_fallocate(msg, filep, lock, offset, len);
+ if (err == -ENOTSUPP) {
+ err = -EOPNOTSUPP;
+ break;
+ }
err = nfs4_handle_exception(server, err, &exception);
} while (exception.retry);
+ nfs_put_lock_context(lock);
return err;
}
@@ -135,7 +126,8 @@
return err;
}
-static loff_t _nfs42_proc_llseek(struct file *filep, loff_t offset, int whence)
+static loff_t _nfs42_proc_llseek(struct file *filep,
+ struct nfs_lock_context *lock, loff_t offset, int whence)
{
struct inode *inode = file_inode(filep);
struct nfs42_seek_args args = {
@@ -156,7 +148,8 @@
if (!nfs_server_capable(inode, NFS_CAP_SEEK))
return -ENOTSUPP;
- status = nfs42_set_rw_stateid(&args.sa_stateid, filep, FMODE_READ);
+ status = nfs4_set_rw_stateid(&args.sa_stateid, lock->open_context,
+ lock, FMODE_READ);
if (status)
return status;
@@ -175,17 +168,28 @@
{
struct nfs_server *server = NFS_SERVER(file_inode(filep));
struct nfs4_exception exception = { };
+ struct nfs_lock_context *lock;
loff_t err;
+ lock = nfs_get_lock_context(nfs_file_open_context(filep));
+ if (IS_ERR(lock))
+ return PTR_ERR(lock);
+
+ exception.inode = file_inode(filep);
+ exception.state = lock->open_context->state;
+
do {
- err = _nfs42_proc_llseek(filep, offset, whence);
+ err = _nfs42_proc_llseek(filep, lock, offset, whence);
if (err >= 0)
break;
- if (err == -ENOTSUPP)
- return -EOPNOTSUPP;
+ if (err == -ENOTSUPP) {
+ err = -EOPNOTSUPP;
+ break;
+ }
err = nfs4_handle_exception(server, err, &exception);
} while (exception.retry);
+ nfs_put_lock_context(lock);
return err;
}
@@ -298,8 +302,9 @@
}
static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f,
- struct file *dst_f, loff_t src_offset,
- loff_t dst_offset, loff_t count)
+ struct file *dst_f, struct nfs_lock_context *src_lock,
+ struct nfs_lock_context *dst_lock, loff_t src_offset,
+ loff_t dst_offset, loff_t count)
{
struct inode *src_inode = file_inode(src_f);
struct inode *dst_inode = file_inode(dst_f);
@@ -320,11 +325,13 @@
msg->rpc_argp = &args;
msg->rpc_resp = &res;
- status = nfs42_set_rw_stateid(&args.src_stateid, src_f, FMODE_READ);
+ status = nfs4_set_rw_stateid(&args.src_stateid, src_lock->open_context,
+ src_lock, FMODE_READ);
if (status)
return status;
- status = nfs42_set_rw_stateid(&args.dst_stateid, dst_f, FMODE_WRITE);
+ status = nfs4_set_rw_stateid(&args.dst_stateid, dst_lock->open_context,
+ dst_lock, FMODE_WRITE);
if (status)
return status;
@@ -349,22 +356,48 @@
};
struct inode *inode = file_inode(src_f);
struct nfs_server *server = NFS_SERVER(file_inode(src_f));
- struct nfs4_exception exception = { };
- int err;
+ struct nfs_lock_context *src_lock;
+ struct nfs_lock_context *dst_lock;
+ struct nfs4_exception src_exception = { };
+ struct nfs4_exception dst_exception = { };
+ int err, err2;
if (!nfs_server_capable(inode, NFS_CAP_CLONE))
return -EOPNOTSUPP;
+ src_lock = nfs_get_lock_context(nfs_file_open_context(src_f));
+ if (IS_ERR(src_lock))
+ return PTR_ERR(src_lock);
+
+ src_exception.inode = file_inode(src_f);
+ src_exception.state = src_lock->open_context->state;
+
+ dst_lock = nfs_get_lock_context(nfs_file_open_context(dst_f));
+ if (IS_ERR(dst_lock)) {
+ err = PTR_ERR(dst_lock);
+ goto out_put_src_lock;
+ }
+
+ dst_exception.inode = file_inode(dst_f);
+ dst_exception.state = dst_lock->open_context->state;
+
do {
- err = _nfs42_proc_clone(&msg, src_f, dst_f, src_offset,
- dst_offset, count);
+ err = _nfs42_proc_clone(&msg, src_f, dst_f, src_lock, dst_lock,
+ src_offset, dst_offset, count);
if (err == -ENOTSUPP || err == -EOPNOTSUPP) {
NFS_SERVER(inode)->caps &= ~NFS_CAP_CLONE;
- return -EOPNOTSUPP;
+ err = -EOPNOTSUPP;
+ break;
}
- err = nfs4_handle_exception(server, err, &exception);
- } while (exception.retry);
+ err2 = nfs4_handle_exception(server, err, &src_exception);
+ err = nfs4_handle_exception(server, err, &dst_exception);
+ if (!err)
+ err = err2;
+ } while (src_exception.retry || dst_exception.retry);
+
+ nfs_put_lock_context(dst_lock);
+out_put_src_lock:
+ nfs_put_lock_context(src_lock);
return err;
-
}
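All three call sites in this file now follow the same shape; a condensed sketch, where _nfs42_op stands in for the per-operation worker (hypothetical name):

	lock = nfs_get_lock_context(nfs_file_open_context(filep));
	if (IS_ERR(lock))
		return PTR_ERR(lock);

	exception.inode = file_inode(filep);
	exception.state = lock->open_context->state;

	do {
		err = _nfs42_op(filep, lock, offset, len);
		if (err == -ENOTSUPP) {
			err = -EOPNOTSUPP;
			break;
		}
		err = nfs4_handle_exception(server, err, &exception);
	} while (exception.retry);

	nfs_put_lock_context(lock);
	return err;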
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 4bfc33a..1488159 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -2466,9 +2466,9 @@
dentry = d_add_unique(dentry, igrab(state->inode));
if (dentry == NULL) {
dentry = opendata->dentry;
- } else if (dentry != ctx->dentry) {
+ } else {
dput(ctx->dentry);
- ctx->dentry = dget(dentry);
+ ctx->dentry = dentry;
}
nfs_set_verifier(dentry,
nfs_save_change_attribute(d_inode(opendata->dir)));
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 482b6e9..2fa483e 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -252,6 +252,27 @@
}
}
+/*
+ * Mark a pnfs_layout_hdr and all associated layout segments as invalid
+ *
+ * In order to continue using the pnfs_layout_hdr, a full recovery
+ * is required.
+ * Note that caller must hold inode->i_lock.
+ */
+static int
+pnfs_mark_layout_stateid_invalid(struct pnfs_layout_hdr *lo,
+ struct list_head *lseg_list)
+{
+ struct pnfs_layout_range range = {
+ .iomode = IOMODE_ANY,
+ .offset = 0,
+ .length = NFS4_MAX_UINT64,
+ };
+
+ set_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);
+ return pnfs_mark_matching_lsegs_invalid(lo, lseg_list, &range);
+}
+
static int
pnfs_iomode_to_fail_bit(u32 iomode)
{
@@ -554,9 +575,8 @@
spin_lock(&nfsi->vfs_inode.i_lock);
lo = nfsi->layout;
if (lo) {
- lo->plh_block_lgets++; /* permanently block new LAYOUTGETs */
- pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
pnfs_get_layout_hdr(lo);
+ pnfs_mark_layout_stateid_invalid(lo, &tmp_list);
pnfs_layout_clear_fail_bit(lo, NFS_LAYOUT_RO_FAILED);
pnfs_layout_clear_fail_bit(lo, NFS_LAYOUT_RW_FAILED);
spin_unlock(&nfsi->vfs_inode.i_lock);
@@ -617,11 +637,6 @@
{
struct pnfs_layout_hdr *lo;
struct inode *inode;
- struct pnfs_layout_range range = {
- .iomode = IOMODE_ANY,
- .offset = 0,
- .length = NFS4_MAX_UINT64,
- };
LIST_HEAD(lseg_list);
int ret = 0;
@@ -636,11 +651,11 @@
spin_lock(&inode->i_lock);
list_del_init(&lo->plh_bulk_destroy);
- lo->plh_block_lgets++; /* permanently block new LAYOUTGETs */
- if (is_bulk_recall)
- set_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags);
- if (pnfs_mark_matching_lsegs_invalid(lo, &lseg_list, &range))
+ if (pnfs_mark_layout_stateid_invalid(lo, &lseg_list)) {
+ if (is_bulk_recall)
+ set_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags);
ret = -EAGAIN;
+ }
spin_unlock(&inode->i_lock);
pnfs_free_lseg_list(&lseg_list);
/* Free all lsegs that are attached to commit buckets */
@@ -1738,8 +1753,19 @@
if (lo->plh_return_iomode != 0)
iomode = IOMODE_ANY;
lo->plh_return_iomode = iomode;
+ set_bit(NFS_LAYOUT_RETURN_REQUESTED, &lo->plh_flags);
}
+/**
+ * pnfs_mark_matching_lsegs_return - Free or return matching layout segments
+ * @lo: pointer to layout header
+ * @tmp_list: list header to be used with pnfs_free_lseg_list()
+ * @return_range: describe layout segment ranges to be returned
+ *
+ * This function is mainly intended for use by layoutrecall. It attempts
+ * to free the layout segment immediately, or else to mark it for return
+ * as soon as its reference count drops to zero.
+ */
int
pnfs_mark_matching_lsegs_return(struct pnfs_layout_hdr *lo,
struct list_head *tmp_list,
@@ -1762,12 +1788,11 @@
lseg, lseg->pls_range.iomode,
lseg->pls_range.offset,
lseg->pls_range.length);
+ if (mark_lseg_invalid(lseg, tmp_list))
+ continue;
+ remaining++;
set_bit(NFS_LSEG_LAYOUTRETURN, &lseg->pls_flags);
pnfs_set_plh_return_iomode(lo, return_range->iomode);
- if (!mark_lseg_invalid(lseg, tmp_list))
- remaining++;
- set_bit(NFS_LAYOUT_RETURN_REQUESTED,
- &lo->plh_flags);
}
return remaining;
}
diff --git a/fs/notify/mark.c b/fs/notify/mark.c
index cfcbf11..7115c5d 100644
--- a/fs/notify/mark.c
+++ b/fs/notify/mark.c
@@ -91,7 +91,14 @@
#include <linux/fsnotify_backend.h>
#include "fsnotify.h"
+#define FSNOTIFY_REAPER_DELAY (1) /* 1 jiffy */
+
struct srcu_struct fsnotify_mark_srcu;
+static DEFINE_SPINLOCK(destroy_lock);
+static LIST_HEAD(destroy_list);
+
+static void fsnotify_mark_destroy(struct work_struct *work);
+static DECLARE_DELAYED_WORK(reaper_work, fsnotify_mark_destroy);
void fsnotify_get_mark(struct fsnotify_mark *mark)
{
@@ -165,19 +172,10 @@
atomic_dec(&group->num_marks);
}
-static void
-fsnotify_mark_free_rcu(struct rcu_head *rcu)
-{
- struct fsnotify_mark *mark;
-
- mark = container_of(rcu, struct fsnotify_mark, g_rcu);
- fsnotify_put_mark(mark);
-}
-
/*
- * Free fsnotify mark. The freeing is actually happening from a call_srcu
- * callback. Caller must have a reference to the mark or be protected by
- * fsnotify_mark_srcu.
+ * Free fsnotify mark. The freeing is actually happening from a workqueue which
+ * first waits for srcu period end. Caller must have a reference to the mark
+ * or be protected by fsnotify_mark_srcu.
*/
void fsnotify_free_mark(struct fsnotify_mark *mark)
{
@@ -192,7 +190,11 @@
mark->flags &= ~FSNOTIFY_MARK_FLAG_ALIVE;
spin_unlock(&mark->lock);
- call_srcu(&fsnotify_mark_srcu, &mark->g_rcu, fsnotify_mark_free_rcu);
+ spin_lock(&destroy_lock);
+ list_add(&mark->g_list, &destroy_list);
+ spin_unlock(&destroy_lock);
+ queue_delayed_work(system_unbound_wq, &reaper_work,
+ FSNOTIFY_REAPER_DELAY);
/*
* Some groups like to know that marks are being freed. This is a
@@ -388,7 +390,12 @@
spin_unlock(&mark->lock);
- call_srcu(&fsnotify_mark_srcu, &mark->g_rcu, fsnotify_mark_free_rcu);
+ spin_lock(&destroy_lock);
+ list_add(&mark->g_list, &destroy_list);
+ spin_unlock(&destroy_lock);
+ queue_delayed_work(system_unbound_wq, &reaper_work,
+ FSNOTIFY_REAPER_DELAY);
+
return ret;
}
@@ -491,3 +498,21 @@
atomic_set(&mark->refcnt, 1);
mark->free_mark = free_mark;
}
+
+static void fsnotify_mark_destroy(struct work_struct *work)
+{
+ struct fsnotify_mark *mark, *next;
+ struct list_head private_destroy_list;
+
+ spin_lock(&destroy_lock);
+ /* exchange the list head */
+ list_replace_init(&destroy_list, &private_destroy_list);
+ spin_unlock(&destroy_lock);
+
+ synchronize_srcu(&fsnotify_mark_srcu);
+
+ list_for_each_entry_safe(mark, next, &private_destroy_list, g_list) {
+ list_del_init(&mark->g_list);
+ fsnotify_put_mark(mark);
+ }
+}
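The 1-jiffy delayed work makes this cheaper than the old per-mark call_srcu(): frees queued close together are batched behind a single grace period. The core of the pattern in isolation (generic sketch):

	LIST_HEAD(private_destroy_list);

	spin_lock(&destroy_lock);
	list_replace_init(&destroy_list, &private_destroy_list);	/* steal the list */
	spin_unlock(&destroy_lock);

	synchronize_srcu(&fsnotify_mark_srcu);	/* one wait covers the whole batch */

	/* everything on private_destroy_list is now unreachable; free it */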
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 794fd15..cda0361 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -956,6 +956,7 @@
tmp_ret = ocfs2_del_inode_from_orphan(osb, inode, di_bh,
update_isize, end);
if (tmp_ret < 0) {
+ ocfs2_inode_unlock(inode, 1);
ret = tmp_ret;
mlog_errno(ret);
brelse(di_bh);
diff --git a/fs/pnode.c b/fs/pnode.c
index 6367e1e..c524fdd 100644
--- a/fs/pnode.c
+++ b/fs/pnode.c
@@ -202,6 +202,11 @@
static struct mountpoint *mp;
static struct hlist_head *list;
+static inline bool peers(struct mount *m1, struct mount *m2)
+{
+ return m1->mnt_group_id == m2->mnt_group_id && m1->mnt_group_id;
+}
+
static int propagate_one(struct mount *m)
{
struct mount *child;
@@ -212,7 +217,7 @@
/* skip if mountpoint isn't covered by it */
if (!is_subdir(mp->m_dentry, m->mnt.mnt_root))
return 0;
- if (m->mnt_group_id == last_dest->mnt_group_id) {
+ if (peers(m, last_dest)) {
type = CL_MAKE_SHARED;
} else {
struct mount *n, *p;
@@ -223,7 +228,7 @@
last_source = last_source->mnt_master;
last_dest = last_source->mnt_parent;
}
- if (n->mnt_group_id != last_dest->mnt_group_id) {
+ if (!peers(n, last_dest)) {
last_source = last_source->mnt_master;
last_dest = last_source->mnt_parent;
}
diff --git a/fs/read_write.c b/fs/read_write.c
index 324ec27..dadf24e 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -17,6 +17,7 @@
#include <linux/splice.h>
#include <linux/compat.h>
#include <linux/mount.h>
+#include <linux/fs.h>
#include "internal.h"
#include <asm/uaccess.h>
@@ -183,7 +184,7 @@
switch (whence) {
case SEEK_SET: case SEEK_CUR:
return generic_file_llseek_size(file, offset, whence,
- ~0ULL, 0);
+ OFFSET_MAX, 0);
default:
return -EINVAL;
}
@@ -1532,10 +1533,12 @@
if (!(file_in->f_mode & FMODE_READ) ||
!(file_out->f_mode & FMODE_WRITE) ||
- (file_out->f_flags & O_APPEND) ||
- !file_in->f_op->clone_file_range)
+ (file_out->f_flags & O_APPEND))
return -EBADF;
+ if (!file_in->f_op->clone_file_range)
+ return -EOPNOTSUPP;
+
ret = clone_verify_area(file_in, pos_in, len, false);
if (ret)
return ret;
diff --git a/fs/xattr.c b/fs/xattr.c
index 07d0e47..4861322 100644
--- a/fs/xattr.c
+++ b/fs/xattr.c
@@ -940,7 +940,7 @@
bool trusted = capable(CAP_SYS_ADMIN);
struct simple_xattr *xattr;
ssize_t remaining_size = size;
- int err;
+ int err = 0;
#ifdef CONFIG_FS_POSIX_ACL
if (inode->i_acl) {
@@ -965,11 +965,11 @@
err = xattr_list_one(&buffer, &remaining_size, xattr->name);
if (err)
- return err;
+ break;
}
spin_unlock(&xattrs->lock);
- return size - remaining_size;
+ return err ? err : size - remaining_size;
}
/*
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 379c089..a9ebabfe 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -55,7 +55,7 @@
} while ((bh = bh->b_this_page) != head);
}
-STATIC struct block_device *
+struct block_device *
xfs_find_bdev_for_inode(
struct inode *inode)
{
@@ -1208,6 +1208,10 @@
struct writeback_control *wbc)
{
xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED);
+ if (dax_mapping(mapping))
+ return dax_writeback_mapping_range(mapping,
+ xfs_find_bdev_for_inode(mapping->host), wbc);
+
return generic_writepages(mapping, wbc);
}
diff --git a/fs/xfs/xfs_aops.h b/fs/xfs/xfs_aops.h
index f6ffc9a..a4343c63 100644
--- a/fs/xfs/xfs_aops.h
+++ b/fs/xfs/xfs_aops.h
@@ -62,5 +62,6 @@
struct buffer_head *map_bh, int create);
extern void xfs_count_page_state(struct page *, int *, int *);
+extern struct block_device *xfs_find_bdev_for_inode(struct inode *);
#endif /* __XFS_AOPS_H__ */
diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index 45ec9e4..6c87601 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -75,7 +75,8 @@
ssize_t size = XFS_FSB_TO_B(mp, count_fsb);
if (IS_DAX(VFS_I(ip)))
- return dax_clear_blocks(VFS_I(ip), block, size);
+ return dax_clear_sectors(xfs_find_bdev_for_inode(VFS_I(ip)),
+ sector, size);
/*
* let the block layer decide on the fastest method of
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
index da37beb..594f7e6 100644
--- a/fs/xfs/xfs_log_recover.c
+++ b/fs/xfs/xfs_log_recover.c
@@ -4491,7 +4491,7 @@
* know precisely what failed.
*/
if (pass == XLOG_RECOVER_CRCPASS) {
- if (rhead->h_crc && crc != le32_to_cpu(rhead->h_crc))
+ if (rhead->h_crc && crc != rhead->h_crc)
return -EFSBADCRC;
return 0;
}
@@ -4502,7 +4502,7 @@
* zero CRC check prevents warnings from being emitted when upgrading
* the kernel from one that does not add CRCs by default.
*/
- if (crc != le32_to_cpu(rhead->h_crc)) {
+ if (crc != rhead->h_crc) {
if (rhead->h_crc || xfs_sb_version_hascrc(&log->l_mp->m_sb)) {
xfs_alert(log->l_mp,
"log record CRC mismatch: found 0x%x, expected 0x%x.",
diff --git a/include/asm-generic/cputime_nsecs.h b/include/asm-generic/cputime_nsecs.h
index 0419485..0f1c6f3 100644
--- a/include/asm-generic/cputime_nsecs.h
+++ b/include/asm-generic/cputime_nsecs.h
@@ -75,7 +75,7 @@
*/
static inline cputime_t timespec_to_cputime(const struct timespec *val)
{
- u64 ret = val->tv_sec * NSEC_PER_SEC + val->tv_nsec;
+ u64 ret = (u64)val->tv_sec * NSEC_PER_SEC + val->tv_nsec;
return (__force cputime_t) ret;
}
static inline void cputime_to_timespec(const cputime_t ct, struct timespec *val)
@@ -91,7 +91,8 @@
*/
static inline cputime_t timeval_to_cputime(const struct timeval *val)
{
- u64 ret = val->tv_sec * NSEC_PER_SEC + val->tv_usec * NSEC_PER_USEC;
+ u64 ret = (u64)val->tv_sec * NSEC_PER_SEC +
+ val->tv_usec * NSEC_PER_USEC;
return (__force cputime_t) ret;
}
static inline void cputime_to_timeval(const cputime_t ct, struct timeval *val)
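The cast matters on 32-bit targets, where tv_sec and NSEC_PER_SEC are both 32-bit longs and the product is computed in 32 bits before any widening; a worked example:

	long sec = 5;
	u64 wrong = sec * NSEC_PER_SEC;		/* 32-bit multiply wraps:
						   5000000000 mod 2^32 = 705032704 */
	u64 right = (u64)sec * NSEC_PER_SEC;	/* 64-bit multiply: 5000000000 */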
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 0b3c0d3..c370b26 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -239,6 +239,14 @@
pmd_t *pmdp);
#endif
+#ifndef __HAVE_ARCH_PMDP_HUGE_SPLIT_PREPARE
+static inline void pmdp_huge_split_prepare(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp)
+{
+
+}
+#endif
+
#ifndef __HAVE_ARCH_PTE_SAME
static inline int pte_same(pte_t pte_a, pte_t pte_b)
{
diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
index c65a212..c5b4b81 100644
--- a/include/drm/drm_crtc.h
+++ b/include/drm/drm_crtc.h
@@ -1166,6 +1166,7 @@
struct drm_mode_object base;
char *name;
+ int connector_id;
int connector_type;
int connector_type_id;
bool interlace_allowed;
@@ -2047,6 +2048,7 @@
struct list_head fb_list;
int num_connector;
+ struct ida connector_ida;
struct list_head connector_list;
int num_encoder;
struct list_head encoder_list;
@@ -2200,7 +2202,11 @@
void drm_connector_unregister(struct drm_connector *connector);
extern void drm_connector_cleanup(struct drm_connector *connector);
-extern unsigned int drm_connector_index(struct drm_connector *connector);
+static inline unsigned drm_connector_index(struct drm_connector *connector)
+{
+ return connector->connector_id;
+}
+
/* helper to unplug all connectors from sysfs for device */
extern void drm_connector_unplug_all(struct drm_device *dev);
diff --git a/include/dt-bindings/clock/tegra210-car.h b/include/dt-bindings/clock/tegra210-car.h
index 6f45aea..0a05b0d 100644
--- a/include/dt-bindings/clock/tegra210-car.h
+++ b/include/dt-bindings/clock/tegra210-car.h
@@ -126,7 +126,7 @@
/* 104 */
/* 105 */
#define TEGRA210_CLK_D_AUDIO 106
-/* 107 ( affects abp -> ape) */
+#define TEGRA210_CLK_APB2APE 107
/* 108 */
/* 109 */
/* 110 */
diff --git a/include/linux/amba/bus.h b/include/linux/amba/bus.h
index 9006c4e..3d8dcdd 100644
--- a/include/linux/amba/bus.h
+++ b/include/linux/amba/bus.h
@@ -163,4 +163,13 @@
#define module_amba_driver(__amba_drv) \
module_driver(__amba_drv, amba_driver_register, amba_driver_unregister)
+/*
+ * builtin_amba_driver() - Helper macro for drivers that don't do anything
+ * special in driver initcall. This eliminates a lot of boilerplate. Each
+ * driver may only use this macro once, and calling it replaces the instance
+ * of device_initcall().
+ */
+#define builtin_amba_driver(__amba_drv) \
+ builtin_driver(__amba_drv, amba_driver_register)
+
#endif
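For drivers that cannot be modules, the new macro collapses the usual registration boilerplate. A hedged usage sketch (the driver name, probe function and id table are placeholders):

static struct amba_driver example_amba_driver = {
	.drv = {
		.name	= "example-amba",
	},
	.probe		= example_probe,	/* assumed defined above */
	.id_table	= example_ids,		/* assumed amba_id table */
};
/* expands to a device_initcall() registering the driver; no init fn needed */
builtin_amba_driver(example_amba_driver);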
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 29189ae..4571ef1 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -682,9 +682,12 @@
/*
* q->prep_rq_fn return values
*/
-#define BLKPREP_OK 0 /* serve it */
-#define BLKPREP_KILL 1 /* fatal error, kill */
-#define BLKPREP_DEFER 2 /* leave on queue */
+enum {
+ BLKPREP_OK, /* serve it */
+ BLKPREP_KILL, /* fatal error, kill, return -EIO */
+ BLKPREP_DEFER, /* leave on queue */
+ BLKPREP_INVALID, /* invalid command, kill, return -EREMOTEIO */
+};
extern unsigned long blk_max_low_pfn, blk_max_pfn;
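Turning the BLKPREP codes into an enum adds BLKPREP_INVALID, so a prep function can distinguish a permanently invalid command (-EREMOTEIO) from the generic fatal case (-EIO). A sketch of a q->prep_rq_fn using it; the two helper predicates are invented for illustration:

static int example_prep_rq(struct request_queue *q, struct request *rq)
{
	if (!example_cmd_supported(rq))		/* hypothetical check */
		return BLKPREP_INVALID;		/* completed with -EREMOTEIO */
	if (!example_resources_ready(q))	/* hypothetical check */
		return BLKPREP_DEFER;		/* left on queue, retried */
	return BLKPREP_OK;			/* serve it */
}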
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 7f540f7..789471d 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -127,6 +127,12 @@
*/
u64 serial_nr;
+ /*
+ * Incremented by online self and children. Used to guarantee that
+ * parents are not offlined before their children.
+ */
+ atomic_t online_cnt;
+
/* percpu_ref killing and RCU release */
struct rcu_head rcu_head;
struct work_struct destroy_work;
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 00b042c..48f5aab 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -144,7 +144,7 @@
*/
#define if(cond, ...) __trace_if( (cond , ## __VA_ARGS__) )
#define __trace_if(cond) \
- if (__builtin_constant_p((cond)) ? !!(cond) : \
+ if (__builtin_constant_p(!!(cond)) ? !!(cond) : \
({ \
int ______r; \
static struct ftrace_branch_data \
diff --git a/include/linux/coresight-pmu.h b/include/linux/coresight-pmu.h
new file mode 100644
index 0000000..7d41026
--- /dev/null
+++ b/include/linux/coresight-pmu.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright(C) 2015 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _LINUX_CORESIGHT_PMU_H
+#define _LINUX_CORESIGHT_PMU_H
+
+#define CORESIGHT_ETM_PMU_NAME "cs_etm"
+#define CORESIGHT_ETM_PMU_SEED 0x10
+
+/* ETMv3.5/PTM's ETMCR config bit */
+#define ETM_OPT_CYCACC 12
+#define ETM_OPT_TS 28
+
+static inline int coresight_get_trace_id(int cpu)
+{
+ /*
+ * A trace ID of value 0 is invalid, so let's start at some
+ * random value that fits in 7 bits and go from there. Since
+ * the common convention is to have data trace IDs be I(N) + 1,
+ * set instruction trace IDs as a function of the CPU number.
+ */
+ return (CORESIGHT_ETM_PMU_SEED + (cpu * 2));
+}
+
+#endif
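Working the seed through the formula: instruction trace IDs land on even values starting at 0x10, leaving the odd I(N) + 1 slot free for each CPU's data trace ID.

	coresight_get_trace_id(0);	/* 0x10; 0x11 stays free for cpu 0's data trace */
	coresight_get_trace_id(1);	/* 0x12; 0x13 stays free for cpu 1's data trace */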
diff --git a/include/linux/coresight.h b/include/linux/coresight.h
index a7cabfa2..385d62e 100644
--- a/include/linux/coresight.h
+++ b/include/linux/coresight.h
@@ -14,6 +14,7 @@
#define _LINUX_CORESIGHT_H
#include <linux/device.h>
+#include <linux/perf_event.h>
#include <linux/sched.h>
/* Peripheral id registers (0xFD0-0xFEC) */
@@ -152,7 +153,6 @@
by @coresight_ops.
* @dev: The device entity associated to this component.
* @refcnt: keep track of what is in use.
- * @path_link: link of current component into the path being enabled.
* @orphan: true if the component has connections that haven't been linked.
* @enable: 'true' if component is currently part of an active path.
* @activated: 'true' only if a _sink_ has been activated. A sink can be
@@ -168,7 +168,6 @@
const struct coresight_ops *ops;
struct device dev;
atomic_t *refcnt;
- struct list_head path_link;
bool orphan;
bool enable; /* true only if configured as part of a path */
bool activated; /* true only if a sink is part of a path */
@@ -183,12 +182,29 @@
/**
* struct coresight_ops_sink - basic operations for a sink
* Operations available for sinks
- * @enable: enables the sink.
- * @disable: disables the sink.
+ * @enable: enables the sink.
+ * @disable: disables the sink.
+ * @alloc_buffer: initialises perf's ring buffer for trace collection.
+ * @free_buffer:	releases memory allocated in @alloc_buffer.
+ * @set_buffer:	initialises the buffer mechanics before a trace session.
+ * @reset_buffer:	finalises the buffer mechanics after a trace session.
+ * @update_buffer: update buffer pointers after a trace session.
*/
struct coresight_ops_sink {
- int (*enable)(struct coresight_device *csdev);
+ int (*enable)(struct coresight_device *csdev, u32 mode);
void (*disable)(struct coresight_device *csdev);
+ void *(*alloc_buffer)(struct coresight_device *csdev, int cpu,
+ void **pages, int nr_pages, bool overwrite);
+ void (*free_buffer)(void *config);
+ int (*set_buffer)(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config);
+ unsigned long (*reset_buffer)(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config, bool *lost);
+ void (*update_buffer)(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config);
};
/**
@@ -205,14 +221,18 @@
/**
* struct coresight_ops_source - basic operations for a source
* Operations available for sources.
+ * @cpu_id: returns the value of the CPU number this component
+ * is associated with.
* @trace_id: returns the value of the component's trace ID as known
- to the HW.
+ * to the HW.
* @enable: enables tracing for a source.
* @disable: disables tracing for a source.
*/
struct coresight_ops_source {
+ int (*cpu_id)(struct coresight_device *csdev);
int (*trace_id)(struct coresight_device *csdev);
- int (*enable)(struct coresight_device *csdev);
+ int (*enable)(struct coresight_device *csdev,
+ struct perf_event_attr *attr, u32 mode);
void (*disable)(struct coresight_device *csdev);
};
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 85a868c..fea160e 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -137,6 +137,8 @@
task_unlock(current);
}
+extern void cpuset_post_attach_flush(void);
+
#else /* !CONFIG_CPUSETS */
static inline bool cpusets_enabled(void) { return false; }
@@ -243,6 +245,10 @@
return false;
}
+static inline void cpuset_post_attach_flush(void)
+{
+}
+
#endif /* !CONFIG_CPUSETS */
#endif /* _LINUX_CPUSET_H */
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 818e450..636dd59 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -7,7 +7,7 @@
ssize_t dax_do_io(struct kiocb *, struct inode *, struct iov_iter *, loff_t,
get_block_t, dio_iodone_t, int flags);
-int dax_clear_blocks(struct inode *, sector_t block, long size);
+int dax_clear_sectors(struct block_device *bdev, sector_t _sector, long _size);
int dax_zero_page_range(struct inode *, loff_t from, unsigned len, get_block_t);
int dax_truncate_page(struct inode *, loff_t from, get_block_t);
int dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t,
@@ -52,6 +52,8 @@
{
return mapping->host && IS_DAX(mapping->host);
}
-int dax_writeback_mapping_range(struct address_space *mapping, loff_t start,
- loff_t end);
+
+struct writeback_control;
+int dax_writeback_mapping_range(struct address_space *mapping,
+ struct block_device *bdev, struct writeback_control *wbc);
#endif
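The zeroing helper now takes the block device and sector directly instead of deriving them from an inode, as the xfs hunk at the top of this diff shows. Before/after calling convention, with the surrounding variable names illustrative:

	/* before: the helper resolved the bdev from the inode internally */
	error = dax_clear_blocks(inode, block, size);

	/* after: the caller resolves bdev and sector itself
	 * (xfs uses xfs_find_bdev_for_inode()) */
	error = dax_clear_sectors(bdev, sector, size);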
diff --git a/include/linux/devpts_fs.h b/include/linux/devpts_fs.h
index 251a209..e0ee0b3 100644
--- a/include/linux/devpts_fs.h
+++ b/include/linux/devpts_fs.h
@@ -19,6 +19,8 @@
int devpts_new_index(struct inode *ptmx_inode);
void devpts_kill_index(struct inode *ptmx_inode, int idx);
+void devpts_add_ref(struct inode *ptmx_inode);
+void devpts_del_ref(struct inode *ptmx_inode);
/* mknod in devpts */
struct inode *devpts_pty_new(struct inode *ptmx_inode, dev_t device, int index,
void *priv);
@@ -32,6 +34,8 @@
/* Dummy stubs in the no-pty case */
static inline int devpts_new_index(struct inode *ptmx_inode) { return -EINVAL; }
static inline void devpts_kill_index(struct inode *ptmx_inode, int idx) { }
+static inline void devpts_add_ref(struct inode *ptmx_inode) { }
+static inline void devpts_del_ref(struct inode *ptmx_inode) { }
static inline struct inode *devpts_pty_new(struct inode *ptmx_inode,
dev_t device, int index, void *priv)
{
diff --git a/include/linux/eeprom_93xx46.h b/include/linux/eeprom_93xx46.h
index 0679181..885f587 100644
--- a/include/linux/eeprom_93xx46.h
+++ b/include/linux/eeprom_93xx46.h
@@ -3,16 +3,25 @@
* platform description for 93xx46 EEPROMs.
*/
+struct gpio_desc;
+
struct eeprom_93xx46_platform_data {
unsigned char flags;
#define EE_ADDR8 0x01 /* 8 bit addr. cfg */
#define EE_ADDR16 0x02 /* 16 bit addr. cfg */
#define EE_READONLY 0x08 /* forbid writing */
+ unsigned int quirks;
+/* Single word read transfers only; no sequential read. */
+#define EEPROM_93XX46_QUIRK_SINGLE_WORD_READ (1 << 0)
+/* Instructions such as EWEN are (addrlen + 2) in length. */
+#define EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH (1 << 1)
+
/*
* optional hooks to control additional logic
* before and after spi transfer.
*/
void (*prepare)(void *);
void (*finish)(void *);
+ struct gpio_desc *select;
};
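A hedged sketch of board code filling in the extended platform data; the part described (8-bit addressing, read-only, single-word reads only) is invented for illustration:

static struct eeprom_93xx46_platform_data example_eeprom_pdata = {
	.flags	= EE_ADDR8 | EE_READONLY,
	.quirks	= EEPROM_93XX46_QUIRK_SINGLE_WORD_READ,
	/* .select would be set to a gpio_desc where a companion
	 * select line must be driven around transfers */
};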
diff --git a/include/linux/efi.h b/include/linux/efi.h
index 569b5a8..47be3ad 100644
--- a/include/linux/efi.h
+++ b/include/linux/efi.h
@@ -1199,7 +1199,10 @@
struct efivar_entry *efivar_entry_find(efi_char16_t *name, efi_guid_t guid,
struct list_head *head, bool remove);
-bool efivar_validate(efi_char16_t *var_name, u8 *data, unsigned long len);
+bool efivar_validate(efi_guid_t vendor, efi_char16_t *var_name, u8 *data,
+ unsigned long data_size);
+bool efivar_variable_is_removable(efi_guid_t vendor, const char *name,
+ size_t len);
extern struct work_struct efivar_work;
void efivar_run_worker(void);
diff --git a/include/linux/exportfs.h b/include/linux/exportfs.h
index fa05e04..d841450 100644
--- a/include/linux/exportfs.h
+++ b/include/linux/exportfs.h
@@ -97,6 +97,12 @@
FILEID_FAT_WITH_PARENT = 0x72,
/*
+ * 128 bit child FID (struct lu_fid)
+ * 128 bit parent FID (struct lu_fid)
+ */
+ FILEID_LUSTRE = 0x97,
+
+ /*
* Filesystems must not use 0xff file ID.
*/
FILEID_INVALID = 0xff,
diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
index 6b7e89f..533c440 100644
--- a/include/linux/fsnotify_backend.h
+++ b/include/linux/fsnotify_backend.h
@@ -220,10 +220,7 @@
/* List of marks by group->i_fsnotify_marks. Also reused for queueing
* mark into destroy_list when it's waiting for the end of SRCU period
* before it can be freed. [group->mark_mutex] */
- union {
- struct list_head g_list;
- struct rcu_head g_rcu;
- };
+ struct list_head g_list;
/* Protects inode / mnt pointers, flags, masks */
spinlock_t lock;
/* List of marks for inode / vfsmount [obj_lock] */
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 81de712..c2b340e 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -603,6 +603,7 @@
extern int skip_trace(unsigned long ip);
extern void ftrace_module_init(struct module *mod);
+extern void ftrace_module_enable(struct module *mod);
extern void ftrace_release_mod(struct module *mod);
extern void ftrace_disable_daemon(void);
@@ -612,8 +613,9 @@
static inline int ftrace_force_update(void) { return 0; }
static inline void ftrace_disable_daemon(void) { }
static inline void ftrace_enable_daemon(void) { }
-static inline void ftrace_release_mod(struct module *mod) {}
-static inline void ftrace_module_init(struct module *mod) {}
+static inline void ftrace_module_init(struct module *mod) { }
+static inline void ftrace_module_enable(struct module *mod) { }
+static inline void ftrace_release_mod(struct module *mod) { }
static inline __init int register_ftrace_command(struct ftrace_func_command *cmd)
{
return -EINVAL;
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 753dbad..d23dab0 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -235,6 +235,7 @@
#define VMBUS_CHANNEL_LOOPBACK_OFFER 0x100
#define VMBUS_CHANNEL_PARENT_OFFER 0x200
#define VMBUS_CHANNEL_REQUEST_MONITORED_NOTIFICATION 0x400
+#define VMBUS_CHANNEL_TLNPI_PROVIDER_OFFER 0x2000
struct vmpacket_descriptor {
u16 type;
@@ -391,6 +392,10 @@
CHANNELMSG_VERSION_RESPONSE = 15,
CHANNELMSG_UNLOAD = 16,
CHANNELMSG_UNLOAD_RESPONSE = 17,
+ CHANNELMSG_18 = 18,
+ CHANNELMSG_19 = 19,
+ CHANNELMSG_20 = 20,
+ CHANNELMSG_TL_CONNECT_REQUEST = 21,
CHANNELMSG_COUNT
};
@@ -561,6 +566,13 @@
u64 monitor_page2;
} __packed;
+/* Hyper-V socket: guest's connect()-ing to host */
+struct vmbus_channel_tl_connect_request {
+ struct vmbus_channel_message_header header;
+ uuid_le guest_endpoint_id;
+ uuid_le host_service_id;
+} __packed;
+
struct vmbus_channel_version_response {
struct vmbus_channel_message_header header;
u8 version_supported;
@@ -633,6 +645,32 @@
HV_SIGNAL_POLICY_EXPLICIT,
};
+enum vmbus_device_type {
+ HV_IDE = 0,
+ HV_SCSI,
+ HV_FC,
+ HV_NIC,
+ HV_ND,
+ HV_PCIE,
+ HV_FB,
+ HV_KBD,
+ HV_MOUSE,
+ HV_KVP,
+ HV_TS,
+ HV_HB,
+ HV_SHUTDOWN,
+ HV_FCOPY,
+ HV_BACKUP,
+ HV_DM,
+ HV_UNKOWN,
+};
+
+struct vmbus_device {
+ u16 dev_type;
+ uuid_le guid;
+ bool perf_device;
+};
+
struct vmbus_channel {
/* Unique channel id */
int id;
@@ -728,6 +766,12 @@
void (*sc_creation_callback)(struct vmbus_channel *new_sc);
/*
+ * Channel rescind callback. Some channels (the hvsock ones), need to
+ * register a callback which is invoked in vmbus_onoffer_rescind().
+ */
+ void (*chn_rescind_callback)(struct vmbus_channel *channel);
+
+ /*
* The spinlock to protect the structure. It is being used to protect
* test-and-set access to various attributes of the structure as well
* as all sc_list operations.
@@ -767,8 +811,30 @@
* signaling control.
*/
enum hv_signal_policy signal_policy;
+ /*
+ * On the channel send side, many of the VMBUS
+ * device drivers explicitly serialize access to the
+ * outgoing ring buffer. Give more control to the
+ * VMBUS device drivers in terms of how to serialize
+ * access to the outgoing ring buffer.
+ * The default behavior will be to acquire the
+ * ring lock to preserve the current behavior.
+ */
+ bool acquire_ring_lock;
+
};
+static inline void set_channel_lock_state(struct vmbus_channel *c, bool state)
+{
+ c->acquire_ring_lock = state;
+}
+
+static inline bool is_hvsock_channel(const struct vmbus_channel *c)
+{
+ return !!(c->offermsg.offer.chn_flags &
+ VMBUS_CHANNEL_TLNPI_PROVIDER_OFFER);
+}
+
static inline void set_channel_signal_state(struct vmbus_channel *c,
enum hv_signal_policy policy)
{
@@ -790,6 +856,12 @@
return c->per_channel_state;
}
+static inline void set_channel_pending_send_size(struct vmbus_channel *c,
+ u32 size)
+{
+ c->outbound.ring_buffer->pending_send_sz = size;
+}
+
void vmbus_onmessage(void *context);
int vmbus_request_offers(void);
@@ -801,6 +873,9 @@
void vmbus_set_sc_create_callback(struct vmbus_channel *primary_channel,
void (*sc_cr_cb)(struct vmbus_channel *new_sc));
+void vmbus_set_chn_rescind_callback(struct vmbus_channel *channel,
+ void (*chn_rescind_cb)(struct vmbus_channel *));
+
/*
* Retrieve the (sub) channel on which to send an outgoing request.
* When a primary channel has multiple sub-channels, we choose a
@@ -940,6 +1015,20 @@
struct hv_driver {
const char *name;
+ /*
+ * An hvsock offer, which has a VMBUS_CHANNEL_TLNPI_PROVIDER_OFFER
+ * channel flag, doesn't actually describe a synthetic device, because
+ * the offer's if_type/if_instance can change for every new hvsock
+ * connection.
+ *
+ * However, to facilitate the notification of new-offer/rescind-offer
+ * from the vmbus driver to the hvsock driver, we can handle an hvsock
+ * offer as a special vmbus device, and hence we need the flag below to
+ * indicate whether the driver is the hvsock driver: we need to treat
+ * the hvsock offer & driver specially in vmbus_match().
+ */
+ bool hvsock;
+
/* the device type supported by this driver */
uuid_le dev_type;
const struct hv_vmbus_device_id *id_table;
@@ -959,6 +1048,8 @@
/* the device instance id of this device */
uuid_le dev_instance;
+ u16 vendor_id;
+ u16 device_id;
struct device device;
@@ -994,6 +1085,8 @@
const char *mod_name);
void vmbus_driver_unregister(struct hv_driver *hv_driver);
+void vmbus_hvsock_device_unregister(struct vmbus_channel *channel);
+
int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
resource_size_t min, resource_size_t max,
resource_size_t size, resource_size_t align,
@@ -1242,4 +1335,6 @@
extern __u32 vmbus_proto_version;
+int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id,
+ const uuid_le *shv_host_servie_id);
#endif /* _HYPERV_H */
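How the hvsock pieces fit together, as a hedged sketch: a driver that only binds to hvsock channels checks is_hvsock_channel() and registers a rescind callback so it learns when the host revokes the offer. The function names and bodies are illustrative:

static void example_rescind_cb(struct vmbus_channel *chan)
{
	/* called from vmbus_onoffer_rescind(); tear down the
	 * per-connection state for this channel here */
}

static int example_probe(struct hv_device *dev,
			 const struct hv_vmbus_device_id *id)
{
	struct vmbus_channel *chan = dev->channel;

	if (!is_hvsock_channel(chan))
		return -ENODEV;

	vmbus_set_chn_rescind_callback(chan, example_rescind_cb);
	return 0;
}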
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 821273c..2d9b6500 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -235,6 +235,9 @@
/* low 64 bit */
#define dma_frcd_page_addr(d) (d & (((u64)-1) << PAGE_SHIFT))
+/* PRS_REG */
+#define DMA_PRS_PPR ((u32)1)
+
#define IOMMU_WAIT_OP(iommu, offset, op, cond, sts) \
do { \
cycles_t start_time = get_cycles(); \
diff --git a/include/linux/libata.h b/include/linux/libata.h
index 851821b..bec2abb 100644
--- a/include/linux/libata.h
+++ b/include/linux/libata.h
@@ -526,6 +526,7 @@
enum ata_lpm_hints {
ATA_LPM_EMPTY = (1 << 0), /* port empty/probing */
ATA_LPM_HIPM = (1 << 1), /* may use HIPM */
+ ATA_LPM_WAKE_ONLY = (1 << 2), /* only wake up link */
};
/* forward declarations */
diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
index bed40df..141ffdd 100644
--- a/include/linux/libnvdimm.h
+++ b/include/linux/libnvdimm.h
@@ -26,9 +26,8 @@
/* need to set a limit somewhere, but yes, this is likely overkill */
ND_IOCTL_MAX_BUFLEN = SZ_4M,
- ND_CMD_MAX_ELEM = 4,
+ ND_CMD_MAX_ELEM = 5,
ND_CMD_MAX_ENVELOPE = 16,
- ND_CMD_ARS_STATUS_MAX = SZ_4K,
ND_MAX_MAPPINGS = 32,
/* region flag indicating to direct-map persistent memory by default */
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index d675011..2190419 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -135,6 +135,10 @@
/* Memory types */
NVM_ID_FMTYPE_SLC = 0,
NVM_ID_FMTYPE_MLC = 1,
+
+ /* Device capabilities */
+ NVM_ID_DCAP_BBLKMGMT = 0x1,
+ NVM_UD_DCAP_ECC = 0x2,
};
struct nvm_id_lp_mlc {
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index c57e424..4dca42f 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -66,7 +66,7 @@
/*
* class-hash:
*/
- struct list_head hash_entry;
+ struct hlist_node hash_entry;
/*
* global list of all lock-classes:
@@ -199,7 +199,7 @@
u8 irq_context;
u8 depth;
u16 base;
- struct list_head entry;
+ struct hlist_node entry;
u64 chain_key;
};
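The motivation is footprint: an hlist_head is a single pointer where a list_head is two, which halves lockdep's large static hash arrays; the trade-off is that hlist traversal only goes forward. A generic sketch of the bucket pattern (bucket count and names illustrative):

static struct hlist_head example_hash[1 << 10];	/* 1024 one-pointer buckets */

static void example_insert(struct lock_class *class, unsigned long hash)
{
	hlist_add_head_rcu(&class->hash_entry, &example_hash[hash & 1023]);
}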
diff --git a/include/linux/mlx4/device.h b/include/linux/mlx4/device.h
index 430a929..a0e8cc8 100644
--- a/include/linux/mlx4/device.h
+++ b/include/linux/mlx4/device.h
@@ -44,6 +44,8 @@
#include <linux/timecounter.h>
+#define DEFAULT_UAR_PAGE_SHIFT 12
+
#define MAX_MSIX_P_PORT 17
#define MAX_MSIX 64
#define MIN_MSIX_P_PORT 5
@@ -856,6 +858,7 @@
u64 regid_promisc_array[MLX4_MAX_PORTS + 1];
u64 regid_allmulti_array[MLX4_MAX_PORTS + 1];
struct mlx4_vf_dev *dev_vfs;
+ u8 uar_page_shift;
};
struct mlx4_clock_params {
@@ -1528,4 +1531,14 @@
int mlx4_get_internal_clock_params(struct mlx4_dev *dev,
struct mlx4_clock_params *params);
+static inline int mlx4_to_hw_uar_index(struct mlx4_dev *dev, int index)
+{
+ return (index << (PAGE_SHIFT - dev->uar_page_shift));
+}
+
+static inline int mlx4_get_num_reserved_uar(struct mlx4_dev *dev)
+{
+ /* The first 128 UARs are used for EQ doorbells */
+ return (128 >> (PAGE_SHIFT - dev->uar_page_shift));
+}
#endif /* MLX4_DEVICE_H */
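Worked numbers for the two helpers, assuming a 64K-page kernel (PAGE_SHIFT == 16) driving hardware that uses 4K UAR pages (uar_page_shift == 12):

	/* mlx4_to_hw_uar_index(dev, 1)   == 1 << (16 - 12)   == 16 */
	/* mlx4_get_num_reserved_uar(dev) == 128 >> (16 - 12) == 8  */

so kernel UAR index 1 starts at hardware UAR 16, and the 128 EQ-doorbell UARs fit in the first 8 kernel pages. When PAGE_SHIFT equals uar_page_shift the mapping is the identity.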
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 231ab6b..51f1e54 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -207,15 +207,15 @@
u8 outer_dmac[0x1];
u8 outer_smac[0x1];
u8 outer_ether_type[0x1];
- u8 reserved_0[0x1];
+ u8 reserved_at_3[0x1];
u8 outer_first_prio[0x1];
u8 outer_first_cfi[0x1];
u8 outer_first_vid[0x1];
- u8 reserved_1[0x1];
+ u8 reserved_at_7[0x1];
u8 outer_second_prio[0x1];
u8 outer_second_cfi[0x1];
u8 outer_second_vid[0x1];
- u8 reserved_2[0x1];
+ u8 reserved_at_b[0x1];
u8 outer_sip[0x1];
u8 outer_dip[0x1];
u8 outer_frag[0x1];
@@ -230,21 +230,21 @@
u8 outer_gre_protocol[0x1];
u8 outer_gre_key[0x1];
u8 outer_vxlan_vni[0x1];
- u8 reserved_3[0x5];
+ u8 reserved_at_1a[0x5];
u8 source_eswitch_port[0x1];
u8 inner_dmac[0x1];
u8 inner_smac[0x1];
u8 inner_ether_type[0x1];
- u8 reserved_4[0x1];
+ u8 reserved_at_23[0x1];
u8 inner_first_prio[0x1];
u8 inner_first_cfi[0x1];
u8 inner_first_vid[0x1];
- u8 reserved_5[0x1];
+ u8 reserved_at_27[0x1];
u8 inner_second_prio[0x1];
u8 inner_second_cfi[0x1];
u8 inner_second_vid[0x1];
- u8 reserved_6[0x1];
+ u8 reserved_at_2b[0x1];
u8 inner_sip[0x1];
u8 inner_dip[0x1];
u8 inner_frag[0x1];
@@ -256,37 +256,37 @@
u8 inner_tcp_sport[0x1];
u8 inner_tcp_dport[0x1];
u8 inner_tcp_flags[0x1];
- u8 reserved_7[0x9];
+ u8 reserved_at_37[0x9];
- u8 reserved_8[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_flow_table_prop_layout_bits {
u8 ft_support[0x1];
- u8 reserved_0[0x2];
+ u8 reserved_at_1[0x2];
u8 flow_modify_en[0x1];
u8 modify_root[0x1];
u8 identified_miss_table_mode[0x1];
u8 flow_table_modify[0x1];
- u8 reserved_1[0x19];
+ u8 reserved_at_7[0x19];
- u8 reserved_2[0x2];
+ u8 reserved_at_20[0x2];
u8 log_max_ft_size[0x6];
- u8 reserved_3[0x10];
+ u8 reserved_at_28[0x10];
u8 max_ft_level[0x8];
- u8 reserved_4[0x20];
+ u8 reserved_at_40[0x20];
- u8 reserved_5[0x18];
+ u8 reserved_at_60[0x18];
u8 log_max_ft_num[0x8];
- u8 reserved_6[0x18];
+ u8 reserved_at_80[0x18];
u8 log_max_destination[0x8];
- u8 reserved_7[0x18];
+ u8 reserved_at_a0[0x18];
u8 log_max_flow[0x8];
- u8 reserved_8[0x40];
+ u8 reserved_at_c0[0x40];
struct mlx5_ifc_flow_table_fields_supported_bits ft_field_support;
@@ -298,13 +298,13 @@
u8 receive[0x1];
u8 write[0x1];
u8 read[0x1];
- u8 reserved_0[0x1];
+ u8 reserved_at_4[0x1];
u8 srq_receive[0x1];
- u8 reserved_1[0x1a];
+ u8 reserved_at_6[0x1a];
};
struct mlx5_ifc_ipv4_layout_bits {
- u8 reserved_0[0x60];
+ u8 reserved_at_0[0x60];
u8 ipv4[0x20];
};
@@ -316,7 +316,7 @@
union mlx5_ifc_ipv6_layout_ipv4_layout_auto_bits {
struct mlx5_ifc_ipv6_layout_bits ipv6_layout;
struct mlx5_ifc_ipv4_layout_bits ipv4_layout;
- u8 reserved_0[0x80];
+ u8 reserved_at_0[0x80];
};
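The pattern behind this long rename: in mlx5_ifc.h each u8 array dimension is a width in bits, so a struct is a textual bit-layout of a firmware object, and the new suffix is the field's starting bit offset. Taking the union's ipv4 member above as the example:

/* struct mlx5_ifc_ipv4_layout_bits {
 *	u8 reserved_at_0[0x60];	// bits 0x00..0x5f
 *	u8 ipv4[0x20];		// bits 0x60..0x7f
 * };
 *
 * carving a new field out of a reserved range now only splits that one
 * entry (new field + reserved_at_<next offset>) instead of renumbering
 * every later reserved_N in the file. */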
struct mlx5_ifc_fte_match_set_lyr_2_4_bits {
@@ -336,15 +336,15 @@
u8 ip_dscp[0x6];
u8 ip_ecn[0x2];
u8 vlan_tag[0x1];
- u8 reserved_0[0x1];
+ u8 reserved_at_91[0x1];
u8 frag[0x1];
- u8 reserved_1[0x4];
+ u8 reserved_at_93[0x4];
u8 tcp_flags[0x9];
u8 tcp_sport[0x10];
u8 tcp_dport[0x10];
- u8 reserved_2[0x20];
+ u8 reserved_at_c0[0x20];
u8 udp_sport[0x10];
u8 udp_dport[0x10];
@@ -355,9 +355,9 @@
};
struct mlx5_ifc_fte_match_set_misc_bits {
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 source_port[0x10];
u8 outer_second_prio[0x3];
@@ -369,31 +369,31 @@
u8 outer_second_vlan_tag[0x1];
u8 inner_second_vlan_tag[0x1];
- u8 reserved_2[0xe];
+ u8 reserved_at_62[0xe];
u8 gre_protocol[0x10];
u8 gre_key_h[0x18];
u8 gre_key_l[0x8];
u8 vxlan_vni[0x18];
- u8 reserved_3[0x8];
+ u8 reserved_at_b8[0x8];
- u8 reserved_4[0x20];
+ u8 reserved_at_c0[0x20];
- u8 reserved_5[0xc];
+ u8 reserved_at_e0[0xc];
u8 outer_ipv6_flow_label[0x14];
- u8 reserved_6[0xc];
+ u8 reserved_at_100[0xc];
u8 inner_ipv6_flow_label[0x14];
- u8 reserved_7[0xe0];
+ u8 reserved_at_120[0xe0];
};
struct mlx5_ifc_cmd_pas_bits {
u8 pa_h[0x20];
u8 pa_l[0x14];
- u8 reserved_0[0xc];
+ u8 reserved_at_34[0xc];
};
struct mlx5_ifc_uint64_bits {
@@ -418,31 +418,31 @@
struct mlx5_ifc_ads_bits {
u8 fl[0x1];
u8 free_ar[0x1];
- u8 reserved_0[0xe];
+ u8 reserved_at_2[0xe];
u8 pkey_index[0x10];
- u8 reserved_1[0x8];
+ u8 reserved_at_20[0x8];
u8 grh[0x1];
u8 mlid[0x7];
u8 rlid[0x10];
u8 ack_timeout[0x5];
- u8 reserved_2[0x3];
+ u8 reserved_at_45[0x3];
u8 src_addr_index[0x8];
- u8 reserved_3[0x4];
+ u8 reserved_at_50[0x4];
u8 stat_rate[0x4];
u8 hop_limit[0x8];
- u8 reserved_4[0x4];
+ u8 reserved_at_60[0x4];
u8 tclass[0x8];
u8 flow_label[0x14];
u8 rgid_rip[16][0x8];
- u8 reserved_5[0x4];
+ u8 reserved_at_100[0x4];
u8 f_dscp[0x1];
u8 f_ecn[0x1];
- u8 reserved_6[0x1];
+ u8 reserved_at_106[0x1];
u8 f_eth_prio[0x1];
u8 ecn[0x2];
u8 dscp[0x6];
@@ -458,25 +458,25 @@
};
struct mlx5_ifc_flow_table_nic_cap_bits {
- u8 reserved_0[0x200];
+ u8 reserved_at_0[0x200];
struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_nic_receive;
- u8 reserved_1[0x200];
+ u8 reserved_at_400[0x200];
struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_nic_receive_sniffer;
struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_nic_transmit;
- u8 reserved_2[0x200];
+ u8 reserved_at_a00[0x200];
struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_nic_transmit_sniffer;
- u8 reserved_3[0x7200];
+ u8 reserved_at_e00[0x7200];
};
struct mlx5_ifc_flow_table_eswitch_cap_bits {
- u8 reserved_0[0x200];
+ u8 reserved_at_0[0x200];
struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_nic_esw_fdb;
@@ -484,7 +484,7 @@
struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties_esw_acl_egress;
- u8 reserved_1[0x7800];
+ u8 reserved_at_800[0x7800];
};
struct mlx5_ifc_e_switch_cap_bits {
@@ -493,9 +493,9 @@
u8 vport_svlan_insert[0x1];
u8 vport_cvlan_insert_if_not_exist[0x1];
u8 vport_cvlan_insert_overwrite[0x1];
- u8 reserved_0[0x1b];
+ u8 reserved_at_5[0x1b];
- u8 reserved_1[0x7e0];
+ u8 reserved_at_20[0x7e0];
};
struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
@@ -504,51 +504,51 @@
u8 lro_cap[0x1];
u8 lro_psh_flag[0x1];
u8 lro_time_stamp[0x1];
- u8 reserved_0[0x3];
+ u8 reserved_at_5[0x3];
u8 self_lb_en_modifiable[0x1];
- u8 reserved_1[0x2];
+ u8 reserved_at_9[0x2];
u8 max_lso_cap[0x5];
- u8 reserved_2[0x4];
+ u8 reserved_at_10[0x4];
u8 rss_ind_tbl_cap[0x4];
- u8 reserved_3[0x3];
+ u8 reserved_at_18[0x3];
u8 tunnel_lso_const_out_ip_id[0x1];
- u8 reserved_4[0x2];
+ u8 reserved_at_1c[0x2];
u8 tunnel_statless_gre[0x1];
u8 tunnel_stateless_vxlan[0x1];
- u8 reserved_5[0x20];
+ u8 reserved_at_20[0x20];
- u8 reserved_6[0x10];
+ u8 reserved_at_40[0x10];
u8 lro_min_mss_size[0x10];
- u8 reserved_7[0x120];
+ u8 reserved_at_60[0x120];
u8 lro_timer_supported_periods[4][0x20];
- u8 reserved_8[0x600];
+ u8 reserved_at_200[0x600];
};
struct mlx5_ifc_roce_cap_bits {
u8 roce_apm[0x1];
- u8 reserved_0[0x1f];
+ u8 reserved_at_1[0x1f];
- u8 reserved_1[0x60];
+ u8 reserved_at_20[0x60];
- u8 reserved_2[0xc];
+ u8 reserved_at_80[0xc];
u8 l3_type[0x4];
- u8 reserved_3[0x8];
+ u8 reserved_at_90[0x8];
u8 roce_version[0x8];
- u8 reserved_4[0x10];
+ u8 reserved_at_a0[0x10];
u8 r_roce_dest_udp_port[0x10];
u8 r_roce_max_src_udp_port[0x10];
u8 r_roce_min_src_udp_port[0x10];
- u8 reserved_5[0x10];
+ u8 reserved_at_e0[0x10];
u8 roce_address_table_size[0x10];
- u8 reserved_6[0x700];
+ u8 reserved_at_100[0x700];
};
enum {
@@ -576,35 +576,35 @@
};
struct mlx5_ifc_atomic_caps_bits {
- u8 reserved_0[0x40];
+ u8 reserved_at_0[0x40];
u8 atomic_req_8B_endianess_mode[0x2];
- u8 reserved_1[0x4];
+ u8 reserved_at_42[0x4];
u8 supported_atomic_req_8B_endianess_mode_1[0x1];
- u8 reserved_2[0x19];
+ u8 reserved_at_47[0x19];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
- u8 reserved_4[0x10];
+ u8 reserved_at_80[0x10];
u8 atomic_operations[0x10];
- u8 reserved_5[0x10];
+ u8 reserved_at_a0[0x10];
u8 atomic_size_qp[0x10];
- u8 reserved_6[0x10];
+ u8 reserved_at_c0[0x10];
u8 atomic_size_dc[0x10];
- u8 reserved_7[0x720];
+ u8 reserved_at_e0[0x720];
};
struct mlx5_ifc_odp_cap_bits {
- u8 reserved_0[0x40];
+ u8 reserved_at_0[0x40];
u8 sig[0x1];
- u8 reserved_1[0x1f];
+ u8 reserved_at_41[0x1f];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
struct mlx5_ifc_odp_per_transport_service_cap_bits rc_odp_caps;
@@ -612,7 +612,7 @@
struct mlx5_ifc_odp_per_transport_service_cap_bits ud_odp_caps;
- u8 reserved_3[0x720];
+ u8 reserved_at_e0[0x720];
};
enum {
@@ -660,55 +660,55 @@
};
struct mlx5_ifc_cmd_hca_cap_bits {
- u8 reserved_0[0x80];
+ u8 reserved_at_0[0x80];
u8 log_max_srq_sz[0x8];
u8 log_max_qp_sz[0x8];
- u8 reserved_1[0xb];
+ u8 reserved_at_90[0xb];
u8 log_max_qp[0x5];
- u8 reserved_2[0xb];
+ u8 reserved_at_a0[0xb];
u8 log_max_srq[0x5];
- u8 reserved_3[0x10];
+ u8 reserved_at_b0[0x10];
- u8 reserved_4[0x8];
+ u8 reserved_at_c0[0x8];
u8 log_max_cq_sz[0x8];
- u8 reserved_5[0xb];
+ u8 reserved_at_d0[0xb];
u8 log_max_cq[0x5];
u8 log_max_eq_sz[0x8];
- u8 reserved_6[0x2];
+ u8 reserved_at_e8[0x2];
u8 log_max_mkey[0x6];
- u8 reserved_7[0xc];
+ u8 reserved_at_f0[0xc];
u8 log_max_eq[0x4];
u8 max_indirection[0x8];
- u8 reserved_8[0x1];
+ u8 reserved_at_108[0x1];
u8 log_max_mrw_sz[0x7];
- u8 reserved_9[0x2];
+ u8 reserved_at_110[0x2];
u8 log_max_bsf_list_size[0x6];
- u8 reserved_10[0x2];
+ u8 reserved_at_118[0x2];
u8 log_max_klm_list_size[0x6];
- u8 reserved_11[0xa];
+ u8 reserved_at_120[0xa];
u8 log_max_ra_req_dc[0x6];
- u8 reserved_12[0xa];
+ u8 reserved_at_130[0xa];
u8 log_max_ra_res_dc[0x6];
- u8 reserved_13[0xa];
+ u8 reserved_at_140[0xa];
u8 log_max_ra_req_qp[0x6];
- u8 reserved_14[0xa];
+ u8 reserved_at_150[0xa];
u8 log_max_ra_res_qp[0x6];
u8 pad_cap[0x1];
u8 cc_query_allowed[0x1];
u8 cc_modify_allowed[0x1];
- u8 reserved_15[0xd];
+ u8 reserved_at_163[0xd];
u8 gid_table_size[0x10];
u8 out_of_seq_cnt[0x1];
u8 vport_counters[0x1];
- u8 reserved_16[0x4];
+ u8 reserved_at_182[0x4];
u8 max_qp_cnt[0xa];
u8 pkey_table_size[0x10];
@@ -716,158 +716,158 @@
u8 vhca_group_manager[0x1];
u8 ib_virt[0x1];
u8 eth_virt[0x1];
- u8 reserved_17[0x1];
+ u8 reserved_at_1a4[0x1];
u8 ets[0x1];
u8 nic_flow_table[0x1];
u8 eswitch_flow_table[0x1];
u8 early_vf_enable;
- u8 reserved_18[0x2];
+ u8 reserved_at_1a8[0x2];
u8 local_ca_ack_delay[0x5];
- u8 reserved_19[0x6];
+ u8 reserved_at_1af[0x6];
u8 port_type[0x2];
u8 num_ports[0x8];
- u8 reserved_20[0x3];
+ u8 reserved_at_1bf[0x3];
u8 log_max_msg[0x5];
- u8 reserved_21[0x18];
+ u8 reserved_at_1c7[0x18];
u8 stat_rate_support[0x10];
- u8 reserved_22[0xc];
+ u8 reserved_at_1ef[0xc];
u8 cqe_version[0x4];
u8 compact_address_vector[0x1];
- u8 reserved_23[0xe];
+ u8 reserved_at_200[0xe];
u8 drain_sigerr[0x1];
u8 cmdif_checksum[0x2];
u8 sigerr_cqe[0x1];
- u8 reserved_24[0x1];
+ u8 reserved_at_212[0x1];
u8 wq_signature[0x1];
u8 sctr_data_cqe[0x1];
- u8 reserved_25[0x1];
+ u8 reserved_at_215[0x1];
u8 sho[0x1];
u8 tph[0x1];
u8 rf[0x1];
u8 dct[0x1];
- u8 reserved_26[0x1];
+ u8 reserved_at_21a[0x1];
u8 eth_net_offloads[0x1];
u8 roce[0x1];
u8 atomic[0x1];
- u8 reserved_27[0x1];
+ u8 reserved_at_21e[0x1];
u8 cq_oi[0x1];
u8 cq_resize[0x1];
u8 cq_moderation[0x1];
- u8 reserved_28[0x3];
+ u8 reserved_at_222[0x3];
u8 cq_eq_remap[0x1];
u8 pg[0x1];
u8 block_lb_mc[0x1];
- u8 reserved_29[0x1];
+ u8 reserved_at_228[0x1];
u8 scqe_break_moderation[0x1];
- u8 reserved_30[0x1];
+ u8 reserved_at_22a[0x1];
u8 cd[0x1];
- u8 reserved_31[0x1];
+ u8 reserved_at_22c[0x1];
u8 apm[0x1];
- u8 reserved_32[0x7];
+ u8 reserved_at_22e[0x7];
u8 qkv[0x1];
u8 pkv[0x1];
- u8 reserved_33[0x4];
+ u8 reserved_at_237[0x4];
u8 xrc[0x1];
u8 ud[0x1];
u8 uc[0x1];
u8 rc[0x1];
- u8 reserved_34[0xa];
+ u8 reserved_at_23f[0xa];
u8 uar_sz[0x6];
- u8 reserved_35[0x8];
+ u8 reserved_at_24f[0x8];
u8 log_pg_sz[0x8];
u8 bf[0x1];
- u8 reserved_36[0x1];
+ u8 reserved_at_260[0x1];
u8 pad_tx_eth_packet[0x1];
- u8 reserved_37[0x8];
+ u8 reserved_at_262[0x8];
u8 log_bf_reg_size[0x5];
- u8 reserved_38[0x10];
+ u8 reserved_at_26f[0x10];
- u8 reserved_39[0x10];
+ u8 reserved_at_27f[0x10];
u8 max_wqe_sz_sq[0x10];
- u8 reserved_40[0x10];
+ u8 reserved_at_29f[0x10];
u8 max_wqe_sz_rq[0x10];
- u8 reserved_41[0x10];
+ u8 reserved_at_2bf[0x10];
u8 max_wqe_sz_sq_dc[0x10];
- u8 reserved_42[0x7];
+ u8 reserved_at_2df[0x7];
u8 max_qp_mcg[0x19];
- u8 reserved_43[0x18];
+ u8 reserved_at_2ff[0x18];
u8 log_max_mcg[0x8];
- u8 reserved_44[0x3];
+ u8 reserved_at_31f[0x3];
u8 log_max_transport_domain[0x5];
- u8 reserved_45[0x3];
+ u8 reserved_at_327[0x3];
u8 log_max_pd[0x5];
- u8 reserved_46[0xb];
+ u8 reserved_at_32f[0xb];
u8 log_max_xrcd[0x5];
- u8 reserved_47[0x20];
+ u8 reserved_at_33f[0x20];
- u8 reserved_48[0x3];
+ u8 reserved_at_35f[0x3];
u8 log_max_rq[0x5];
- u8 reserved_49[0x3];
+ u8 reserved_at_367[0x3];
u8 log_max_sq[0x5];
- u8 reserved_50[0x3];
+ u8 reserved_at_36f[0x3];
u8 log_max_tir[0x5];
- u8 reserved_51[0x3];
+ u8 reserved_at_377[0x3];
u8 log_max_tis[0x5];
u8 basic_cyclic_rcv_wqe[0x1];
- u8 reserved_52[0x2];
+ u8 reserved_at_380[0x2];
u8 log_max_rmp[0x5];
- u8 reserved_53[0x3];
+ u8 reserved_at_387[0x3];
u8 log_max_rqt[0x5];
- u8 reserved_54[0x3];
+ u8 reserved_at_38f[0x3];
u8 log_max_rqt_size[0x5];
- u8 reserved_55[0x3];
+ u8 reserved_at_397[0x3];
u8 log_max_tis_per_sq[0x5];
- u8 reserved_56[0x3];
+ u8 reserved_at_39f[0x3];
u8 log_max_stride_sz_rq[0x5];
- u8 reserved_57[0x3];
+ u8 reserved_at_3a7[0x3];
u8 log_min_stride_sz_rq[0x5];
- u8 reserved_58[0x3];
+ u8 reserved_at_3af[0x3];
u8 log_max_stride_sz_sq[0x5];
- u8 reserved_59[0x3];
+ u8 reserved_at_3b7[0x3];
u8 log_min_stride_sz_sq[0x5];
- u8 reserved_60[0x1b];
+ u8 reserved_at_3bf[0x1b];
u8 log_max_wq_sz[0x5];
u8 nic_vport_change_event[0x1];
- u8 reserved_61[0xa];
+ u8 reserved_at_3e0[0xa];
u8 log_max_vlan_list[0x5];
- u8 reserved_62[0x3];
+ u8 reserved_at_3ef[0x3];
u8 log_max_current_mc_list[0x5];
- u8 reserved_63[0x3];
+ u8 reserved_at_3f7[0x3];
u8 log_max_current_uc_list[0x5];
- u8 reserved_64[0x80];
+ u8 reserved_at_3ff[0x80];
- u8 reserved_65[0x3];
+ u8 reserved_at_47f[0x3];
u8 log_max_l2_table[0x5];
- u8 reserved_66[0x8];
+ u8 reserved_at_487[0x8];
u8 log_uar_page_sz[0x10];
- u8 reserved_67[0x20];
+ u8 reserved_at_49f[0x20];
u8 device_frequency_mhz[0x20];
u8 device_frequency_khz[0x20];
- u8 reserved_68[0x5f];
+ u8 reserved_at_4ff[0x5f];
u8 cqe_zip[0x1];
u8 cqe_zip_timeout[0x10];
u8 cqe_zip_max_num[0x10];
- u8 reserved_69[0x220];
+ u8 reserved_at_57f[0x220];
};
enum mlx5_flow_destination_type {
@@ -880,7 +880,7 @@
u8 destination_type[0x8];
u8 destination_id[0x18];
- u8 reserved_0[0x20];
+ u8 reserved_at_20[0x20];
};
struct mlx5_ifc_fte_match_param_bits {
@@ -890,7 +890,7 @@
struct mlx5_ifc_fte_match_set_lyr_2_4_bits inner_headers;
- u8 reserved_0[0xa00];
+ u8 reserved_at_600[0xa00];
};
enum {
@@ -922,18 +922,18 @@
u8 wq_signature[0x1];
u8 end_padding_mode[0x2];
u8 cd_slave[0x1];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 hds_skip_first_sge[0x1];
u8 log2_hds_buf_size[0x3];
- u8 reserved_1[0x7];
+ u8 reserved_at_24[0x7];
u8 page_offset[0x5];
u8 lwm[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 pd[0x18];
- u8 reserved_3[0x8];
+ u8 reserved_at_60[0x8];
u8 uar_page[0x18];
u8 dbr_addr[0x40];
@@ -942,60 +942,60 @@
u8 sw_counter[0x20];
- u8 reserved_4[0xc];
+ u8 reserved_at_100[0xc];
u8 log_wq_stride[0x4];
- u8 reserved_5[0x3];
+ u8 reserved_at_110[0x3];
u8 log_wq_pg_sz[0x5];
- u8 reserved_6[0x3];
+ u8 reserved_at_118[0x3];
u8 log_wq_sz[0x5];
- u8 reserved_7[0x4e0];
+ u8 reserved_at_120[0x4e0];
struct mlx5_ifc_cmd_pas_bits pas[0];
};
struct mlx5_ifc_rq_num_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 rq_num[0x18];
};
struct mlx5_ifc_mac_address_layout_bits {
- u8 reserved_0[0x10];
+ u8 reserved_at_0[0x10];
u8 mac_addr_47_32[0x10];
u8 mac_addr_31_0[0x20];
};
struct mlx5_ifc_vlan_layout_bits {
- u8 reserved_0[0x14];
+ u8 reserved_at_0[0x14];
u8 vlan[0x0c];
- u8 reserved_1[0x20];
+ u8 reserved_at_20[0x20];
};
struct mlx5_ifc_cong_control_r_roce_ecn_np_bits {
- u8 reserved_0[0xa0];
+ u8 reserved_at_0[0xa0];
u8 min_time_between_cnps[0x20];
- u8 reserved_1[0x12];
+ u8 reserved_at_c0[0x12];
u8 cnp_dscp[0x6];
- u8 reserved_2[0x5];
+ u8 reserved_at_d8[0x5];
u8 cnp_802p_prio[0x3];
- u8 reserved_3[0x720];
+ u8 reserved_at_e0[0x720];
};
struct mlx5_ifc_cong_control_r_roce_ecn_rp_bits {
- u8 reserved_0[0x60];
+ u8 reserved_at_0[0x60];
- u8 reserved_1[0x4];
+ u8 reserved_at_60[0x4];
u8 clamp_tgt_rate[0x1];
- u8 reserved_2[0x3];
+ u8 reserved_at_65[0x3];
u8 clamp_tgt_rate_after_time_inc[0x1];
- u8 reserved_3[0x17];
+ u8 reserved_at_69[0x17];
- u8 reserved_4[0x20];
+ u8 reserved_at_80[0x20];
u8 rpg_time_reset[0x20];
@@ -1015,7 +1015,7 @@
u8 rpg_min_rate[0x20];
- u8 reserved_5[0xe0];
+ u8 reserved_at_1c0[0xe0];
u8 rate_to_set_on_first_cnp[0x20];
@@ -1025,15 +1025,15 @@
u8 rate_reduce_monitor_period[0x20];
- u8 reserved_6[0x20];
+ u8 reserved_at_320[0x20];
u8 initial_alpha_value[0x20];
- u8 reserved_7[0x4a0];
+ u8 reserved_at_360[0x4a0];
};
struct mlx5_ifc_cong_control_802_1qau_rp_bits {
- u8 reserved_0[0x80];
+ u8 reserved_at_0[0x80];
u8 rppp_max_rps[0x20];
@@ -1055,7 +1055,7 @@
u8 rpg_min_rate[0x20];
- u8 reserved_1[0x640];
+ u8 reserved_at_1c0[0x640];
};
enum {
@@ -1205,7 +1205,7 @@
u8 successful_recovery_events[0x20];
- u8 reserved_0[0x180];
+ u8 reserved_at_640[0x180];
};
struct mlx5_ifc_eth_per_traffic_grp_data_layout_bits {
@@ -1213,7 +1213,7 @@
u8 transmit_queue_low[0x20];
- u8 reserved_0[0x780];
+ u8 reserved_at_40[0x780];
};
struct mlx5_ifc_eth_per_prio_grp_data_layout_bits {
@@ -1221,7 +1221,7 @@
u8 rx_octets_low[0x20];
- u8 reserved_0[0xc0];
+ u8 reserved_at_40[0xc0];
u8 rx_frames_high[0x20];
@@ -1231,7 +1231,7 @@
u8 tx_octets_low[0x20];
- u8 reserved_1[0xc0];
+ u8 reserved_at_180[0xc0];
u8 tx_frames_high[0x20];
@@ -1257,7 +1257,7 @@
u8 rx_pause_transition_low[0x20];
- u8 reserved_2[0x400];
+ u8 reserved_at_3c0[0x400];
};
struct mlx5_ifc_eth_extended_cntrs_grp_data_layout_bits {
@@ -1265,7 +1265,7 @@
u8 port_transmit_wait_low[0x20];
- u8 reserved_0[0x780];
+ u8 reserved_at_40[0x780];
};
struct mlx5_ifc_eth_3635_cntrs_grp_data_layout_bits {
@@ -1333,7 +1333,7 @@
u8 dot3out_pause_frames_low[0x20];
- u8 reserved_0[0x3c0];
+ u8 reserved_at_400[0x3c0];
};
struct mlx5_ifc_eth_2819_cntrs_grp_data_layout_bits {
@@ -1421,7 +1421,7 @@
u8 ether_stats_pkts8192to10239octets_low[0x20];
- u8 reserved_0[0x280];
+ u8 reserved_at_540[0x280];
};
struct mlx5_ifc_eth_2863_cntrs_grp_data_layout_bits {
@@ -1477,7 +1477,7 @@
u8 if_out_broadcast_pkts_low[0x20];
- u8 reserved_0[0x480];
+ u8 reserved_at_340[0x480];
};
struct mlx5_ifc_eth_802_3_cntrs_grp_data_layout_bits {
@@ -1557,54 +1557,54 @@
u8 a_pause_mac_ctrl_frames_transmitted_low[0x20];
- u8 reserved_0[0x300];
+ u8 reserved_at_4c0[0x300];
};
struct mlx5_ifc_cmd_inter_comp_event_bits {
u8 command_completion_vector[0x20];
- u8 reserved_0[0xc0];
+ u8 reserved_at_20[0xc0];
};
struct mlx5_ifc_stall_vl_event_bits {
- u8 reserved_0[0x18];
+ u8 reserved_at_0[0x18];
u8 port_num[0x1];
- u8 reserved_1[0x3];
+ u8 reserved_at_19[0x3];
u8 vl[0x4];
- u8 reserved_2[0xa0];
+ u8 reserved_at_20[0xa0];
};
struct mlx5_ifc_db_bf_congestion_event_bits {
u8 event_subtype[0x8];
- u8 reserved_0[0x8];
+ u8 reserved_at_8[0x8];
u8 congestion_level[0x8];
- u8 reserved_1[0x8];
+ u8 reserved_at_18[0x8];
- u8 reserved_2[0xa0];
+ u8 reserved_at_20[0xa0];
};
struct mlx5_ifc_gpio_event_bits {
- u8 reserved_0[0x60];
+ u8 reserved_at_0[0x60];
u8 gpio_event_hi[0x20];
u8 gpio_event_lo[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_a0[0x40];
};
struct mlx5_ifc_port_state_change_event_bits {
- u8 reserved_0[0x40];
+ u8 reserved_at_0[0x40];
u8 port_num[0x4];
- u8 reserved_1[0x1c];
+ u8 reserved_at_44[0x1c];
- u8 reserved_2[0x80];
+ u8 reserved_at_60[0x80];
};
struct mlx5_ifc_dropped_packet_logged_bits {
- u8 reserved_0[0xe0];
+ u8 reserved_at_0[0xe0];
};
enum {
@@ -1613,15 +1613,15 @@
};
struct mlx5_ifc_cq_error_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 cqn[0x18];
- u8 reserved_1[0x20];
+ u8 reserved_at_20[0x20];
- u8 reserved_2[0x18];
+ u8 reserved_at_40[0x18];
u8 syndrome[0x8];
- u8 reserved_3[0x80];
+ u8 reserved_at_60[0x80];
};
struct mlx5_ifc_rdma_page_fault_event_bits {
@@ -1629,14 +1629,14 @@
u8 r_key[0x20];
- u8 reserved_0[0x10];
+ u8 reserved_at_40[0x10];
u8 packet_len[0x10];
u8 rdma_op_len[0x20];
u8 rdma_va[0x40];
- u8 reserved_1[0x5];
+ u8 reserved_at_c0[0x5];
u8 rdma[0x1];
u8 write[0x1];
u8 requestor[0x1];
@@ -1646,15 +1646,15 @@
struct mlx5_ifc_wqe_associated_page_fault_event_bits {
u8 bytes_committed[0x20];
- u8 reserved_0[0x10];
+ u8 reserved_at_20[0x10];
u8 wqe_index[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_40[0x10];
u8 len[0x10];
- u8 reserved_2[0x60];
+ u8 reserved_at_60[0x60];
- u8 reserved_3[0x5];
+ u8 reserved_at_c0[0x5];
u8 rdma[0x1];
u8 write_read[0x1];
u8 requestor[0x1];
@@ -1662,26 +1662,26 @@
};
struct mlx5_ifc_qp_events_bits {
- u8 reserved_0[0xa0];
+ u8 reserved_at_0[0xa0];
u8 type[0x8];
- u8 reserved_1[0x18];
+ u8 reserved_at_a8[0x18];
- u8 reserved_2[0x8];
+ u8 reserved_at_c0[0x8];
u8 qpn_rqn_sqn[0x18];
};
struct mlx5_ifc_dct_events_bits {
- u8 reserved_0[0xc0];
+ u8 reserved_at_0[0xc0];
- u8 reserved_1[0x8];
+ u8 reserved_at_c0[0x8];
u8 dct_number[0x18];
};
struct mlx5_ifc_comp_event_bits {
- u8 reserved_0[0xc0];
+ u8 reserved_at_0[0xc0];
- u8 reserved_1[0x8];
+ u8 reserved_at_c0[0x8];
u8 cq_number[0x18];
};
@@ -1754,41 +1754,41 @@
struct mlx5_ifc_qpc_bits {
u8 state[0x4];
- u8 reserved_0[0x4];
+ u8 reserved_at_4[0x4];
u8 st[0x8];
- u8 reserved_1[0x3];
+ u8 reserved_at_10[0x3];
u8 pm_state[0x2];
- u8 reserved_2[0x7];
+ u8 reserved_at_15[0x7];
u8 end_padding_mode[0x2];
- u8 reserved_3[0x2];
+ u8 reserved_at_1e[0x2];
u8 wq_signature[0x1];
u8 block_lb_mc[0x1];
u8 atomic_like_write_en[0x1];
u8 latency_sensitive[0x1];
- u8 reserved_4[0x1];
+ u8 reserved_at_24[0x1];
u8 drain_sigerr[0x1];
- u8 reserved_5[0x2];
+ u8 reserved_at_26[0x2];
u8 pd[0x18];
u8 mtu[0x3];
u8 log_msg_max[0x5];
- u8 reserved_6[0x1];
+ u8 reserved_at_48[0x1];
u8 log_rq_size[0x4];
u8 log_rq_stride[0x3];
u8 no_sq[0x1];
u8 log_sq_size[0x4];
- u8 reserved_7[0x6];
+ u8 reserved_at_55[0x6];
u8 rlky[0x1];
- u8 reserved_8[0x4];
+ u8 reserved_at_5c[0x4];
u8 counter_set_id[0x8];
u8 uar_page[0x18];
- u8 reserved_9[0x8];
+ u8 reserved_at_80[0x8];
u8 user_index[0x18];
- u8 reserved_10[0x3];
+ u8 reserved_at_a0[0x3];
u8 log_page_size[0x5];
u8 remote_qpn[0x18];
@@ -1797,66 +1797,66 @@
struct mlx5_ifc_ads_bits secondary_address_path;
u8 log_ack_req_freq[0x4];
- u8 reserved_11[0x4];
+ u8 reserved_at_384[0x4];
u8 log_sra_max[0x3];
- u8 reserved_12[0x2];
+ u8 reserved_at_38b[0x2];
u8 retry_count[0x3];
u8 rnr_retry[0x3];
- u8 reserved_13[0x1];
+ u8 reserved_at_393[0x1];
u8 fre[0x1];
u8 cur_rnr_retry[0x3];
u8 cur_retry_count[0x3];
- u8 reserved_14[0x5];
+ u8 reserved_at_39b[0x5];
- u8 reserved_15[0x20];
+ u8 reserved_at_3a0[0x20];
- u8 reserved_16[0x8];
+ u8 reserved_at_3c0[0x8];
u8 next_send_psn[0x18];
- u8 reserved_17[0x8];
+ u8 reserved_at_3e0[0x8];
u8 cqn_snd[0x18];
- u8 reserved_18[0x40];
+ u8 reserved_at_400[0x40];
- u8 reserved_19[0x8];
+ u8 reserved_at_440[0x8];
u8 last_acked_psn[0x18];
- u8 reserved_20[0x8];
+ u8 reserved_at_460[0x8];
u8 ssn[0x18];
- u8 reserved_21[0x8];
+ u8 reserved_at_480[0x8];
u8 log_rra_max[0x3];
- u8 reserved_22[0x1];
+ u8 reserved_at_48b[0x1];
u8 atomic_mode[0x4];
u8 rre[0x1];
u8 rwe[0x1];
u8 rae[0x1];
- u8 reserved_23[0x1];
+ u8 reserved_at_493[0x1];
u8 page_offset[0x6];
- u8 reserved_24[0x3];
+ u8 reserved_at_49a[0x3];
u8 cd_slave_receive[0x1];
u8 cd_slave_send[0x1];
u8 cd_master[0x1];
- u8 reserved_25[0x3];
+ u8 reserved_at_4a0[0x3];
u8 min_rnr_nak[0x5];
u8 next_rcv_psn[0x18];
- u8 reserved_26[0x8];
+ u8 reserved_at_4c0[0x8];
u8 xrcd[0x18];
- u8 reserved_27[0x8];
+ u8 reserved_at_4e0[0x8];
u8 cqn_rcv[0x18];
u8 dbr_addr[0x40];
u8 q_key[0x20];
- u8 reserved_28[0x5];
+ u8 reserved_at_560[0x5];
u8 rq_type[0x3];
u8 srqn_rmpn[0x18];
- u8 reserved_29[0x8];
+ u8 reserved_at_580[0x8];
u8 rmsn[0x18];
u8 hw_sq_wqebb_counter[0x10];
@@ -1866,33 +1866,33 @@
u8 sw_rq_counter[0x20];
- u8 reserved_30[0x20];
+ u8 reserved_at_600[0x20];
- u8 reserved_31[0xf];
+ u8 reserved_at_620[0xf];
u8 cgs[0x1];
u8 cs_req[0x8];
u8 cs_res[0x8];
u8 dc_access_key[0x40];
- u8 reserved_32[0xc0];
+ u8 reserved_at_680[0xc0];
};
struct mlx5_ifc_roce_addr_layout_bits {
u8 source_l3_address[16][0x8];
- u8 reserved_0[0x3];
+ u8 reserved_at_80[0x3];
u8 vlan_valid[0x1];
u8 vlan_id[0xc];
u8 source_mac_47_32[0x10];
u8 source_mac_31_0[0x20];
- u8 reserved_1[0x14];
+ u8 reserved_at_c0[0x14];
u8 roce_l3_type[0x4];
u8 roce_version[0x8];
- u8 reserved_2[0x20];
+ u8 reserved_at_e0[0x20];
};
union mlx5_ifc_hca_cap_union_bits {
@@ -1904,7 +1904,7 @@
struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap;
struct mlx5_ifc_flow_table_eswitch_cap_bits flow_table_eswitch_cap;
struct mlx5_ifc_e_switch_cap_bits e_switch_cap;
- u8 reserved_0[0x8000];
+ u8 reserved_at_0[0x8000];
};
enum {
@@ -1914,24 +1914,24 @@
};
struct mlx5_ifc_flow_context_bits {
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
u8 group_id[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 flow_tag[0x18];
- u8 reserved_2[0x10];
+ u8 reserved_at_60[0x10];
u8 action[0x10];
- u8 reserved_3[0x8];
+ u8 reserved_at_80[0x8];
u8 destination_list_size[0x18];
- u8 reserved_4[0x160];
+ u8 reserved_at_a0[0x160];
struct mlx5_ifc_fte_match_param_bits match_value;
- u8 reserved_5[0x600];
+ u8 reserved_at_1200[0x600];
struct mlx5_ifc_dest_format_struct_bits destination[0];
};
@@ -1944,43 +1944,43 @@
struct mlx5_ifc_xrc_srqc_bits {
u8 state[0x4];
u8 log_xrc_srq_size[0x4];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 wq_signature[0x1];
u8 cont_srq[0x1];
- u8 reserved_1[0x1];
+ u8 reserved_at_22[0x1];
u8 rlky[0x1];
u8 basic_cyclic_rcv_wqe[0x1];
u8 log_rq_stride[0x3];
u8 xrcd[0x18];
u8 page_offset[0x6];
- u8 reserved_2[0x2];
+ u8 reserved_at_46[0x2];
u8 cqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 user_index_equal_xrc_srqn[0x1];
- u8 reserved_4[0x1];
+ u8 reserved_at_81[0x1];
u8 log_page_size[0x6];
u8 user_index[0x18];
- u8 reserved_5[0x20];
+ u8 reserved_at_a0[0x20];
- u8 reserved_6[0x8];
+ u8 reserved_at_c0[0x8];
u8 pd[0x18];
u8 lwm[0x10];
u8 wqe_cnt[0x10];
- u8 reserved_7[0x40];
+ u8 reserved_at_100[0x40];
u8 db_record_addr_h[0x20];
u8 db_record_addr_l[0x1e];
- u8 reserved_8[0x2];
+ u8 reserved_at_17e[0x2];
- u8 reserved_9[0x80];
+ u8 reserved_at_180[0x80];
};
struct mlx5_ifc_traffic_counter_bits {
@@ -1990,16 +1990,16 @@
};
struct mlx5_ifc_tisc_bits {
- u8 reserved_0[0xc];
+ u8 reserved_at_0[0xc];
u8 prio[0x4];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0x100];
+ u8 reserved_at_20[0x100];
- u8 reserved_3[0x8];
+ u8 reserved_at_120[0x8];
u8 transport_domain[0x18];
- u8 reserved_4[0x3c0];
+ u8 reserved_at_140[0x3c0];
};
enum {
@@ -2024,31 +2024,31 @@
};
struct mlx5_ifc_tirc_bits {
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
u8 disp_type[0x4];
- u8 reserved_1[0x1c];
+ u8 reserved_at_24[0x1c];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
- u8 reserved_3[0x4];
+ u8 reserved_at_80[0x4];
u8 lro_timeout_period_usecs[0x10];
u8 lro_enable_mask[0x4];
u8 lro_max_ip_payload_size[0x8];
- u8 reserved_4[0x40];
+ u8 reserved_at_a0[0x40];
- u8 reserved_5[0x8];
+ u8 reserved_at_e0[0x8];
u8 inline_rqn[0x18];
u8 rx_hash_symmetric[0x1];
- u8 reserved_6[0x1];
+ u8 reserved_at_101[0x1];
u8 tunneled_offload_en[0x1];
- u8 reserved_7[0x5];
+ u8 reserved_at_103[0x5];
u8 indirect_table[0x18];
u8 rx_hash_fn[0x4];
- u8 reserved_8[0x2];
+ u8 reserved_at_124[0x2];
u8 self_lb_block[0x2];
u8 transport_domain[0x18];
@@ -2058,7 +2058,7 @@
struct mlx5_ifc_rx_hash_field_select_bits rx_hash_field_selector_inner;
- u8 reserved_9[0x4c0];
+ u8 reserved_at_2c0[0x4c0];
};
enum {
@@ -2069,39 +2069,39 @@
struct mlx5_ifc_srqc_bits {
u8 state[0x4];
u8 log_srq_size[0x4];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 wq_signature[0x1];
u8 cont_srq[0x1];
- u8 reserved_1[0x1];
+ u8 reserved_at_22[0x1];
u8 rlky[0x1];
- u8 reserved_2[0x1];
+ u8 reserved_at_24[0x1];
u8 log_rq_stride[0x3];
u8 xrcd[0x18];
u8 page_offset[0x6];
- u8 reserved_3[0x2];
+ u8 reserved_at_46[0x2];
u8 cqn[0x18];
- u8 reserved_4[0x20];
+ u8 reserved_at_60[0x20];
- u8 reserved_5[0x2];
+ u8 reserved_at_80[0x2];
u8 log_page_size[0x6];
- u8 reserved_6[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_7[0x20];
+ u8 reserved_at_a0[0x20];
- u8 reserved_8[0x8];
+ u8 reserved_at_c0[0x8];
u8 pd[0x18];
u8 lwm[0x10];
u8 wqe_cnt[0x10];
- u8 reserved_9[0x40];
+ u8 reserved_at_100[0x40];
u8 dbr_addr[0x40];
- u8 reserved_10[0x80];
+ u8 reserved_at_180[0x80];
};
enum {
@@ -2115,39 +2115,39 @@
u8 cd_master[0x1];
u8 fre[0x1];
u8 flush_in_error_en[0x1];
- u8 reserved_0[0x4];
+ u8 reserved_at_4[0x4];
u8 state[0x4];
- u8 reserved_1[0x14];
+ u8 reserved_at_c[0x14];
- u8 reserved_2[0x8];
+ u8 reserved_at_20[0x8];
u8 user_index[0x18];
- u8 reserved_3[0x8];
+ u8 reserved_at_40[0x8];
u8 cqn[0x18];
- u8 reserved_4[0xa0];
+ u8 reserved_at_60[0xa0];
u8 tis_lst_sz[0x10];
- u8 reserved_5[0x10];
+ u8 reserved_at_110[0x10];
- u8 reserved_6[0x40];
+ u8 reserved_at_120[0x40];
- u8 reserved_7[0x8];
+ u8 reserved_at_160[0x8];
u8 tis_num_0[0x18];
struct mlx5_ifc_wq_bits wq;
};
struct mlx5_ifc_rqtc_bits {
- u8 reserved_0[0xa0];
+ u8 reserved_at_0[0xa0];
- u8 reserved_1[0x10];
+ u8 reserved_at_a0[0x10];
u8 rqt_max_size[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_c0[0x10];
u8 rqt_actual_size[0x10];
- u8 reserved_3[0x6a0];
+ u8 reserved_at_e0[0x6a0];
struct mlx5_ifc_rq_num_bits rq_num[0];
};
@@ -2165,27 +2165,27 @@
struct mlx5_ifc_rqc_bits {
u8 rlky[0x1];
- u8 reserved_0[0x2];
+ u8 reserved_at_1[0x2];
u8 vsd[0x1];
u8 mem_rq_type[0x4];
u8 state[0x4];
- u8 reserved_1[0x1];
+ u8 reserved_at_c[0x1];
u8 flush_in_error_en[0x1];
- u8 reserved_2[0x12];
+ u8 reserved_at_e[0x12];
- u8 reserved_3[0x8];
+ u8 reserved_at_20[0x8];
u8 user_index[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_40[0x8];
u8 cqn[0x18];
u8 counter_set_id[0x8];
- u8 reserved_5[0x18];
+ u8 reserved_at_68[0x18];
- u8 reserved_6[0x8];
+ u8 reserved_at_80[0x8];
u8 rmpn[0x18];
- u8 reserved_7[0xe0];
+ u8 reserved_at_a0[0xe0];
struct mlx5_ifc_wq_bits wq;
};
@@ -2196,31 +2196,31 @@
};
struct mlx5_ifc_rmpc_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 state[0x4];
- u8 reserved_1[0x14];
+ u8 reserved_at_c[0x14];
u8 basic_cyclic_rcv_wqe[0x1];
- u8 reserved_2[0x1f];
+ u8 reserved_at_21[0x1f];
- u8 reserved_3[0x140];
+ u8 reserved_at_40[0x140];
struct mlx5_ifc_wq_bits wq;
};
struct mlx5_ifc_nic_vport_context_bits {
- u8 reserved_0[0x1f];
+ u8 reserved_at_0[0x1f];
u8 roce_en[0x1];
u8 arm_change_event[0x1];
- u8 reserved_1[0x1a];
+ u8 reserved_at_21[0x1a];
u8 event_on_mtu[0x1];
u8 event_on_promisc_change[0x1];
u8 event_on_vlan_change[0x1];
u8 event_on_mc_address_change[0x1];
u8 event_on_uc_address_change[0x1];
- u8 reserved_2[0xf0];
+ u8 reserved_at_40[0xf0];
u8 mtu[0x10];
@@ -2228,21 +2228,21 @@
u8 port_guid[0x40];
u8 node_guid[0x40];
- u8 reserved_3[0x140];
+ u8 reserved_at_200[0x140];
u8 qkey_violation_counter[0x10];
- u8 reserved_4[0x430];
+ u8 reserved_at_350[0x430];
u8 promisc_uc[0x1];
u8 promisc_mc[0x1];
u8 promisc_all[0x1];
- u8 reserved_5[0x2];
+ u8 reserved_at_783[0x2];
u8 allowed_list_type[0x3];
- u8 reserved_6[0xc];
+ u8 reserved_at_788[0xc];
u8 allowed_list_size[0xc];
struct mlx5_ifc_mac_address_layout_bits permanent_address;
- u8 reserved_7[0x20];
+ u8 reserved_at_7e0[0x20];
u8 current_uc_mac_address[0][0x40];
};
@@ -2254,9 +2254,9 @@
};
struct mlx5_ifc_mkc_bits {
- u8 reserved_0[0x1];
+ u8 reserved_at_0[0x1];
u8 free[0x1];
- u8 reserved_1[0xd];
+ u8 reserved_at_2[0xd];
u8 small_fence_on_rdma_read_response[0x1];
u8 umr_en[0x1];
u8 a[0x1];
@@ -2265,19 +2265,19 @@
u8 lw[0x1];
u8 lr[0x1];
u8 access_mode[0x2];
- u8 reserved_2[0x8];
+ u8 reserved_at_18[0x8];
u8 qpn[0x18];
u8 mkey_7_0[0x8];
- u8 reserved_3[0x20];
+ u8 reserved_at_40[0x20];
u8 length64[0x1];
u8 bsf_en[0x1];
u8 sync_umr[0x1];
- u8 reserved_4[0x2];
+ u8 reserved_at_63[0x2];
u8 expected_sigerr_count[0x1];
- u8 reserved_5[0x1];
+ u8 reserved_at_66[0x1];
u8 en_rinval[0x1];
u8 pd[0x18];
@@ -2287,18 +2287,18 @@
u8 bsf_octword_size[0x20];
- u8 reserved_6[0x80];
+ u8 reserved_at_120[0x80];
u8 translations_octword_size[0x20];
- u8 reserved_7[0x1b];
+ u8 reserved_at_1c0[0x1b];
u8 log_page_size[0x5];
- u8 reserved_8[0x20];
+ u8 reserved_at_1e0[0x20];
};
struct mlx5_ifc_pkey_bits {
- u8 reserved_0[0x10];
+ u8 reserved_at_0[0x10];
u8 pkey[0x10];
};
@@ -2309,19 +2309,19 @@
struct mlx5_ifc_hca_vport_context_bits {
u8 field_select[0x20];
- u8 reserved_0[0xe0];
+ u8 reserved_at_20[0xe0];
u8 sm_virt_aware[0x1];
u8 has_smi[0x1];
u8 has_raw[0x1];
u8 grh_required[0x1];
- u8 reserved_1[0xc];
+ u8 reserved_at_104[0xc];
u8 port_physical_state[0x4];
u8 vport_state_policy[0x4];
u8 port_state[0x4];
u8 vport_state[0x4];
- u8 reserved_2[0x20];
+ u8 reserved_at_120[0x20];
u8 system_image_guid[0x40];
@@ -2337,33 +2337,33 @@
u8 cap_mask2_field_select[0x20];
- u8 reserved_3[0x80];
+ u8 reserved_at_280[0x80];
u8 lid[0x10];
- u8 reserved_4[0x4];
+ u8 reserved_at_310[0x4];
u8 init_type_reply[0x4];
u8 lmc[0x3];
u8 subnet_timeout[0x5];
u8 sm_lid[0x10];
u8 sm_sl[0x4];
- u8 reserved_5[0xc];
+ u8 reserved_at_334[0xc];
u8 qkey_violation_counter[0x10];
u8 pkey_violation_counter[0x10];
- u8 reserved_6[0xca0];
+ u8 reserved_at_360[0xca0];
};
struct mlx5_ifc_esw_vport_context_bits {
- u8 reserved_0[0x3];
+ u8 reserved_at_0[0x3];
u8 vport_svlan_strip[0x1];
u8 vport_cvlan_strip[0x1];
u8 vport_svlan_insert[0x1];
u8 vport_cvlan_insert[0x2];
- u8 reserved_1[0x18];
+ u8 reserved_at_8[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_20[0x20];
u8 svlan_cfi[0x1];
u8 svlan_pcp[0x3];
@@ -2372,7 +2372,7 @@
u8 cvlan_pcp[0x3];
u8 cvlan_id[0xc];
- u8 reserved_3[0x7a0];
+ u8 reserved_at_60[0x7a0];
};
enum {
@@ -2387,41 +2387,41 @@
struct mlx5_ifc_eqc_bits {
u8 status[0x4];
- u8 reserved_0[0x9];
+ u8 reserved_at_4[0x9];
u8 ec[0x1];
u8 oi[0x1];
- u8 reserved_1[0x5];
+ u8 reserved_at_f[0x5];
u8 st[0x4];
- u8 reserved_2[0x8];
+ u8 reserved_at_18[0x8];
- u8 reserved_3[0x20];
+ u8 reserved_at_20[0x20];
- u8 reserved_4[0x14];
+ u8 reserved_at_40[0x14];
u8 page_offset[0x6];
- u8 reserved_5[0x6];
+ u8 reserved_at_5a[0x6];
- u8 reserved_6[0x3];
+ u8 reserved_at_60[0x3];
u8 log_eq_size[0x5];
u8 uar_page[0x18];
- u8 reserved_7[0x20];
+ u8 reserved_at_80[0x20];
- u8 reserved_8[0x18];
+ u8 reserved_at_a0[0x18];
u8 intr[0x8];
- u8 reserved_9[0x3];
+ u8 reserved_at_c0[0x3];
u8 log_page_size[0x5];
- u8 reserved_10[0x18];
+ u8 reserved_at_c8[0x18];
- u8 reserved_11[0x60];
+ u8 reserved_at_e0[0x60];
- u8 reserved_12[0x8];
+ u8 reserved_at_140[0x8];
u8 consumer_counter[0x18];
- u8 reserved_13[0x8];
+ u8 reserved_at_160[0x8];
u8 producer_counter[0x18];
- u8 reserved_14[0x80];
+ u8 reserved_at_180[0x80];
};
enum {
@@ -2445,14 +2445,14 @@
};
struct mlx5_ifc_dctc_bits {
- u8 reserved_0[0x4];
+ u8 reserved_at_0[0x4];
u8 state[0x4];
- u8 reserved_1[0x18];
+ u8 reserved_at_8[0x18];
- u8 reserved_2[0x8];
+ u8 reserved_at_20[0x8];
u8 user_index[0x18];
- u8 reserved_3[0x8];
+ u8 reserved_at_40[0x8];
u8 cqn[0x18];
u8 counter_set_id[0x8];
@@ -2464,45 +2464,45 @@
u8 latency_sensitive[0x1];
u8 rlky[0x1];
u8 free_ar[0x1];
- u8 reserved_4[0xd];
+ u8 reserved_at_73[0xd];
- u8 reserved_5[0x8];
+ u8 reserved_at_80[0x8];
u8 cs_res[0x8];
- u8 reserved_6[0x3];
+ u8 reserved_at_90[0x3];
u8 min_rnr_nak[0x5];
- u8 reserved_7[0x8];
+ u8 reserved_at_98[0x8];
- u8 reserved_8[0x8];
+ u8 reserved_at_a0[0x8];
u8 srqn[0x18];
- u8 reserved_9[0x8];
+ u8 reserved_at_c0[0x8];
u8 pd[0x18];
u8 tclass[0x8];
- u8 reserved_10[0x4];
+ u8 reserved_at_e8[0x4];
u8 flow_label[0x14];
u8 dc_access_key[0x40];
- u8 reserved_11[0x5];
+ u8 reserved_at_140[0x5];
u8 mtu[0x3];
u8 port[0x8];
u8 pkey_index[0x10];
- u8 reserved_12[0x8];
+ u8 reserved_at_160[0x8];
u8 my_addr_index[0x8];
- u8 reserved_13[0x8];
+ u8 reserved_at_170[0x8];
u8 hop_limit[0x8];
u8 dc_access_key_violation_count[0x20];
- u8 reserved_14[0x14];
+ u8 reserved_at_1a0[0x14];
u8 dei_cfi[0x1];
u8 eth_prio[0x3];
u8 ecn[0x2];
u8 dscp[0x6];
- u8 reserved_15[0x40];
+ u8 reserved_at_1c0[0x40];
};
enum {
@@ -2524,54 +2524,54 @@
struct mlx5_ifc_cqc_bits {
u8 status[0x4];
- u8 reserved_0[0x4];
+ u8 reserved_at_4[0x4];
u8 cqe_sz[0x3];
u8 cc[0x1];
- u8 reserved_1[0x1];
+ u8 reserved_at_c[0x1];
u8 scqe_break_moderation_en[0x1];
u8 oi[0x1];
- u8 reserved_2[0x2];
+ u8 reserved_at_f[0x2];
u8 cqe_zip_en[0x1];
u8 mini_cqe_res_format[0x2];
u8 st[0x4];
- u8 reserved_3[0x8];
+ u8 reserved_at_18[0x8];
- u8 reserved_4[0x20];
+ u8 reserved_at_20[0x20];
- u8 reserved_5[0x14];
+ u8 reserved_at_40[0x14];
u8 page_offset[0x6];
- u8 reserved_6[0x6];
+ u8 reserved_at_5a[0x6];
- u8 reserved_7[0x3];
+ u8 reserved_at_60[0x3];
u8 log_cq_size[0x5];
u8 uar_page[0x18];
- u8 reserved_8[0x4];
+ u8 reserved_at_80[0x4];
u8 cq_period[0xc];
u8 cq_max_count[0x10];
- u8 reserved_9[0x18];
+ u8 reserved_at_a0[0x18];
u8 c_eqn[0x8];
- u8 reserved_10[0x3];
+ u8 reserved_at_c0[0x3];
u8 log_page_size[0x5];
- u8 reserved_11[0x18];
+ u8 reserved_at_c8[0x18];
- u8 reserved_12[0x20];
+ u8 reserved_at_e0[0x20];
- u8 reserved_13[0x8];
+ u8 reserved_at_100[0x8];
u8 last_notified_index[0x18];
- u8 reserved_14[0x8];
+ u8 reserved_at_120[0x8];
u8 last_solicit_index[0x18];
- u8 reserved_15[0x8];
+ u8 reserved_at_140[0x8];
u8 consumer_counter[0x18];
- u8 reserved_16[0x8];
+ u8 reserved_at_160[0x8];
u8 producer_counter[0x18];
- u8 reserved_17[0x40];
+ u8 reserved_at_180[0x40];
u8 dbr_addr[0x40];
};
@@ -2580,16 +2580,16 @@
struct mlx5_ifc_cong_control_802_1qau_rp_bits cong_control_802_1qau_rp;
struct mlx5_ifc_cong_control_r_roce_ecn_rp_bits cong_control_r_roce_ecn_rp;
struct mlx5_ifc_cong_control_r_roce_ecn_np_bits cong_control_r_roce_ecn_np;
- u8 reserved_0[0x800];
+ u8 reserved_at_0[0x800];
};
struct mlx5_ifc_query_adapter_param_block_bits {
- u8 reserved_0[0xc0];
+ u8 reserved_at_0[0xc0];
- u8 reserved_1[0x8];
+ u8 reserved_at_c0[0x8];
u8 ieee_vendor_id[0x18];
- u8 reserved_2[0x10];
+ u8 reserved_at_e0[0x10];
u8 vsd_vendor_id[0x10];
u8 vsd[208][0x8];
@@ -2600,14 +2600,14 @@
union mlx5_ifc_modify_field_select_resize_field_select_auto_bits {
struct mlx5_ifc_modify_field_select_bits modify_field_select;
struct mlx5_ifc_resize_field_select_bits resize_field_select;
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
};
union mlx5_ifc_field_select_802_1_r_roce_auto_bits {
struct mlx5_ifc_field_select_802_1qau_rp_bits field_select_802_1qau_rp;
struct mlx5_ifc_field_select_r_roce_rp_bits field_select_r_roce_rp;
struct mlx5_ifc_field_select_r_roce_np_bits field_select_r_roce_np;
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
};
union mlx5_ifc_eth_cntrs_grp_data_layout_auto_bits {
@@ -2619,7 +2619,7 @@
struct mlx5_ifc_eth_per_prio_grp_data_layout_bits eth_per_prio_grp_data_layout;
struct mlx5_ifc_eth_per_traffic_grp_data_layout_bits eth_per_traffic_grp_data_layout;
struct mlx5_ifc_phys_layer_cntrs_bits phys_layer_cntrs;
- u8 reserved_0[0x7c0];
+ u8 reserved_at_0[0x7c0];
};
union mlx5_ifc_event_auto_bits {
@@ -2635,23 +2635,23 @@
struct mlx5_ifc_db_bf_congestion_event_bits db_bf_congestion_event;
struct mlx5_ifc_stall_vl_event_bits stall_vl_event;
struct mlx5_ifc_cmd_inter_comp_event_bits cmd_inter_comp_event;
- u8 reserved_0[0xe0];
+ u8 reserved_at_0[0xe0];
};
struct mlx5_ifc_health_buffer_bits {
- u8 reserved_0[0x100];
+ u8 reserved_at_0[0x100];
u8 assert_existptr[0x20];
u8 assert_callra[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_140[0x40];
u8 fw_version[0x20];
u8 hw_id[0x20];
- u8 reserved_2[0x20];
+ u8 reserved_at_1c0[0x20];
u8 irisc_index[0x8];
u8 synd[0x8];
@@ -2660,20 +2660,20 @@
struct mlx5_ifc_register_loopback_control_bits {
u8 no_lb[0x1];
- u8 reserved_0[0x7];
+ u8 reserved_at_1[0x7];
u8 port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0x60];
+ u8 reserved_at_20[0x60];
};
struct mlx5_ifc_teardown_hca_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
enum {
@@ -2683,108 +2683,108 @@
struct mlx5_ifc_teardown_hca_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_40[0x10];
u8 profile[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_sqerr2rts_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_sqerr2rts_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 opt_param_mask[0x20];
- u8 reserved_4[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_5[0x80];
+ u8 reserved_at_800[0x80];
};
struct mlx5_ifc_sqd2rts_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_sqd2rts_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 opt_param_mask[0x20];
- u8 reserved_4[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_5[0x80];
+ u8 reserved_at_800[0x80];
};
struct mlx5_ifc_set_roce_address_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_set_roce_address_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 roce_address_index[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_50[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
struct mlx5_ifc_roce_addr_layout_bits roce_address;
};
struct mlx5_ifc_set_mad_demux_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
enum {
@@ -2794,89 +2794,89 @@
struct mlx5_ifc_set_mad_demux_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x20];
+ u8 reserved_at_40[0x20];
- u8 reserved_3[0x6];
+ u8 reserved_at_60[0x6];
u8 demux_mode[0x2];
- u8 reserved_4[0x18];
+ u8 reserved_at_68[0x18];
};
struct mlx5_ifc_set_l2_table_entry_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_set_l2_table_entry_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x60];
+ u8 reserved_at_40[0x60];
- u8 reserved_3[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_index[0x18];
- u8 reserved_4[0x20];
+ u8 reserved_at_c0[0x20];
- u8 reserved_5[0x13];
+ u8 reserved_at_e0[0x13];
u8 vlan_valid[0x1];
u8 vlan[0xc];
struct mlx5_ifc_mac_address_layout_bits mac_address;
- u8 reserved_6[0xc0];
+ u8 reserved_at_140[0xc0];
};
struct mlx5_ifc_set_issi_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_set_issi_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_40[0x10];
u8 current_issi[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_set_hca_cap_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_set_hca_cap_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
union mlx5_ifc_hca_cap_union_bits capability;
};
@@ -2890,156 +2890,156 @@
struct mlx5_ifc_set_fte_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_set_fte_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
- u8 reserved_5[0x18];
+ u8 reserved_at_c0[0x18];
u8 modify_enable_mask[0x8];
- u8 reserved_6[0x20];
+ u8 reserved_at_e0[0x20];
u8 flow_index[0x20];
- u8 reserved_7[0xe0];
+ u8 reserved_at_120[0xe0];
struct mlx5_ifc_flow_context_bits flow_context;
};
struct mlx5_ifc_rts2rts_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_rts2rts_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 opt_param_mask[0x20];
- u8 reserved_4[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_5[0x80];
+ u8 reserved_at_800[0x80];
};
struct mlx5_ifc_rtr2rts_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_rtr2rts_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 opt_param_mask[0x20];
- u8 reserved_4[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_5[0x80];
+ u8 reserved_at_800[0x80];
};
struct mlx5_ifc_rst2init_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_rst2init_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 opt_param_mask[0x20];
- u8 reserved_4[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_5[0x80];
+ u8 reserved_at_800[0x80];
};
struct mlx5_ifc_query_xrc_srq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_xrc_srqc_bits xrc_srq_context_entry;
- u8 reserved_2[0x600];
+ u8 reserved_at_280[0x600];
u8 pas[0][0x40];
};
struct mlx5_ifc_query_xrc_srq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 xrc_srqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
enum {
@@ -3049,13 +3049,13 @@
struct mlx5_ifc_query_vport_state_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x20];
+ u8 reserved_at_40[0x20];
- u8 reserved_2[0x18];
+ u8 reserved_at_60[0x18];
u8 admin_state[0x4];
u8 state[0x4];
};
@@ -3067,25 +3067,25 @@
struct mlx5_ifc_query_vport_state_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xf];
+ u8 reserved_at_41[0xf];
u8 vport_number[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_vport_counter_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_traffic_counter_bits received_errors;
@@ -3111,7 +3111,7 @@
struct mlx5_ifc_traffic_counter_bits transmitted_eth_multicast;
- u8 reserved_2[0xa00];
+ u8 reserved_at_680[0xa00];
};
enum {
@@ -3120,328 +3120,328 @@
struct mlx5_ifc_query_vport_counter_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xf];
+ u8 reserved_at_41[0xf];
u8 vport_number[0x10];
- u8 reserved_3[0x60];
+ u8 reserved_at_60[0x60];
u8 clear[0x1];
- u8 reserved_4[0x1f];
+ u8 reserved_at_c1[0x1f];
- u8 reserved_5[0x20];
+ u8 reserved_at_e0[0x20];
};
struct mlx5_ifc_query_tis_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_tisc_bits tis_context;
};
struct mlx5_ifc_query_tis_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 tisn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_tir_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_tirc_bits tir_context;
};
struct mlx5_ifc_query_tir_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 tirn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_srq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_srqc_bits srq_context_entry;
- u8 reserved_2[0x600];
+ u8 reserved_at_280[0x600];
u8 pas[0][0x40];
};
struct mlx5_ifc_query_srq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 srqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_sq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_sqc_bits sq_context;
};
struct mlx5_ifc_query_sq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 sqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_special_contexts_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x20];
+ u8 reserved_at_40[0x20];
u8 resd_lkey[0x20];
};
struct mlx5_ifc_query_special_contexts_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_query_rqt_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_rqtc_bits rqt_context;
};
struct mlx5_ifc_query_rqt_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 rqtn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_rq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_rqc_bits rq_context;
};
struct mlx5_ifc_query_rq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 rqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_roce_address_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_roce_addr_layout_bits roce_address;
};
struct mlx5_ifc_query_roce_address_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 roce_address_index[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_50[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_rmp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_rmpc_bits rmp_context;
};
struct mlx5_ifc_query_rmp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 rmpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
u8 opt_param_mask[0x20];
- u8 reserved_2[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_3[0x80];
+ u8 reserved_at_800[0x80];
u8 pas[0][0x40];
};
struct mlx5_ifc_query_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_q_counter_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
u8 rx_write_requests[0x20];
- u8 reserved_2[0x20];
+ u8 reserved_at_a0[0x20];
u8 rx_read_requests[0x20];
- u8 reserved_3[0x20];
+ u8 reserved_at_e0[0x20];
u8 rx_atomic_requests[0x20];
- u8 reserved_4[0x20];
+ u8 reserved_at_120[0x20];
u8 rx_dct_connect[0x20];
- u8 reserved_5[0x20];
+ u8 reserved_at_160[0x20];
u8 out_of_buffer[0x20];
- u8 reserved_6[0x20];
+ u8 reserved_at_1a0[0x20];
u8 out_of_sequence[0x20];
- u8 reserved_7[0x620];
+ u8 reserved_at_1e0[0x620];
};
struct mlx5_ifc_query_q_counter_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x80];
+ u8 reserved_at_40[0x80];
u8 clear[0x1];
- u8 reserved_3[0x1f];
+ u8 reserved_at_c1[0x1f];
- u8 reserved_4[0x18];
+ u8 reserved_at_e0[0x18];
u8 counter_set_id[0x8];
};
struct mlx5_ifc_query_pages_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x10];
+ u8 reserved_at_40[0x10];
u8 function_id[0x10];
u8 num_pages[0x20];
@@ -3455,55 +3455,55 @@
struct mlx5_ifc_query_pages_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_40[0x10];
u8 function_id[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_nic_vport_context_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_nic_vport_context_bits nic_vport_context;
};
struct mlx5_ifc_query_nic_vport_context_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xf];
+ u8 reserved_at_41[0xf];
u8 vport_number[0x10];
- u8 reserved_3[0x5];
+ u8 reserved_at_60[0x5];
u8 allowed_list_type[0x3];
- u8 reserved_4[0x18];
+ u8 reserved_at_68[0x18];
};
struct mlx5_ifc_query_mkey_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_mkc_bits memory_key_mkey_entry;
- u8 reserved_2[0x600];
+ u8 reserved_at_280[0x600];
u8 bsf0_klm0_pas_mtt0_1[16][0x8];
@@ -3512,265 +3512,265 @@
struct mlx5_ifc_query_mkey_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 mkey_index[0x18];
u8 pg_access[0x1];
- u8 reserved_3[0x1f];
+ u8 reserved_at_61[0x1f];
};
struct mlx5_ifc_query_mad_demux_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
u8 mad_dumux_parameters_block[0x20];
};
struct mlx5_ifc_query_mad_demux_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_query_l2_table_entry_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0xa0];
+ u8 reserved_at_40[0xa0];
- u8 reserved_2[0x13];
+ u8 reserved_at_e0[0x13];
u8 vlan_valid[0x1];
u8 vlan[0xc];
struct mlx5_ifc_mac_address_layout_bits mac_address;
- u8 reserved_3[0xc0];
+ u8 reserved_at_140[0xc0];
};
struct mlx5_ifc_query_l2_table_entry_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x60];
+ u8 reserved_at_40[0x60];
- u8 reserved_3[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_index[0x18];
- u8 reserved_4[0x140];
+ u8 reserved_at_c0[0x140];
};
struct mlx5_ifc_query_issi_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x10];
+ u8 reserved_at_40[0x10];
u8 current_issi[0x10];
- u8 reserved_2[0xa0];
+ u8 reserved_at_60[0xa0];
- u8 supported_issi_reserved[76][0x8];
+ u8 reserved_at_100[76][0x8];
u8 supported_issi_dw0[0x20];
};
struct mlx5_ifc_query_issi_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_query_hca_vport_pkey_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_pkey_bits pkey[0];
};
struct mlx5_ifc_query_hca_vport_pkey_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xb];
+ u8 reserved_at_41[0xb];
u8 port_num[0x4];
u8 vport_number[0x10];
- u8 reserved_3[0x10];
+ u8 reserved_at_60[0x10];
u8 pkey_index[0x10];
};
struct mlx5_ifc_query_hca_vport_gid_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x20];
+ u8 reserved_at_40[0x20];
u8 gids_num[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_70[0x10];
struct mlx5_ifc_array128_auto_bits gid[0];
};
struct mlx5_ifc_query_hca_vport_gid_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xb];
+ u8 reserved_at_41[0xb];
u8 port_num[0x4];
u8 vport_number[0x10];
- u8 reserved_3[0x10];
+ u8 reserved_at_60[0x10];
u8 gid_index[0x10];
};
struct mlx5_ifc_query_hca_vport_context_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_hca_vport_context_bits hca_vport_context;
};
struct mlx5_ifc_query_hca_vport_context_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xb];
+ u8 reserved_at_41[0xb];
u8 port_num[0x4];
u8 vport_number[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_hca_cap_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
union mlx5_ifc_hca_cap_union_bits capability;
};
struct mlx5_ifc_query_hca_cap_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_query_flow_table_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x80];
+ u8 reserved_at_40[0x80];
- u8 reserved_2[0x8];
+ u8 reserved_at_c0[0x8];
u8 level[0x8];
- u8 reserved_3[0x8];
+ u8 reserved_at_d0[0x8];
u8 log_size[0x8];
- u8 reserved_4[0x120];
+ u8 reserved_at_e0[0x120];
};
struct mlx5_ifc_query_flow_table_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
- u8 reserved_5[0x140];
+ u8 reserved_at_c0[0x140];
};
struct mlx5_ifc_query_fte_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x1c0];
+ u8 reserved_at_40[0x1c0];
struct mlx5_ifc_flow_context_bits flow_context;
};
struct mlx5_ifc_query_fte_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
- u8 reserved_5[0x40];
+ u8 reserved_at_c0[0x40];
u8 flow_index[0x20];
- u8 reserved_6[0xe0];
+ u8 reserved_at_120[0xe0];
};
enum {
@@ -3781,84 +3781,84 @@
struct mlx5_ifc_query_flow_group_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0xa0];
+ u8 reserved_at_40[0xa0];
u8 start_flow_index[0x20];
- u8 reserved_2[0x20];
+ u8 reserved_at_100[0x20];
u8 end_flow_index[0x20];
- u8 reserved_3[0xa0];
+ u8 reserved_at_140[0xa0];
- u8 reserved_4[0x18];
+ u8 reserved_at_1e0[0x18];
u8 match_criteria_enable[0x8];
struct mlx5_ifc_fte_match_param_bits match_criteria;
- u8 reserved_5[0xe00];
+ u8 reserved_at_1200[0xe00];
};
struct mlx5_ifc_query_flow_group_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
u8 group_id[0x20];
- u8 reserved_5[0x120];
+ u8 reserved_at_e0[0x120];
};
struct mlx5_ifc_query_esw_vport_context_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_esw_vport_context_bits esw_vport_context;
};
struct mlx5_ifc_query_esw_vport_context_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xf];
+ u8 reserved_at_41[0xf];
u8 vport_number[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_modify_esw_vport_context_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_esw_vport_context_fields_select_bits {
- u8 reserved[0x1c];
+ u8 reserved_at_0[0x1c];
u8 vport_cvlan_insert[0x1];
u8 vport_svlan_insert[0x1];
u8 vport_cvlan_strip[0x1];
@@ -3867,13 +3867,13 @@
struct mlx5_ifc_modify_esw_vport_context_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xf];
+ u8 reserved_at_41[0xf];
u8 vport_number[0x10];
struct mlx5_ifc_esw_vport_context_fields_select_bits field_select;
@@ -3883,124 +3883,124 @@
struct mlx5_ifc_query_eq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_eqc_bits eq_context_entry;
- u8 reserved_2[0x40];
+ u8 reserved_at_280[0x40];
u8 event_bitmask[0x40];
- u8 reserved_3[0x580];
+ u8 reserved_at_300[0x580];
u8 pas[0][0x40];
};
struct mlx5_ifc_query_eq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x18];
+ u8 reserved_at_40[0x18];
u8 eq_number[0x8];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_dct_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_dctc_bits dct_context_entry;
- u8 reserved_2[0x180];
+ u8 reserved_at_280[0x180];
};
struct mlx5_ifc_query_dct_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 dctn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_cq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_cqc_bits cq_context;
- u8 reserved_2[0x600];
+ u8 reserved_at_280[0x600];
u8 pas[0][0x40];
};
struct mlx5_ifc_query_cq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 cqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_cong_status_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x20];
+ u8 reserved_at_40[0x20];
u8 enable[0x1];
u8 tag_enable[0x1];
- u8 reserved_2[0x1e];
+ u8 reserved_at_62[0x1e];
};
struct mlx5_ifc_query_cong_status_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x18];
+ u8 reserved_at_40[0x18];
u8 priority[0x4];
u8 cong_protocol[0x4];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_cong_statistics_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
u8 cur_flows[0x20];
@@ -4014,7 +4014,7 @@
u8 cnp_handled_low[0x20];
- u8 reserved_2[0x100];
+ u8 reserved_at_140[0x100];
u8 time_stamp_high[0x20];
@@ -4030,453 +4030,453 @@
u8 cnps_sent_low[0x20];
- u8 reserved_3[0x560];
+ u8 reserved_at_320[0x560];
};
struct mlx5_ifc_query_cong_statistics_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 clear[0x1];
- u8 reserved_2[0x1f];
+ u8 reserved_at_41[0x1f];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_cong_params_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
union mlx5_ifc_cong_control_roce_ecn_auto_bits congestion_parameters;
};
struct mlx5_ifc_query_cong_params_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x1c];
+ u8 reserved_at_40[0x1c];
u8 cong_protocol[0x4];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_query_adapter_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_query_adapter_param_block_bits query_adapter_struct;
};
struct mlx5_ifc_query_adapter_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_qp_2rst_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_qp_2rst_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_qp_2err_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_qp_2err_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_page_fault_resume_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_page_fault_resume_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 error[0x1];
- u8 reserved_2[0x4];
+ u8 reserved_at_41[0x4];
u8 rdma[0x1];
u8 read_write[0x1];
u8 req_res[0x1];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_nop_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_nop_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_vport_state_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_vport_state_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xf];
+ u8 reserved_at_41[0xf];
u8 vport_number[0x10];
- u8 reserved_3[0x18];
+ u8 reserved_at_60[0x18];
u8 admin_state[0x4];
- u8 reserved_4[0x4];
+ u8 reserved_at_7c[0x4];
};
struct mlx5_ifc_modify_tis_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_tis_bitmask_bits {
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
- u8 reserved_1[0x1f];
+ u8 reserved_at_20[0x1f];
u8 prio[0x1];
};
struct mlx5_ifc_modify_tis_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 tisn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
struct mlx5_ifc_modify_tis_bitmask_bits bitmask;
- u8 reserved_4[0x40];
+ u8 reserved_at_c0[0x40];
struct mlx5_ifc_tisc_bits ctx;
};
struct mlx5_ifc_modify_tir_bitmask_bits {
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
- u8 reserved_1[0x1b];
+ u8 reserved_at_20[0x1b];
u8 self_lb_en[0x1];
- u8 reserved_2[0x3];
+ u8 reserved_at_3c[0x3];
u8 lro[0x1];
};
struct mlx5_ifc_modify_tir_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_tir_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 tirn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
struct mlx5_ifc_modify_tir_bitmask_bits bitmask;
- u8 reserved_4[0x40];
+ u8 reserved_at_c0[0x40];
struct mlx5_ifc_tirc_bits ctx;
};
struct mlx5_ifc_modify_sq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_sq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 sq_state[0x4];
- u8 reserved_2[0x4];
+ u8 reserved_at_44[0x4];
u8 sqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 modify_bitmask[0x40];
- u8 reserved_4[0x40];
+ u8 reserved_at_c0[0x40];
struct mlx5_ifc_sqc_bits ctx;
};
struct mlx5_ifc_modify_rqt_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_rqt_bitmask_bits {
- u8 reserved[0x20];
+ u8 reserved_at_0[0x20];
- u8 reserved1[0x1f];
+ u8 reserved_at_20[0x1f];
u8 rqn_list[0x1];
};
struct mlx5_ifc_modify_rqt_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 rqtn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
struct mlx5_ifc_rqt_bitmask_bits bitmask;
- u8 reserved_4[0x40];
+ u8 reserved_at_c0[0x40];
struct mlx5_ifc_rqtc_bits ctx;
};
struct mlx5_ifc_modify_rq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_rq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 rq_state[0x4];
- u8 reserved_2[0x4];
+ u8 reserved_at_44[0x4];
u8 rqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 modify_bitmask[0x40];
- u8 reserved_4[0x40];
+ u8 reserved_at_c0[0x40];
struct mlx5_ifc_rqc_bits ctx;
};
struct mlx5_ifc_modify_rmp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_rmp_bitmask_bits {
- u8 reserved[0x20];
+ u8 reserved_at_0[0x20];
- u8 reserved1[0x1f];
+ u8 reserved_at_20[0x1f];
u8 lwm[0x1];
};
struct mlx5_ifc_modify_rmp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 rmp_state[0x4];
- u8 reserved_2[0x4];
+ u8 reserved_at_44[0x4];
u8 rmpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
struct mlx5_ifc_rmp_bitmask_bits bitmask;
- u8 reserved_4[0x40];
+ u8 reserved_at_c0[0x40];
struct mlx5_ifc_rmpc_bits ctx;
};
struct mlx5_ifc_modify_nic_vport_context_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_nic_vport_field_select_bits {
- u8 reserved_0[0x19];
+ u8 reserved_at_0[0x19];
u8 mtu[0x1];
u8 change_event[0x1];
u8 promisc[0x1];
u8 permanent_address[0x1];
u8 addresses_list[0x1];
u8 roce_en[0x1];
- u8 reserved_1[0x1];
+ u8 reserved_at_1f[0x1];
};
struct mlx5_ifc_modify_nic_vport_context_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xf];
+ u8 reserved_at_41[0xf];
u8 vport_number[0x10];
struct mlx5_ifc_modify_nic_vport_field_select_bits field_select;
- u8 reserved_3[0x780];
+ u8 reserved_at_80[0x780];
struct mlx5_ifc_nic_vport_context_bits nic_vport_context;
};
struct mlx5_ifc_modify_hca_vport_context_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_hca_vport_context_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 other_vport[0x1];
- u8 reserved_2[0xb];
+ u8 reserved_at_41[0xb];
u8 port_num[0x4];
u8 vport_number[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
struct mlx5_ifc_hca_vport_context_bits hca_vport_context;
};
struct mlx5_ifc_modify_cq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
enum {
@@ -4486,83 +4486,83 @@
struct mlx5_ifc_modify_cq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 cqn[0x18];
union mlx5_ifc_modify_field_select_resize_field_select_auto_bits modify_field_select_resize_field_select;
struct mlx5_ifc_cqc_bits cq_context;
- u8 reserved_3[0x600];
+ u8 reserved_at_280[0x600];
u8 pas[0][0x40];
};
struct mlx5_ifc_modify_cong_status_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_cong_status_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x18];
+ u8 reserved_at_40[0x18];
u8 priority[0x4];
u8 cong_protocol[0x4];
u8 enable[0x1];
u8 tag_enable[0x1];
- u8 reserved_3[0x1e];
+ u8 reserved_at_62[0x1e];
};
struct mlx5_ifc_modify_cong_params_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_cong_params_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x1c];
+ u8 reserved_at_40[0x1c];
u8 cong_protocol[0x4];
union mlx5_ifc_field_select_802_1_r_roce_auto_bits field_select;
- u8 reserved_3[0x80];
+ u8 reserved_at_80[0x80];
union mlx5_ifc_cong_control_roce_ecn_auto_bits congestion_parameters;
};
struct mlx5_ifc_manage_pages_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
u8 output_num_entries[0x20];
- u8 reserved_1[0x20];
+ u8 reserved_at_60[0x20];
u8 pas[0][0x40];
};
@@ -4575,12 +4575,12 @@
struct mlx5_ifc_manage_pages_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_40[0x10];
u8 function_id[0x10];
u8 input_num_entries[0x20];
@@ -4590,117 +4590,117 @@
struct mlx5_ifc_mad_ifc_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
u8 response_mad_packet[256][0x8];
};
struct mlx5_ifc_mad_ifc_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 remote_lid[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_50[0x8];
u8 port[0x8];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 mad[256][0x8];
};
struct mlx5_ifc_init_hca_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_init_hca_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_init2rtr_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_init2rtr_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 opt_param_mask[0x20];
- u8 reserved_4[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_5[0x80];
+ u8 reserved_at_800[0x80];
};
struct mlx5_ifc_init2init_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_init2init_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 opt_param_mask[0x20];
- u8 reserved_4[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_5[0x80];
+ u8 reserved_at_800[0x80];
};
struct mlx5_ifc_get_dropped_packet_log_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
u8 packet_headers_log[128][0x8];
@@ -4709,1029 +4709,1029 @@
struct mlx5_ifc_get_dropped_packet_log_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_gen_eqe_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x18];
+ u8 reserved_at_40[0x18];
u8 eq_number[0x8];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 eqe[64][0x8];
};
struct mlx5_ifc_gen_eq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_enable_hca_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x20];
+ u8 reserved_at_40[0x20];
};
struct mlx5_ifc_enable_hca_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_40[0x10];
u8 function_id[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_drain_dct_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_drain_dct_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 dctn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_disable_hca_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x20];
+ u8 reserved_at_40[0x20];
};
struct mlx5_ifc_disable_hca_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_40[0x10];
u8 function_id[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_detach_from_mcg_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_detach_from_mcg_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 multicast_gid[16][0x8];
};
struct mlx5_ifc_destroy_xrc_srq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_xrc_srq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 xrc_srqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_tis_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_tis_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 tisn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_tir_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_tir_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 tirn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_srq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_srq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 srqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_sq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_sq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 sqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_rqt_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_rqt_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 rqtn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_rq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_rq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 rqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_rmp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_rmp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 rmpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_psv_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_psv_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 psvn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_mkey_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_mkey_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 mkey_index[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_flow_table_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_flow_table_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
- u8 reserved_5[0x140];
+ u8 reserved_at_c0[0x140];
};
struct mlx5_ifc_destroy_flow_group_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_flow_group_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
u8 group_id[0x20];
- u8 reserved_5[0x120];
+ u8 reserved_at_e0[0x120];
};
struct mlx5_ifc_destroy_eq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_eq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x18];
+ u8 reserved_at_40[0x18];
u8 eq_number[0x8];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_dct_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_dct_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 dctn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_destroy_cq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_destroy_cq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 cqn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_delete_vxlan_udp_dport_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_delete_vxlan_udp_dport_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x20];
+ u8 reserved_at_40[0x20];
- u8 reserved_3[0x10];
+ u8 reserved_at_60[0x10];
u8 vxlan_udp_port[0x10];
};
struct mlx5_ifc_delete_l2_table_entry_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_delete_l2_table_entry_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x60];
+ u8 reserved_at_40[0x60];
- u8 reserved_3[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_index[0x18];
- u8 reserved_4[0x140];
+ u8 reserved_at_c0[0x140];
};
struct mlx5_ifc_delete_fte_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_delete_fte_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
- u8 reserved_5[0x40];
+ u8 reserved_at_c0[0x40];
u8 flow_index[0x20];
- u8 reserved_6[0xe0];
+ u8 reserved_at_120[0xe0];
};
struct mlx5_ifc_dealloc_xrcd_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_dealloc_xrcd_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 xrcd[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_dealloc_uar_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_dealloc_uar_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 uar[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_dealloc_transport_domain_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_dealloc_transport_domain_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 transport_domain[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_dealloc_q_counter_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_dealloc_q_counter_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x18];
+ u8 reserved_at_40[0x18];
u8 counter_set_id[0x8];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_dealloc_pd_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_dealloc_pd_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 pd[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_xrc_srq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 xrc_srqn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_xrc_srq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_xrc_srqc_bits xrc_srq_context_entry;
- u8 reserved_3[0x600];
+ u8 reserved_at_280[0x600];
u8 pas[0][0x40];
};
struct mlx5_ifc_create_tis_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 tisn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_tis_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_tisc_bits ctx;
};
struct mlx5_ifc_create_tir_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 tirn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_tir_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_tirc_bits ctx;
};
struct mlx5_ifc_create_srq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 srqn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_srq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_srqc_bits srq_context_entry;
- u8 reserved_3[0x600];
+ u8 reserved_at_280[0x600];
u8 pas[0][0x40];
};
struct mlx5_ifc_create_sq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 sqn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_sq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_sqc_bits ctx;
};
struct mlx5_ifc_create_rqt_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 rqtn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_rqt_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_rqtc_bits rqt_context;
};
struct mlx5_ifc_create_rq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 rqn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_rq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_rqc_bits ctx;
};
struct mlx5_ifc_create_rmp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 rmpn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_rmp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0xc0];
+ u8 reserved_at_40[0xc0];
struct mlx5_ifc_rmpc_bits ctx;
};
struct mlx5_ifc_create_qp_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_qp_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 opt_param_mask[0x20];
- u8 reserved_3[0x20];
+ u8 reserved_at_a0[0x20];
struct mlx5_ifc_qpc_bits qpc;
- u8 reserved_4[0x80];
+ u8 reserved_at_800[0x80];
u8 pas[0][0x40];
};
struct mlx5_ifc_create_psv_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
- u8 reserved_2[0x8];
+ u8 reserved_at_80[0x8];
u8 psv0_index[0x18];
- u8 reserved_3[0x8];
+ u8 reserved_at_a0[0x8];
u8 psv1_index[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_c0[0x8];
u8 psv2_index[0x18];
- u8 reserved_5[0x8];
+ u8 reserved_at_e0[0x8];
u8 psv3_index[0x18];
};
struct mlx5_ifc_create_psv_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 num_psv[0x4];
- u8 reserved_2[0x4];
+ u8 reserved_at_44[0x4];
u8 pd[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_mkey_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 mkey_index[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_mkey_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x20];
+ u8 reserved_at_40[0x20];
u8 pg_access[0x1];
- u8 reserved_3[0x1f];
+ u8 reserved_at_61[0x1f];
struct mlx5_ifc_mkc_bits memory_key_mkey_entry;
- u8 reserved_4[0x80];
+ u8 reserved_at_280[0x80];
u8 translations_octword_actual_size[0x20];
- u8 reserved_5[0x560];
+ u8 reserved_at_320[0x560];
u8 klm_pas_mtt[0][0x20];
};
struct mlx5_ifc_create_flow_table_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 table_id[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_flow_table_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x20];
+ u8 reserved_at_a0[0x20];
- u8 reserved_5[0x4];
+ u8 reserved_at_c0[0x4];
u8 table_miss_mode[0x4];
u8 level[0x8];
- u8 reserved_6[0x8];
+ u8 reserved_at_d0[0x8];
u8 log_size[0x8];
- u8 reserved_7[0x8];
+ u8 reserved_at_e0[0x8];
u8 table_miss_id[0x18];
- u8 reserved_8[0x100];
+ u8 reserved_at_100[0x100];
};
struct mlx5_ifc_create_flow_group_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 group_id[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
enum {
@@ -5742,134 +5742,134 @@
struct mlx5_ifc_create_flow_group_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
- u8 reserved_5[0x20];
+ u8 reserved_at_c0[0x20];
u8 start_flow_index[0x20];
- u8 reserved_6[0x20];
+ u8 reserved_at_100[0x20];
u8 end_flow_index[0x20];
- u8 reserved_7[0xa0];
+ u8 reserved_at_140[0xa0];
- u8 reserved_8[0x18];
+ u8 reserved_at_1e0[0x18];
u8 match_criteria_enable[0x8];
struct mlx5_ifc_fte_match_param_bits match_criteria;
- u8 reserved_9[0xe00];
+ u8 reserved_at_1200[0xe00];
};
struct mlx5_ifc_create_eq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x18];
+ u8 reserved_at_40[0x18];
u8 eq_number[0x8];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_eq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_eqc_bits eq_context_entry;
- u8 reserved_3[0x40];
+ u8 reserved_at_280[0x40];
u8 event_bitmask[0x40];
- u8 reserved_4[0x580];
+ u8 reserved_at_300[0x580];
u8 pas[0][0x40];
};
struct mlx5_ifc_create_dct_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 dctn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_dct_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_dctc_bits dct_context_entry;
- u8 reserved_3[0x180];
+ u8 reserved_at_280[0x180];
};
struct mlx5_ifc_create_cq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 cqn[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_create_cq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
struct mlx5_ifc_cqc_bits cq_context;
- u8 reserved_3[0x600];
+ u8 reserved_at_280[0x600];
u8 pas[0][0x40];
};
struct mlx5_ifc_config_int_moderation_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x4];
+ u8 reserved_at_40[0x4];
u8 min_delay[0xc];
u8 int_vector[0x10];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
enum {
@@ -5879,49 +5879,49 @@
struct mlx5_ifc_config_int_moderation_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x4];
+ u8 reserved_at_40[0x4];
u8 min_delay[0xc];
u8 int_vector[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_attach_to_mcg_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_attach_to_mcg_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 qpn[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
u8 multicast_gid[16][0x8];
};
struct mlx5_ifc_arm_xrc_srq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
enum {
@@ -5930,25 +5930,25 @@
struct mlx5_ifc_arm_xrc_srq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 xrc_srqn[0x18];
- u8 reserved_3[0x10];
+ u8 reserved_at_60[0x10];
u8 lwm[0x10];
};
struct mlx5_ifc_arm_rq_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
enum {
@@ -5957,179 +5957,179 @@
struct mlx5_ifc_arm_rq_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 srq_number[0x18];
- u8 reserved_3[0x10];
+ u8 reserved_at_60[0x10];
u8 lwm[0x10];
};
struct mlx5_ifc_arm_dct_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_arm_dct_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_40[0x8];
u8 dct_number[0x18];
- u8 reserved_3[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_alloc_xrcd_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 xrcd[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_alloc_xrcd_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_alloc_uar_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 uar[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_alloc_uar_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_alloc_transport_domain_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 transport_domain[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_alloc_transport_domain_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_alloc_q_counter_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x18];
+ u8 reserved_at_40[0x18];
u8 counter_set_id[0x8];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_alloc_q_counter_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_alloc_pd_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x8];
+ u8 reserved_at_40[0x8];
u8 pd[0x18];
- u8 reserved_2[0x20];
+ u8 reserved_at_60[0x20];
};
struct mlx5_ifc_alloc_pd_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_add_vxlan_udp_dport_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_add_vxlan_udp_dport_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x20];
+ u8 reserved_at_40[0x20];
- u8 reserved_3[0x10];
+ u8 reserved_at_60[0x10];
u8 vxlan_udp_port[0x10];
};
struct mlx5_ifc_access_register_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
u8 register_data[0][0x20];
};
@@ -6141,12 +6141,12 @@
struct mlx5_ifc_access_register_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_40[0x10];
u8 register_id[0x10];
u8 argument[0x20];
@@ -6159,24 +6159,24 @@
u8 version[0x4];
u8 local_port[0x8];
u8 pnat[0x2];
- u8 reserved_0[0x2];
+ u8 reserved_at_12[0x2];
u8 lane[0x4];
- u8 reserved_1[0x8];
+ u8 reserved_at_18[0x8];
- u8 reserved_2[0x20];
+ u8 reserved_at_20[0x20];
- u8 reserved_3[0x7];
+ u8 reserved_at_40[0x7];
u8 polarity[0x1];
u8 ob_tap0[0x8];
u8 ob_tap1[0x8];
u8 ob_tap2[0x8];
- u8 reserved_4[0xc];
+ u8 reserved_at_60[0xc];
u8 ob_preemp_mode[0x4];
u8 ob_reg[0x8];
u8 ob_bias[0x8];
- u8 reserved_5[0x20];
+ u8 reserved_at_80[0x20];
};
struct mlx5_ifc_slrg_reg_bits {
@@ -6184,36 +6184,36 @@
u8 version[0x4];
u8 local_port[0x8];
u8 pnat[0x2];
- u8 reserved_0[0x2];
+ u8 reserved_at_12[0x2];
u8 lane[0x4];
- u8 reserved_1[0x8];
+ u8 reserved_at_18[0x8];
u8 time_to_link_up[0x10];
- u8 reserved_2[0xc];
+ u8 reserved_at_30[0xc];
u8 grade_lane_speed[0x4];
u8 grade_version[0x8];
u8 grade[0x18];
- u8 reserved_3[0x4];
+ u8 reserved_at_60[0x4];
u8 height_grade_type[0x4];
u8 height_grade[0x18];
u8 height_dz[0x10];
u8 height_dv[0x10];
- u8 reserved_4[0x10];
+ u8 reserved_at_a0[0x10];
u8 height_sigma[0x10];
- u8 reserved_5[0x20];
+ u8 reserved_at_c0[0x20];
- u8 reserved_6[0x4];
+ u8 reserved_at_e0[0x4];
u8 phase_grade_type[0x4];
u8 phase_grade[0x18];
- u8 reserved_7[0x8];
+ u8 reserved_at_100[0x8];
u8 phase_eo_pos[0x8];
- u8 reserved_8[0x8];
+ u8 reserved_at_110[0x8];
u8 phase_eo_neg[0x8];
u8 ffe_set_tested[0x10];
@@ -6221,70 +6221,70 @@
};
struct mlx5_ifc_pvlc_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0x1c];
+ u8 reserved_at_20[0x1c];
u8 vl_hw_cap[0x4];
- u8 reserved_3[0x1c];
+ u8 reserved_at_40[0x1c];
u8 vl_admin[0x4];
- u8 reserved_4[0x1c];
+ u8 reserved_at_60[0x1c];
u8 vl_operational[0x4];
};
struct mlx5_ifc_pude_reg_bits {
u8 swid[0x8];
u8 local_port[0x8];
- u8 reserved_0[0x4];
+ u8 reserved_at_10[0x4];
u8 admin_status[0x4];
- u8 reserved_1[0x4];
+ u8 reserved_at_18[0x4];
u8 oper_status[0x4];
- u8 reserved_2[0x60];
+ u8 reserved_at_20[0x60];
};
struct mlx5_ifc_ptys_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0xd];
+ u8 reserved_at_10[0xd];
u8 proto_mask[0x3];
- u8 reserved_2[0x40];
+ u8 reserved_at_20[0x40];
u8 eth_proto_capability[0x20];
u8 ib_link_width_capability[0x10];
u8 ib_proto_capability[0x10];
- u8 reserved_3[0x20];
+ u8 reserved_at_a0[0x20];
u8 eth_proto_admin[0x20];
u8 ib_link_width_admin[0x10];
u8 ib_proto_admin[0x10];
- u8 reserved_4[0x20];
+ u8 reserved_at_100[0x20];
u8 eth_proto_oper[0x20];
u8 ib_link_width_oper[0x10];
u8 ib_proto_oper[0x10];
- u8 reserved_5[0x20];
+ u8 reserved_at_160[0x20];
u8 eth_proto_lp_advertise[0x20];
- u8 reserved_6[0x60];
+ u8 reserved_at_1a0[0x60];
};
struct mlx5_ifc_ptas_reg_bits {
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
u8 algorithm_options[0x10];
- u8 reserved_1[0x4];
+ u8 reserved_at_30[0x4];
u8 repetitions_mode[0x4];
u8 num_of_repetitions[0x8];
@@ -6310,13 +6310,13 @@
u8 ndeo_error_threshold[0x10];
u8 mixer_offset_step_size[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_110[0x8];
u8 mix90_phase_for_voltage_bath[0x8];
u8 mixer_offset_start[0x10];
u8 mixer_offset_end[0x10];
- u8 reserved_3[0x15];
+ u8 reserved_at_140[0x15];
u8 ber_test_time[0xb];
};
@@ -6324,154 +6324,154 @@
u8 swid[0x8];
u8 local_port[0x8];
u8 sub_port[0x8];
- u8 reserved_0[0x8];
+ u8 reserved_at_18[0x8];
- u8 reserved_1[0x20];
+ u8 reserved_at_20[0x20];
};
struct mlx5_ifc_pqdr_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x5];
+ u8 reserved_at_10[0x5];
u8 prio[0x3];
- u8 reserved_2[0x6];
+ u8 reserved_at_18[0x6];
u8 mode[0x2];
- u8 reserved_3[0x20];
+ u8 reserved_at_20[0x20];
- u8 reserved_4[0x10];
+ u8 reserved_at_40[0x10];
u8 min_threshold[0x10];
- u8 reserved_5[0x10];
+ u8 reserved_at_60[0x10];
u8 max_threshold[0x10];
- u8 reserved_6[0x10];
+ u8 reserved_at_80[0x10];
u8 mark_probability_denominator[0x10];
- u8 reserved_7[0x60];
+ u8 reserved_at_a0[0x60];
};
struct mlx5_ifc_ppsc_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0x60];
+ u8 reserved_at_20[0x60];
- u8 reserved_3[0x1c];
+ u8 reserved_at_80[0x1c];
u8 wrps_admin[0x4];
- u8 reserved_4[0x1c];
+ u8 reserved_at_a0[0x1c];
u8 wrps_status[0x4];
- u8 reserved_5[0x8];
+ u8 reserved_at_c0[0x8];
u8 up_threshold[0x8];
- u8 reserved_6[0x8];
+ u8 reserved_at_d0[0x8];
u8 down_threshold[0x8];
- u8 reserved_7[0x20];
+ u8 reserved_at_e0[0x20];
- u8 reserved_8[0x1c];
+ u8 reserved_at_100[0x1c];
u8 srps_admin[0x4];
- u8 reserved_9[0x1c];
+ u8 reserved_at_120[0x1c];
u8 srps_status[0x4];
- u8 reserved_10[0x40];
+ u8 reserved_at_140[0x40];
};
struct mlx5_ifc_pplr_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0x8];
+ u8 reserved_at_20[0x8];
u8 lb_cap[0x8];
- u8 reserved_3[0x8];
+ u8 reserved_at_30[0x8];
u8 lb_en[0x8];
};
struct mlx5_ifc_pplm_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0x20];
+ u8 reserved_at_20[0x20];
u8 port_profile_mode[0x8];
u8 static_port_profile[0x8];
u8 active_port_profile[0x8];
- u8 reserved_3[0x8];
+ u8 reserved_at_58[0x8];
u8 retransmission_active[0x8];
u8 fec_mode_active[0x18];
- u8 reserved_4[0x20];
+ u8 reserved_at_80[0x20];
};
struct mlx5_ifc_ppcnt_reg_bits {
u8 swid[0x8];
u8 local_port[0x8];
u8 pnat[0x2];
- u8 reserved_0[0x8];
+ u8 reserved_at_12[0x8];
u8 grp[0x6];
u8 clr[0x1];
- u8 reserved_1[0x1c];
+ u8 reserved_at_21[0x1c];
u8 prio_tc[0x3];
union mlx5_ifc_eth_cntrs_grp_data_layout_auto_bits counter_set;
};
struct mlx5_ifc_ppad_reg_bits {
- u8 reserved_0[0x3];
+ u8 reserved_at_0[0x3];
u8 single_mac[0x1];
- u8 reserved_1[0x4];
+ u8 reserved_at_4[0x4];
u8 local_port[0x8];
u8 mac_47_32[0x10];
u8 mac_31_0[0x20];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_pmtu_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
u8 max_mtu[0x10];
- u8 reserved_2[0x10];
+ u8 reserved_at_30[0x10];
u8 admin_mtu[0x10];
- u8 reserved_3[0x10];
+ u8 reserved_at_50[0x10];
u8 oper_mtu[0x10];
- u8 reserved_4[0x10];
+ u8 reserved_at_70[0x10];
};
struct mlx5_ifc_pmpr_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 module[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0x18];
+ u8 reserved_at_20[0x18];
u8 attenuation_5g[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_40[0x18];
u8 attenuation_7g[0x8];
- u8 reserved_4[0x18];
+ u8 reserved_at_60[0x18];
u8 attenuation_12g[0x8];
};
struct mlx5_ifc_pmpe_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 module[0x8];
- u8 reserved_1[0xc];
+ u8 reserved_at_10[0xc];
u8 module_status[0x4];
- u8 reserved_2[0x60];
+ u8 reserved_at_20[0x60];
};
struct mlx5_ifc_pmpc_reg_bits {
@@ -6479,20 +6479,20 @@
};
struct mlx5_ifc_pmlpn_reg_bits {
- u8 reserved_0[0x4];
+ u8 reserved_at_0[0x4];
u8 mlpn_status[0x4];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
u8 e[0x1];
- u8 reserved_2[0x1f];
+ u8 reserved_at_21[0x1f];
};
struct mlx5_ifc_pmlp_reg_bits {
u8 rxtx[0x1];
- u8 reserved_0[0x7];
+ u8 reserved_at_1[0x7];
u8 local_port[0x8];
- u8 reserved_1[0x8];
+ u8 reserved_at_10[0x8];
u8 width[0x8];
u8 lane0_module_mapping[0x20];
@@ -6503,36 +6503,36 @@
u8 lane3_module_mapping[0x20];
- u8 reserved_2[0x160];
+ u8 reserved_at_a0[0x160];
};
struct mlx5_ifc_pmaos_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 module[0x8];
- u8 reserved_1[0x4];
+ u8 reserved_at_10[0x4];
u8 admin_status[0x4];
- u8 reserved_2[0x4];
+ u8 reserved_at_18[0x4];
u8 oper_status[0x4];
u8 ase[0x1];
u8 ee[0x1];
- u8 reserved_3[0x1c];
+ u8 reserved_at_22[0x1c];
u8 e[0x2];
- u8 reserved_4[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_plpc_reg_bits {
- u8 reserved_0[0x4];
+ u8 reserved_at_0[0x4];
u8 profile_id[0xc];
- u8 reserved_1[0x4];
+ u8 reserved_at_10[0x4];
u8 proto_mask[0x4];
- u8 reserved_2[0x8];
+ u8 reserved_at_18[0x8];
- u8 reserved_3[0x10];
+ u8 reserved_at_20[0x10];
u8 lane_speed[0x10];
- u8 reserved_4[0x17];
+ u8 reserved_at_40[0x17];
u8 lpbf[0x1];
u8 fec_mode_policy[0x8];
@@ -6545,44 +6545,44 @@
u8 retransmission_request_admin[0x8];
u8 fec_mode_request_admin[0x18];
- u8 reserved_5[0x80];
+ u8 reserved_at_c0[0x80];
};
struct mlx5_ifc_plib_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x8];
+ u8 reserved_at_10[0x8];
u8 ib_port[0x8];
- u8 reserved_2[0x60];
+ u8 reserved_at_20[0x60];
};
struct mlx5_ifc_plbf_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0xd];
+ u8 reserved_at_10[0xd];
u8 lbf_mode[0x3];
- u8 reserved_2[0x20];
+ u8 reserved_at_20[0x20];
};
struct mlx5_ifc_pipg_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
u8 dic[0x1];
- u8 reserved_2[0x19];
+ u8 reserved_at_21[0x19];
u8 ipg[0x4];
- u8 reserved_3[0x2];
+ u8 reserved_at_3e[0x2];
};
struct mlx5_ifc_pifr_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0xe0];
+ u8 reserved_at_20[0xe0];
u8 port_filter[8][0x20];
@@ -6590,36 +6590,36 @@
};
struct mlx5_ifc_pfcc_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
u8 ppan[0x4];
- u8 reserved_2[0x4];
+ u8 reserved_at_24[0x4];
u8 prio_mask_tx[0x8];
- u8 reserved_3[0x8];
+ u8 reserved_at_30[0x8];
u8 prio_mask_rx[0x8];
u8 pptx[0x1];
u8 aptx[0x1];
- u8 reserved_4[0x6];
+ u8 reserved_at_42[0x6];
u8 pfctx[0x8];
- u8 reserved_5[0x10];
+ u8 reserved_at_50[0x10];
u8 pprx[0x1];
u8 aprx[0x1];
- u8 reserved_6[0x6];
+ u8 reserved_at_62[0x6];
u8 pfcrx[0x8];
- u8 reserved_7[0x10];
+ u8 reserved_at_70[0x10];
- u8 reserved_8[0x80];
+ u8 reserved_at_80[0x80];
};
struct mlx5_ifc_pelc_reg_bits {
u8 op[0x4];
- u8 reserved_0[0x4];
+ u8 reserved_at_4[0x4];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
u8 op_admin[0x8];
u8 op_capability[0x8];
@@ -6634,28 +6634,28 @@
u8 active[0x40];
- u8 reserved_2[0x80];
+ u8 reserved_at_140[0x80];
};
struct mlx5_ifc_peir_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_2[0xc];
+ u8 reserved_at_20[0xc];
u8 error_count[0x4];
- u8 reserved_3[0x10];
+ u8 reserved_at_30[0x10];
- u8 reserved_4[0xc];
+ u8 reserved_at_40[0xc];
u8 lane[0x4];
- u8 reserved_5[0x8];
+ u8 reserved_at_50[0x8];
u8 error_type[0x8];
};
struct mlx5_ifc_pcap_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 local_port[0x8];
- u8 reserved_1[0x10];
+ u8 reserved_at_10[0x10];
u8 port_capability_mask[4][0x20];
};
@@ -6663,46 +6663,46 @@
struct mlx5_ifc_paos_reg_bits {
u8 swid[0x8];
u8 local_port[0x8];
- u8 reserved_0[0x4];
+ u8 reserved_at_10[0x4];
u8 admin_status[0x4];
- u8 reserved_1[0x4];
+ u8 reserved_at_18[0x4];
u8 oper_status[0x4];
u8 ase[0x1];
u8 ee[0x1];
- u8 reserved_2[0x1c];
+ u8 reserved_at_22[0x1c];
u8 e[0x2];
- u8 reserved_3[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_pamp_reg_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 opamp_group[0x8];
- u8 reserved_1[0xc];
+ u8 reserved_at_10[0xc];
u8 opamp_group_type[0x4];
u8 start_index[0x10];
- u8 reserved_2[0x4];
+ u8 reserved_at_30[0x4];
u8 num_of_indices[0xc];
u8 index_data[18][0x10];
};
struct mlx5_ifc_lane_2_module_mapping_bits {
- u8 reserved_0[0x6];
+ u8 reserved_at_0[0x6];
u8 rx_lane[0x2];
- u8 reserved_1[0x6];
+ u8 reserved_at_8[0x6];
u8 tx_lane[0x2];
- u8 reserved_2[0x8];
+ u8 reserved_at_10[0x8];
u8 module[0x8];
};
struct mlx5_ifc_bufferx_reg_bits {
- u8 reserved_0[0x6];
+ u8 reserved_at_0[0x6];
u8 lossy[0x1];
u8 epsb[0x1];
- u8 reserved_1[0xc];
+ u8 reserved_at_8[0xc];
u8 size[0xc];
u8 xoff_threshold[0x10];
@@ -6714,21 +6714,21 @@
};
struct mlx5_ifc_register_power_settings_bits {
- u8 reserved_0[0x18];
+ u8 reserved_at_0[0x18];
u8 power_settings_level[0x8];
- u8 reserved_1[0x60];
+ u8 reserved_at_20[0x60];
};
struct mlx5_ifc_register_host_endianness_bits {
u8 he[0x1];
- u8 reserved_0[0x1f];
+ u8 reserved_at_1[0x1f];
- u8 reserved_1[0x60];
+ u8 reserved_at_20[0x60];
};
struct mlx5_ifc_umr_pointer_desc_argument_bits {
- u8 reserved_0[0x20];
+ u8 reserved_at_0[0x20];
u8 mkey[0x20];
@@ -6741,7 +6741,7 @@
u8 dc_key[0x40];
u8 ext[0x1];
- u8 reserved_0[0x7];
+ u8 reserved_at_41[0x7];
u8 destination_qp_dct[0x18];
u8 static_rate[0x4];
@@ -6750,7 +6750,7 @@
u8 mlid[0x7];
u8 rlid_udp_sport[0x10];
- u8 reserved_1[0x20];
+ u8 reserved_at_80[0x20];
u8 rmac_47_16[0x20];
@@ -6758,9 +6758,9 @@
u8 tclass[0x8];
u8 hop_limit[0x8];
- u8 reserved_2[0x1];
+ u8 reserved_at_e0[0x1];
u8 grh[0x1];
- u8 reserved_3[0x2];
+ u8 reserved_at_e2[0x2];
u8 src_addr_index[0x8];
u8 flow_label[0x14];
@@ -6768,27 +6768,27 @@
};
struct mlx5_ifc_pages_req_event_bits {
- u8 reserved_0[0x10];
+ u8 reserved_at_0[0x10];
u8 function_id[0x10];
u8 num_pages[0x20];
- u8 reserved_1[0xa0];
+ u8 reserved_at_40[0xa0];
};
struct mlx5_ifc_eqe_bits {
- u8 reserved_0[0x8];
+ u8 reserved_at_0[0x8];
u8 event_type[0x8];
- u8 reserved_1[0x8];
+ u8 reserved_at_10[0x8];
u8 event_sub_type[0x8];
- u8 reserved_2[0xe0];
+ u8 reserved_at_20[0xe0];
union mlx5_ifc_event_auto_bits event_data;
- u8 reserved_3[0x10];
+ u8 reserved_at_1e0[0x10];
u8 signature[0x8];
- u8 reserved_4[0x7];
+ u8 reserved_at_1f8[0x7];
u8 owner[0x1];
};
@@ -6798,14 +6798,14 @@
struct mlx5_ifc_cmd_queue_entry_bits {
u8 type[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 input_length[0x20];
u8 input_mailbox_pointer_63_32[0x20];
u8 input_mailbox_pointer_31_9[0x17];
- u8 reserved_1[0x9];
+ u8 reserved_at_77[0x9];
u8 command_input_inline_data[16][0x8];
@@ -6814,20 +6814,20 @@
u8 output_mailbox_pointer_63_32[0x20];
u8 output_mailbox_pointer_31_9[0x17];
- u8 reserved_2[0x9];
+ u8 reserved_at_1b7[0x9];
u8 output_length[0x20];
u8 token[0x8];
u8 signature[0x8];
- u8 reserved_3[0x8];
+ u8 reserved_at_1f0[0x8];
u8 status[0x7];
u8 ownership[0x1];
};
struct mlx5_ifc_cmd_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
@@ -6836,9 +6836,9 @@
struct mlx5_ifc_cmd_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
u8 command[0][0x20];
@@ -6847,16 +6847,16 @@
struct mlx5_ifc_cmd_if_box_bits {
u8 mailbox_data[512][0x8];
- u8 reserved_0[0x180];
+ u8 reserved_at_1000[0x180];
u8 next_pointer_63_32[0x20];
u8 next_pointer_31_10[0x16];
- u8 reserved_1[0xa];
+ u8 reserved_at_11b6[0xa];
u8 block_number[0x20];
- u8 reserved_2[0x8];
+ u8 reserved_at_11e0[0x8];
u8 token[0x8];
u8 ctrl_signature[0x8];
u8 signature[0x8];
@@ -6866,7 +6866,7 @@
u8 ptag_63_32[0x20];
u8 ptag_31_8[0x18];
- u8 reserved_0[0x6];
+ u8 reserved_at_38[0x6];
u8 wr_en[0x1];
u8 rd_en[0x1];
};
@@ -6904,38 +6904,38 @@
u8 cmd_interface_rev[0x10];
u8 fw_rev_subminor[0x10];
- u8 reserved_0[0x40];
+ u8 reserved_at_40[0x40];
u8 cmdq_phy_addr_63_32[0x20];
u8 cmdq_phy_addr_31_12[0x14];
- u8 reserved_1[0x2];
+ u8 reserved_at_b4[0x2];
u8 nic_interface[0x2];
u8 log_cmdq_size[0x4];
u8 log_cmdq_stride[0x4];
u8 command_doorbell_vector[0x20];
- u8 reserved_2[0xf00];
+ u8 reserved_at_e0[0xf00];
u8 initializing[0x1];
- u8 reserved_3[0x4];
+ u8 reserved_at_fe1[0x4];
u8 nic_interface_supported[0x3];
- u8 reserved_4[0x18];
+ u8 reserved_at_fe8[0x18];
struct mlx5_ifc_health_buffer_bits health_buffer;
u8 no_dram_nic_offset[0x20];
- u8 reserved_5[0x6e40];
+ u8 reserved_at_1220[0x6e40];
- u8 reserved_6[0x1f];
+ u8 reserved_at_8060[0x1f];
u8 clear_int[0x1];
u8 health_syndrome[0x8];
u8 health_counter[0x18];
- u8 reserved_7[0x17fc0];
+ u8 reserved_at_80a0[0x17fc0];
};
union mlx5_ifc_ports_control_registers_document_bits {
@@ -6980,44 +6980,44 @@
struct mlx5_ifc_pvlc_reg_bits pvlc_reg;
struct mlx5_ifc_slrg_reg_bits slrg_reg;
struct mlx5_ifc_sltp_reg_bits sltp_reg;
- u8 reserved_0[0x60e0];
+ u8 reserved_at_0[0x60e0];
};
union mlx5_ifc_debug_enhancements_document_bits {
struct mlx5_ifc_health_buffer_bits health_buffer;
- u8 reserved_0[0x200];
+ u8 reserved_at_0[0x200];
};
union mlx5_ifc_uplink_pci_interface_document_bits {
struct mlx5_ifc_initial_seg_bits initial_seg;
- u8 reserved_0[0x20060];
+ u8 reserved_at_0[0x20060];
};
struct mlx5_ifc_set_flow_table_root_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_set_flow_table_root_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x40];
+ u8 reserved_at_40[0x40];
u8 table_type[0x8];
- u8 reserved_3[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_4[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
- u8 reserved_5[0x140];
+ u8 reserved_at_c0[0x140];
};
enum {
@@ -7026,39 +7026,39 @@
struct mlx5_ifc_modify_flow_table_out_bits {
u8 status[0x8];
- u8 reserved_0[0x18];
+ u8 reserved_at_8[0x18];
u8 syndrome[0x20];
- u8 reserved_1[0x40];
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_modify_flow_table_in_bits {
u8 opcode[0x10];
- u8 reserved_0[0x10];
+ u8 reserved_at_10[0x10];
- u8 reserved_1[0x10];
+ u8 reserved_at_20[0x10];
u8 op_mod[0x10];
- u8 reserved_2[0x20];
+ u8 reserved_at_40[0x20];
- u8 reserved_3[0x10];
+ u8 reserved_at_60[0x10];
u8 modify_field_select[0x10];
u8 table_type[0x8];
- u8 reserved_4[0x18];
+ u8 reserved_at_88[0x18];
- u8 reserved_5[0x8];
+ u8 reserved_at_a0[0x8];
u8 table_id[0x18];
- u8 reserved_6[0x4];
+ u8 reserved_at_c0[0x4];
u8 table_miss_mode[0x4];
- u8 reserved_7[0x18];
+ u8 reserved_at_c8[0x18];
- u8 reserved_8[0x8];
+ u8 reserved_at_e0[0x8];
u8 table_miss_id[0x18];
- u8 reserved_9[0x100];
+ u8 reserved_at_100[0x100];
};
#endif /* MLX5_IFC_H */
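The block above completes a tree-wide rename in mlx5_ifc.h: reserved fields are now named by the bit offset at which they start (reserved_at_<hex offset>) rather than by a running counter (reserved_<n>). These _bits structs are never instantiated as memory; each u8 array element stands for one bit, and the MLX5_SET()/MLX5_GET() accessor macros turn offsetof()/sizeof() on them into bit offsets and widths. Naming the gaps by their offset means a firmware field added later only touches the gap it lands in. A minimal made-up layout in the same style (illustrative only, not a real command):

	typedef unsigned char u8;	/* as in the kernel headers */

	struct example_cmd_in_bits {
		u8 opcode[0x10];		/* bits 0x00..0x0f */
		u8 reserved_at_10[0x10];	/* bits 0x10..0x1f */
		u8 reserved_at_20[0x10];	/* bits 0x20..0x2f */
		u8 op_mod[0x10];		/* bits 0x30..0x3f */
		u8 reserved_at_40[0x40];	/* bits 0x40..0x7f */
	};

If a new 0x10-bit field is later defined at bit 0x40, only that gap changes (it shrinks to reserved_at_50[0x30]); under the old scheme every later reserved_<n> in the struct would have needed renumbering.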
diff --git a/include/linux/module.h b/include/linux/module.h
index 4560d8f..2bb0c30 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -324,6 +324,12 @@
#define __module_layout_align
#endif
+struct mod_kallsyms {
+ Elf_Sym *symtab;
+ unsigned int num_symtab;
+ char *strtab;
+};
+
struct module {
enum module_state state;
@@ -405,15 +411,10 @@
#endif
#ifdef CONFIG_KALLSYMS
- /*
- * We keep the symbol and string tables for kallsyms.
- * The core_* fields below are temporary, loader-only (they
- * could really be discarded after module init).
- */
- Elf_Sym *symtab, *core_symtab;
- unsigned int num_symtab, core_num_syms;
- char *strtab, *core_strtab;
-
+ /* Protected by RCU and/or module_mutex: use rcu_dereference() */
+ struct mod_kallsyms *kallsyms;
+ struct mod_kallsyms core_kallsyms;
+
/* Section attributes */
struct module_sect_attrs *sect_attrs;
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 289c231..5440b7b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3718,7 +3718,7 @@
void *netdev_lower_get_next(struct net_device *dev,
struct list_head **iter);
#define netdev_for_each_lower_dev(dev, ldev, iter) \
- for (iter = &(dev)->adj_list.lower, \
+ for (iter = (dev)->adj_list.lower.next, \
ldev = netdev_lower_get_next(dev, &(iter)); \
ldev; \
ldev = netdev_lower_get_next(dev, &(iter)))
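This one-line change presumably pairs with a matching update to netdev_lower_get_next() in net/core/dev.c (not shown in this hunk): the helper now resolves *iter as the current entry and advances the cursor before returning it, so the caller may safely unlink the device it is visiting; accordingly the macro seeds iter with the first element (adj_list.lower.next) instead of the address of the list head. Usage is unchanged; a sketch, with upper_dev standing in for some stacked device and assuming the caller holds RTNL:

	struct list_head *iter;
	struct net_device *ldev;

	netdev_for_each_lower_dev(upper_dev, ldev, iter) {
		/* visits each device directly stacked below upper_dev;
		 * ldev may now be removed from within the loop body */
		netdev_info(ldev, "lower dev of %s\n", upper_dev->name);
	}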
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 48e0320..67300f8 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -550,9 +550,7 @@
static inline loff_t nfs_size_to_loff_t(__u64 size)
{
- if (size > (__u64) OFFSET_MAX - 1)
- return OFFSET_MAX - 1;
- return (loff_t) size;
+ return min_t(u64, size, OFFSET_MAX);
}
static inline ino_t
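The rewritten nfs_size_to_loff_t() clamps with an explicitly unsigned comparison: min_t(u64, ...) casts both operands to u64 first, so a size with the top bit set cannot sneak past as a negative loff_t. Note the saturation value also moves from OFFSET_MAX - 1 to OFFSET_MAX. An open-coded equivalent of the new one-liner (sketch):

	static inline loff_t nfs_size_to_loff_t(__u64 size)
	{
		/* e.g. size = 1ULL << 63 comes back as OFFSET_MAX */
		if (size > (__u64)OFFSET_MAX)
			return OFFSET_MAX;
		return (loff_t)size;
	}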
diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
index 791098a..d320906 100644
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -275,6 +275,7 @@
size_t layoutupdate_len;
struct page *layoutupdate_page;
struct page **layoutupdate_pages;
+ __be32 *start_p;
};
struct nfs4_layoutcommit_res {
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 27df4a6..2771625 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -988,23 +988,6 @@
return pdev->is_managed;
}
-static inline void pci_set_managed_irq(struct pci_dev *pdev, unsigned int irq)
-{
- pdev->irq = irq;
- pdev->irq_managed = 1;
-}
-
-static inline void pci_reset_managed_irq(struct pci_dev *pdev)
-{
- pdev->irq = 0;
- pdev->irq_managed = 0;
-}
-
-static inline bool pci_has_managed_irq(struct pci_dev *pdev)
-{
- return pdev->irq_managed && pdev->irq > 0;
-}
-
void pci_disable_device(struct pci_dev *dev);
extern unsigned int pcibios_max_latency;
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index b35a61a..f5c5a3f 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -397,6 +397,7 @@
* enum perf_event_active_state - the states of an event
*/
enum perf_event_active_state {
+ PERF_EVENT_STATE_DEAD = -4,
PERF_EVENT_STATE_EXIT = -3,
PERF_EVENT_STATE_ERROR = -2,
PERF_EVENT_STATE_OFF = -1,
@@ -905,7 +906,7 @@
}
}
-extern struct static_key_deferred perf_sched_events;
+extern struct static_key_false perf_sched_events;
static __always_inline bool
perf_sw_migrate_enabled(void)
@@ -924,7 +925,7 @@
static inline void perf_event_task_sched_in(struct task_struct *prev,
struct task_struct *task)
{
- if (static_key_false(&perf_sched_events.key))
+ if (static_branch_unlikely(&perf_sched_events))
__perf_event_task_sched_in(prev, task);
if (perf_sw_migrate_enabled() && task->sched_migrated) {
@@ -941,7 +942,7 @@
{
perf_sw_event_sched(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 0);
- if (static_key_false(&perf_sched_events.key))
+ if (static_branch_unlikely(&perf_sched_events))
__perf_event_task_sched_out(prev, next);
}
diff --git a/include/linux/pfn.h b/include/linux/pfn.h
index 2d8e497..1132953 100644
--- a/include/linux/pfn.h
+++ b/include/linux/pfn.h
@@ -10,7 +10,7 @@
* backing is indicated by flags in the high bits of the value.
*/
typedef struct {
- unsigned long val;
+ u64 val;
} pfn_t;
#endif
diff --git a/include/linux/pfn_t.h b/include/linux/pfn_t.h
index 37448ab..9499481 100644
--- a/include/linux/pfn_t.h
+++ b/include/linux/pfn_t.h
@@ -9,14 +9,13 @@
* PFN_DEV - pfn is not covered by system memmap by default
* PFN_MAP - pfn has a dynamic page mapping established by a device driver
*/
-#define PFN_FLAGS_MASK (((unsigned long) ~PAGE_MASK) \
- << (BITS_PER_LONG - PAGE_SHIFT))
-#define PFN_SG_CHAIN (1UL << (BITS_PER_LONG - 1))
-#define PFN_SG_LAST (1UL << (BITS_PER_LONG - 2))
-#define PFN_DEV (1UL << (BITS_PER_LONG - 3))
-#define PFN_MAP (1UL << (BITS_PER_LONG - 4))
+#define PFN_FLAGS_MASK (((u64) ~PAGE_MASK) << (BITS_PER_LONG_LONG - PAGE_SHIFT))
+#define PFN_SG_CHAIN (1ULL << (BITS_PER_LONG_LONG - 1))
+#define PFN_SG_LAST (1ULL << (BITS_PER_LONG_LONG - 2))
+#define PFN_DEV (1ULL << (BITS_PER_LONG_LONG - 3))
+#define PFN_MAP (1ULL << (BITS_PER_LONG_LONG - 4))
-static inline pfn_t __pfn_to_pfn_t(unsigned long pfn, unsigned long flags)
+static inline pfn_t __pfn_to_pfn_t(unsigned long pfn, u64 flags)
{
pfn_t pfn_t = { .val = pfn | (flags & PFN_FLAGS_MASK), };
@@ -29,7 +28,7 @@
return __pfn_to_pfn_t(pfn, 0);
}
-extern pfn_t phys_to_pfn_t(phys_addr_t addr, unsigned long flags);
+extern pfn_t phys_to_pfn_t(phys_addr_t addr, u64 flags);
static inline bool pfn_t_has_page(pfn_t pfn)
{
@@ -87,7 +86,7 @@
#ifdef __HAVE_ARCH_PTE_DEVMAP
static inline bool pfn_t_devmap(pfn_t pfn)
{
- const unsigned long flags = PFN_DEV|PFN_MAP;
+ const u64 flags = PFN_DEV|PFN_MAP;
return (pfn.val & flags) == flags;
}
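Widening pfn_t.val to u64 pins the flag bits at the top of a 64-bit value, so a 32-bit kernel no longer sacrifices high PFN bits to them. A standalone arithmetic check of the new masks, assuming 4 KiB pages (PAGE_SHIFT = 12):

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT 12		/* assumed page size: 4 KiB */
	#define PAGE_MASK (~((1UL << PAGE_SHIFT) - 1))
	#define BITS_PER_LONG_LONG 64

	int main(void)
	{
		/* flags occupy the top PAGE_SHIFT bits of the 64-bit value */
		uint64_t mask = ((uint64_t)~PAGE_MASK) << (BITS_PER_LONG_LONG - PAGE_SHIFT);

		printf("PFN_FLAGS_MASK = %#llx\n", (unsigned long long)mask);
		printf("PFN_SG_CHAIN   = %#llx\n", 1ULL << (BITS_PER_LONG_LONG - 1));
		printf("PFN_DEV        = %#llx\n", 1ULL << (BITS_PER_LONG_LONG - 3));
		return 0;
	}

This prints 0xfff0000000000000, 0x8000000000000000 and 0x2000000000000000: the top 12 bits carry flags, leaving 52 PFN bits, enough to address a full 64-bit physical space at 4 KiB granularity.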
diff --git a/include/linux/power/bq27xxx_battery.h b/include/linux/power/bq27xxx_battery.h
index 998d8f1..b50c049 100644
--- a/include/linux/power/bq27xxx_battery.h
+++ b/include/linux/power/bq27xxx_battery.h
@@ -49,6 +49,7 @@
struct bq27xxx_device_info {
struct device *dev;
+ int id;
enum bq27xxx_chip chip;
const char *name;
struct bq27xxx_access_methods bus;
diff --git a/include/linux/random.h b/include/linux/random.h
index a75840c..9c29122 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -34,6 +34,7 @@
#endif
unsigned int get_random_int(void);
+unsigned long get_random_long(void);
unsigned long randomize_range(unsigned long start, unsigned long end, unsigned long len);
u32 prandom_u32(void);
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 11f935c..4ce9ff7 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -299,6 +299,7 @@
#else
#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 1)
#endif
+extern int sysctl_max_skb_frags;
typedef struct skb_frag_struct skb_frag_t;
diff --git a/include/linux/soc/ti/knav_dma.h b/include/linux/soc/ti/knav_dma.h
index 343c13a..35cb926 100644
--- a/include/linux/soc/ti/knav_dma.h
+++ b/include/linux/soc/ti/knav_dma.h
@@ -44,6 +44,7 @@
#define KNAV_DMA_NUM_EPIB_WORDS 4
#define KNAV_DMA_NUM_PS_WORDS 16
+#define KNAV_DMA_NUM_SW_DATA_WORDS 4
#define KNAV_DMA_FDQ_PER_CHAN 4
/* Tx channel scheduling priority */
@@ -142,6 +143,7 @@
* @orig_buff: buff pointer since 'buff' can be overwritten
* @epib: Extended packet info block
* @psdata: Protocol specific
+ * @sw_data: Software private data not touched by h/w
*/
struct knav_dma_desc {
__le32 desc_info;
@@ -154,7 +156,7 @@
__le32 orig_buff;
__le32 epib[KNAV_DMA_NUM_EPIB_WORDS];
__le32 psdata[KNAV_DMA_NUM_PS_WORDS];
- __le32 pad[4];
+ u32 sw_data[KNAV_DMA_NUM_SW_DATA_WORDS];
} ____cacheline_aligned;
#if IS_ENABLED(CONFIG_KEYSTONE_NAVIGATOR_DMA)
diff --git a/include/linux/stm.h b/include/linux/stm.h
index 9d0083d..1a79ed8 100644
--- a/include/linux/stm.h
+++ b/include/linux/stm.h
@@ -67,6 +67,16 @@
* description. That is, the lowest master that can be allocated to software
* writers is @sw_start and data from this writer will appear is @sw_start
* master in the STP stream.
+ *
+ * The @packet callback should adhere to the following rules:
+ * 1) it must return the number of bytes it consumed from the payload;
+ * 2) therefore, if it sent a packet that does not have payload (like FLAG),
+ * it must return zero;
+ * 3) if it does not support the requested packet type/flag combination,
+ * it must return -ENOTSUPP.
+ *
+ * The @unlink callback is called when there are no more active writers so
+ * that the master/channel can be quiesced.
*/
struct stm_data {
const char *name;
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index acd522a..acfdbf3 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -14,8 +14,10 @@
* See the file COPYING for more details.
*/
+#include <linux/smp.h>
#include <linux/errno.h>
#include <linux/types.h>
+#include <linux/cpumask.h>
#include <linux/rcupdate.h>
#include <linux/tracepoint-defs.h>
@@ -132,6 +134,9 @@
void *it_func; \
void *__data; \
\
+ if (!cpu_online(raw_smp_processor_id())) \
+ return; \
+ \
if (!(cond)) \
return; \
prercu; \
diff --git a/include/linux/ucs2_string.h b/include/linux/ucs2_string.h
index cbb20af..bb679b4 100644
--- a/include/linux/ucs2_string.h
+++ b/include/linux/ucs2_string.h
@@ -11,4 +11,8 @@
unsigned long ucs2_strsize(const ucs2_char_t *data, unsigned long maxlength);
int ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len);
+unsigned long ucs2_utf8size(const ucs2_char_t *src);
+unsigned long ucs2_as_utf8(u8 *dest, const ucs2_char_t *src,
+ unsigned long maxlength);
+
#endif /* _LINUX_UCS2_STRING_H_ */
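The two new helpers convert UCS-2 strings (the encoding EFI uses for variable names) to UTF-8: ucs2_utf8size() returns the byte length the UTF-8 form will need, and ucs2_as_utf8() writes up to maxlength bytes of it. A hedged usage sketch — efi_name is a hypothetical ucs2_char_t pointer, and the assumption that no terminator is written should be checked against lib/ucs2_string.c:

	unsigned long len = ucs2_utf8size(efi_name);	/* bytes, no terminator */
	u8 *buf = kzalloc(len + 1, GFP_KERNEL);

	if (buf) {
		ucs2_as_utf8(buf, efi_name, len);
		/* the kzalloc'ed extra byte leaves buf NUL-terminated */
	}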
diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
index 65ac54c..1bd31a3 100644
--- a/include/linux/vmw_vmci_defs.h
+++ b/include/linux/vmw_vmci_defs.h
@@ -734,6 +734,41 @@
}
/*
+ * Helper to read a value from a head or tail pointer. For X86_32, the
+ * pointer is treated as a 32bit value, since the pointer value
+ * never exceeds a 32bit value in this case. Also, doing an
+ * atomic64_read on X86_32 uniprocessor systems may be implemented
+ * as a non locked cmpxchg8b, that may end up overwriting updates done
+ * by the VMCI device to the memory location. On 32bit SMP, the lock
+ * prefix will be used, so correctness isn't an issue, but using a
+ * 64bit operation still adds unnecessary overhead.
+ */
+static inline u64 vmci_q_read_pointer(atomic64_t *var)
+{
+#if defined(CONFIG_X86_32)
+ return atomic_read((atomic_t *)var);
+#else
+ return atomic64_read(var);
+#endif
+}
+
+/*
+ * Helper to set the value of a head or tail pointer. For X86_32, the
+ * pointer is treated as a 32bit value, since the pointer value
+ * never exceeds a 32bit value in this case. On 32bit SMP, using a
+ * locked cmpxchg8b adds unnecessary overhead.
+ */
+static inline void vmci_q_set_pointer(atomic64_t *var,
+ u64 new_val)
+{
+#if defined(CONFIG_X86_32)
+ return atomic_set((atomic_t *)var, (u32)new_val);
+#else
+ return atomic64_set(var, new_val);
+#endif
+}
+
+/*
* Helper to add a given offset to a head or tail pointer. Wraps the
* value of the pointer around the max size of the queue.
*/
@@ -741,14 +776,14 @@
size_t add,
u64 size)
{
- u64 new_val = atomic64_read(var);
+ u64 new_val = vmci_q_read_pointer(var);
if (new_val >= size - add)
new_val -= size;
new_val += add;
- atomic64_set(var, new_val);
+ vmci_q_set_pointer(var, new_val);
}
/*
@@ -758,7 +793,7 @@
vmci_q_header_producer_tail(const struct vmci_queue_header *q_header)
{
struct vmci_queue_header *qh = (struct vmci_queue_header *)q_header;
- return atomic64_read(&qh->producer_tail);
+ return vmci_q_read_pointer(&qh->producer_tail);
}
/*
@@ -768,7 +803,7 @@
vmci_q_header_consumer_head(const struct vmci_queue_header *q_header)
{
struct vmci_queue_header *qh = (struct vmci_queue_header *)q_header;
- return atomic64_read(&qh->consumer_head);
+ return vmci_q_read_pointer(&qh->consumer_head);
}
/*
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 0e32bc7..ca73c503 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -311,6 +311,7 @@
__WQ_DRAINING = 1 << 16, /* internal: workqueue is draining */
__WQ_ORDERED = 1 << 17, /* internal: workqueue is ordered */
+ __WQ_LEGACY = 1 << 18, /* internal: create*_workqueue() */
WQ_MAX_ACTIVE = 512, /* I like 512, better ideas? */
WQ_MAX_UNBOUND_PER_CPU = 4, /* 4 * #cpus for unbound wq */
@@ -411,12 +412,12 @@
alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED | (flags), 1, ##args)
#define create_workqueue(name) \
- alloc_workqueue("%s", WQ_MEM_RECLAIM, 1, (name))
+ alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))
#define create_freezable_workqueue(name) \
- alloc_workqueue("%s", WQ_FREEZABLE | WQ_UNBOUND | WQ_MEM_RECLAIM, \
- 1, (name))
+ alloc_workqueue("%s", __WQ_LEGACY | WQ_FREEZABLE | WQ_UNBOUND | \
+ WQ_MEM_RECLAIM, 1, (name))
#define create_singlethread_workqueue(name) \
- alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name)
+ alloc_ordered_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, name)
extern void destroy_workqueue(struct workqueue_struct *wq);
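The legacy create_*workqueue() wrappers now tag their queues with the internal __WQ_LEGACY flag; the expansion is otherwise identical, so existing callers behave as before. The flag appears intended to let the workqueue core recognize old-API queues, for instance to skip the recently added flush-dependency sanity checks that the blanket WQ_MEM_RECLAIM of these wrappers would otherwise trip. Equivalence sketch:

	struct workqueue_struct *wq;

	/* these two now create the same kind of workqueue */
	wq = create_workqueue("mydrv");
	wq = alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, "mydrv");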
diff --git a/include/net/af_unix.h b/include/net/af_unix.h
index 2a91a05..9b4c418 100644
--- a/include/net/af_unix.h
+++ b/include/net/af_unix.h
@@ -6,8 +6,8 @@
#include <linux/mutex.h>
#include <net/sock.h>
-void unix_inflight(struct file *fp);
-void unix_notinflight(struct file *fp);
+void unix_inflight(struct user_struct *user, struct file *fp);
+void unix_notinflight(struct user_struct *user, struct file *fp);
void unix_gc(void);
void wait_for_unix_gc(void);
struct sock *unix_get_socket(struct file *filp);
diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 481fe1c..49dcad4 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -270,8 +270,9 @@
struct sock *newsk,
const struct request_sock *req);
-void inet_csk_reqsk_queue_add(struct sock *sk, struct request_sock *req,
- struct sock *child);
+struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
+ struct request_sock *req,
+ struct sock *child);
void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req,
unsigned long timeout);
struct sock *inet_csk_complete_hashdance(struct sock *sk, struct sock *child,
diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
index 7029527..4079fc1 100644
--- a/include/net/ip_fib.h
+++ b/include/net/ip_fib.h
@@ -61,6 +61,7 @@
struct rtable __rcu *fnhe_rth_input;
struct rtable __rcu *fnhe_rth_output;
unsigned long fnhe_stamp;
+ struct rcu_head rcu;
};
struct fnhe_hash_bucket {
diff --git a/include/net/ip_tunnels.h b/include/net/ip_tunnels.h
index 6db96ea..dda9abf 100644
--- a/include/net/ip_tunnels.h
+++ b/include/net/ip_tunnels.h
@@ -230,6 +230,7 @@
int ip_tunnel_ioctl(struct net_device *dev, struct ip_tunnel_parm *p, int cmd);
int ip_tunnel_encap(struct sk_buff *skb, struct ip_tunnel *t,
u8 *protocol, struct flowi4 *fl4);
+int __ip_tunnel_change_mtu(struct net_device *dev, int new_mtu, bool strict);
int ip_tunnel_change_mtu(struct net_device *dev, int new_mtu);
struct rtnl_link_stats64 *ip_tunnel_get_stats64(struct net_device *dev,
diff --git a/include/net/scm.h b/include/net/scm.h
index 262532d..59fa93c 100644
--- a/include/net/scm.h
+++ b/include/net/scm.h
@@ -21,6 +21,7 @@
struct scm_fp_list {
short count;
short max;
+ struct user_struct *user;
struct file *fp[SCM_MAX_FD];
};
diff --git a/include/net/tcp.h b/include/net/tcp.h
index f6f8f03..ae6468f 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -447,7 +447,7 @@
void tcp_v4_send_check(struct sock *sk, struct sk_buff *skb);
void tcp_v4_mtu_reduced(struct sock *sk);
-void tcp_req_err(struct sock *sk, u32 seq);
+void tcp_req_err(struct sock *sk, u32 seq, bool abort);
int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb);
struct sock *tcp_create_openreq_child(const struct sock *sk,
struct request_sock *req,
diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
index e2b712c..c21c38c 100644
--- a/include/sound/hdaudio.h
+++ b/include/sound/hdaudio.h
@@ -343,7 +343,7 @@
void snd_hdac_bus_exit_link_reset(struct hdac_bus *bus);
void snd_hdac_bus_update_rirb(struct hdac_bus *bus);
-void snd_hdac_bus_handle_stream_irq(struct hdac_bus *bus, unsigned int status,
+int snd_hdac_bus_handle_stream_irq(struct hdac_bus *bus, unsigned int status,
void (*ack)(struct hdac_bus *,
struct hdac_stream *));
diff --git a/include/target/target_core_backend.h b/include/target/target_core_backend.h
index 56cf8e4..28ee5c2 100644
--- a/include/target/target_core_backend.h
+++ b/include/target/target_core_backend.h
@@ -94,5 +94,8 @@
sense_reason_t (*exec_cmd)(struct se_cmd *cmd));
bool target_sense_desc_format(struct se_device *dev);
+sector_t target_to_linux_sector(struct se_device *dev, sector_t lb);
+bool target_configure_unmap_from_queue(struct se_dev_attrib *attrib,
+ struct request_queue *q, int block_size);
#endif /* TARGET_CORE_BACKEND_H */
diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
index 5d82816..e8c8c08 100644
--- a/include/target/target_core_base.h
+++ b/include/target/target_core_base.h
@@ -140,6 +140,8 @@
SCF_COMPARE_AND_WRITE = 0x00080000,
SCF_COMPARE_AND_WRITE_POST = 0x00100000,
SCF_PASSTHROUGH_PROT_SG_TO_MEM_NOALLOC = 0x00200000,
+ SCF_ACK_KREF = 0x00400000,
+ SCF_USE_CPUID = 0x00800000,
};
/* struct se_dev_entry->lun_flags and struct se_lun->lun_access */
@@ -187,6 +189,7 @@
TARGET_SCF_BIDI_OP = 0x01,
TARGET_SCF_ACK_KREF = 0x02,
TARGET_SCF_UNKNOWN_SIZE = 0x04,
+ TARGET_SCF_USE_CPUID = 0x08,
};
/* fabric independent task management function values */
@@ -490,8 +493,9 @@
#define CMD_T_SENT (1 << 4)
#define CMD_T_STOP (1 << 5)
#define CMD_T_DEV_ACTIVE (1 << 7)
-#define CMD_T_REQUEST_STOP (1 << 8)
#define CMD_T_BUSY (1 << 9)
+#define CMD_T_TAS (1 << 10)
+#define CMD_T_FABRIC_STOP (1 << 11)
spinlock_t t_state_lock;
struct kref cmd_kref;
struct completion t_transport_stop_comp;
@@ -511,9 +515,6 @@
struct list_head state_list;
- /* old task stop completion, consider merging with some of the above */
- struct completion task_stop_comp;
-
/* backend private data */
void *priv;
diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
index 5b4a4be..cc68b921 100644
--- a/include/uapi/linux/ndctl.h
+++ b/include/uapi/linux/ndctl.h
@@ -66,14 +66,18 @@
__u64 length;
__u32 status;
__u32 max_ars_out;
+ __u32 clear_err_unit;
+ __u32 reserved;
} __packed;
struct nd_cmd_ars_start {
__u64 address;
__u64 length;
__u16 type;
- __u8 reserved[6];
+ __u8 flags;
+ __u8 reserved[5];
__u32 status;
+ __u32 scrub_time;
} __packed;
struct nd_cmd_ars_status {
@@ -81,11 +85,14 @@
__u32 out_length;
__u64 address;
__u64 length;
+ __u64 restart_address;
+ __u64 restart_length;
__u16 type;
+ __u16 flags;
__u32 num_records;
struct nd_ars_record {
__u32 handle;
- __u32 flags;
+ __u32 reserved;
__u64 err_address;
__u64 length;
} __packed records[0];
diff --git a/ipc/shm.c b/ipc/shm.c
index ed3027d..331fc1b 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -156,11 +156,12 @@
struct kern_ipc_perm *ipcp = ipc_lock(&shm_ids(ns), id);
/*
- * We raced in the idr lookup or with shm_destroy(). Either way, the
- * ID is busted.
+ * Callers of shm_lock() must validate the status of the returned ipc
+ * object pointer (as returned by ipc_lock()), and error out as
+ * appropriate.
*/
- WARN_ON(IS_ERR(ipcp));
-
+ if (IS_ERR(ipcp))
+ return (void *)ipcp;
return container_of(ipcp, struct shmid_kernel, shm_perm);
}
@@ -186,18 +187,33 @@
}
-/* This is called by fork, once for every shm attach. */
-static void shm_open(struct vm_area_struct *vma)
+static int __shm_open(struct vm_area_struct *vma)
{
struct file *file = vma->vm_file;
struct shm_file_data *sfd = shm_file_data(file);
struct shmid_kernel *shp;
shp = shm_lock(sfd->ns, sfd->id);
+
+ if (IS_ERR(shp))
+ return PTR_ERR(shp);
+
shp->shm_atim = get_seconds();
shp->shm_lprid = task_tgid_vnr(current);
shp->shm_nattch++;
shm_unlock(shp);
+ return 0;
+}
+
+/* This is called by fork, once for every shm attach. */
+static void shm_open(struct vm_area_struct *vma)
+{
+ int err = __shm_open(vma);
+ /*
+ * We raced in the idr lookup or with shm_destroy().
+ * Either way, the ID is busted.
+ */
+ WARN_ON_ONCE(err);
}
/*
@@ -260,6 +276,14 @@
down_write(&shm_ids(ns).rwsem);
/* remove from the list of attaches of the shm segment */
shp = shm_lock(ns, sfd->id);
+
+ /*
+ * We raced in the idr lookup or with shm_destroy().
+ * Either way, the ID is busted.
+ */
+ if (WARN_ON_ONCE(IS_ERR(shp)))
+ goto done; /* no-op */
+
shp->shm_lprid = task_tgid_vnr(current);
shp->shm_dtim = get_seconds();
shp->shm_nattch--;
@@ -267,6 +291,7 @@
shm_destroy(ns, shp);
else
shm_unlock(shp);
+done:
up_write(&shm_ids(ns).rwsem);
}
@@ -388,17 +413,25 @@
struct shm_file_data *sfd = shm_file_data(file);
int ret;
- ret = sfd->file->f_op->mmap(sfd->file, vma);
- if (ret != 0)
+ /*
+ * In case of remap_file_pages() emulation, the file can represent
+ * removed IPC ID: propagate shm_lock() error to caller.
+ */
+ ret = __shm_open(vma);
+ if (ret)
return ret;
+
+ ret = sfd->file->f_op->mmap(sfd->file, vma);
+ if (ret) {
+ shm_close(vma);
+ return ret;
+ }
sfd->vm_ops = vma->vm_ops;
#ifdef CONFIG_MMU
WARN_ON(!sfd->vm_ops->fault);
#endif
vma->vm_ops = &shm_vm_ops;
- shm_open(vma);
-
- return ret;
+ return 0;
}
static int shm_release(struct inode *ino, struct file *file)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d1d3e8f..2e7f7ab 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2082,7 +2082,7 @@
/* adjust offset of jmps if necessary */
if (i < pos && i + insn->off + 1 > pos)
insn->off += delta;
- else if (i > pos && i + insn->off + 1 < pos)
+ else if (i > pos + delta && i + insn->off + 1 <= pos + delta)
insn->off -= delta;
}
}
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index c03a640..d27904c 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -58,6 +58,7 @@
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/atomic.h>
+#include <linux/cpuset.h>
#include <net/sock.h>
/*
@@ -2739,6 +2740,7 @@
out_unlock_threadgroup:
percpu_up_write(&cgroup_threadgroup_rwsem);
cgroup_kn_unlock(of->kn);
+ cpuset_post_attach_flush();
return ret ?: nbytes;
}
@@ -4655,14 +4657,15 @@
if (ss) {
/* css free path */
+ struct cgroup_subsys_state *parent = css->parent;
int id = css->id;
- if (css->parent)
- css_put(css->parent);
-
ss->css_free(css);
cgroup_idr_remove(&ss->css_idr, id);
cgroup_put(cgrp);
+
+ if (parent)
+ css_put(parent);
} else {
/* cgroup free path */
atomic_dec(&cgrp->root->nr_cgrps);
@@ -4758,6 +4761,7 @@
INIT_LIST_HEAD(&css->sibling);
INIT_LIST_HEAD(&css->children);
css->serial_nr = css_serial_nr_next++;
+ atomic_set(&css->online_cnt, 0);
if (cgroup_parent(cgrp)) {
css->parent = cgroup_css(cgroup_parent(cgrp), ss);
@@ -4780,6 +4784,10 @@
if (!ret) {
css->flags |= CSS_ONLINE;
rcu_assign_pointer(css->cgroup->subsys[ss->id], css);
+
+ atomic_inc(&css->online_cnt);
+ if (css->parent)
+ atomic_inc(&css->parent->online_cnt);
}
return ret;
}
@@ -5017,10 +5025,15 @@
container_of(work, struct cgroup_subsys_state, destroy_work);
mutex_lock(&cgroup_mutex);
- offline_css(css);
- mutex_unlock(&cgroup_mutex);
- css_put(css);
+ do {
+ offline_css(css);
+ css_put(css);
+ /* @css can't go away while we're holding cgroup_mutex */
+ css = css->parent;
+ } while (css && atomic_dec_and_test(&css->online_cnt));
+
+ mutex_unlock(&cgroup_mutex);
}
/* css kill confirmation processing requires process context, bounce */
@@ -5029,8 +5042,10 @@
struct cgroup_subsys_state *css =
container_of(ref, struct cgroup_subsys_state, refcnt);
- INIT_WORK(&css->destroy_work, css_killed_work_fn);
- queue_work(cgroup_destroy_wq, &css->destroy_work);
+ if (atomic_dec_and_test(&css->online_cnt)) {
+ INIT_WORK(&css->destroy_work, css_killed_work_fn);
+ queue_work(cgroup_destroy_wq, &css->destroy_work);
+ }
}
/**
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 3e945fc..41989ab 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -287,6 +287,8 @@
static DEFINE_MUTEX(cpuset_mutex);
static DEFINE_SPINLOCK(callback_lock);
+static struct workqueue_struct *cpuset_migrate_mm_wq;
+
/*
* CPU / memory hotplug is handled asynchronously.
*/
@@ -972,31 +974,51 @@
}
/*
- * cpuset_migrate_mm
- *
- * Migrate memory region from one set of nodes to another.
- *
- * Temporarilly set tasks mems_allowed to target nodes of migration,
- * so that the migration code can allocate pages on these nodes.
- *
- * While the mm_struct we are migrating is typically from some
- * other task, the task_struct mems_allowed that we are hacking
- * is for our current task, which must allocate new pages for that
- * migrating memory region.
+ * Migrate memory region from one set of nodes to another. This is
+ * performed asynchronously as it can be called from process migration path
+ * holding locks involved in process management. All mm migrations are
+ * performed in the queued order and can be waited for by flushing
+ * cpuset_migrate_mm_wq.
*/
+struct cpuset_migrate_mm_work {
+ struct work_struct work;
+ struct mm_struct *mm;
+ nodemask_t from;
+ nodemask_t to;
+};
+
+static void cpuset_migrate_mm_workfn(struct work_struct *work)
+{
+ struct cpuset_migrate_mm_work *mwork =
+ container_of(work, struct cpuset_migrate_mm_work, work);
+
+ /* on a wq worker, no need to worry about %current's mems_allowed */
+ do_migrate_pages(mwork->mm, &mwork->from, &mwork->to, MPOL_MF_MOVE_ALL);
+ mmput(mwork->mm);
+ kfree(mwork);
+}
+
static void cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from,
const nodemask_t *to)
{
- struct task_struct *tsk = current;
+ struct cpuset_migrate_mm_work *mwork;
- tsk->mems_allowed = *to;
+ mwork = kzalloc(sizeof(*mwork), GFP_KERNEL);
+ if (mwork) {
+ mwork->mm = mm;
+ mwork->from = *from;
+ mwork->to = *to;
+ INIT_WORK(&mwork->work, cpuset_migrate_mm_workfn);
+ queue_work(cpuset_migrate_mm_wq, &mwork->work);
+ } else {
+ mmput(mm);
+ }
+}
- do_migrate_pages(mm, from, to, MPOL_MF_MOVE_ALL);
-
- rcu_read_lock();
- guarantee_online_mems(task_cs(tsk), &tsk->mems_allowed);
- rcu_read_unlock();
+void cpuset_post_attach_flush(void)
+{
+ flush_workqueue(cpuset_migrate_mm_wq);
}
/*
@@ -1097,7 +1119,8 @@
mpol_rebind_mm(mm, &cs->mems_allowed);
if (migrate)
cpuset_migrate_mm(mm, &cs->old_mems_allowed, &newmems);
- mmput(mm);
+ else
+ mmput(mm);
}
css_task_iter_end(&it);
@@ -1545,11 +1568,11 @@
* @old_mems_allowed is the right nodesets that we
* migrate mm from.
*/
- if (is_memory_migrate(cs)) {
+ if (is_memory_migrate(cs))
cpuset_migrate_mm(mm, &oldcs->old_mems_allowed,
&cpuset_attach_nodemask_to);
- }
- mmput(mm);
+ else
+ mmput(mm);
}
}
@@ -1714,6 +1737,7 @@
mutex_unlock(&cpuset_mutex);
kernfs_unbreak_active_protection(of->kn);
css_put(&cs->css);
+ flush_workqueue(cpuset_migrate_mm_wq);
return retval ?: nbytes;
}
@@ -2359,6 +2383,9 @@
top_cpuset.effective_mems = node_states[N_MEMORY];
register_hotmemory_notifier(&cpuset_track_online_nodes_nb);
+
+ cpuset_migrate_mm_wq = alloc_ordered_workqueue("cpuset_migrate_mm", 0);
+ BUG_ON(!cpuset_migrate_mm_wq);
}
/**
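The migration path is now fire-and-forget: cpuset_migrate_mm() takes over the
caller's mm reference (dropping it itself on allocation failure), and anyone
who must observe the migrations completed flushes the ordered workqueue. A
sketch of the resulting caller contract, with illustrative variable names:

    struct mm_struct *mm = get_task_mm(task);

    if (mm) {
            if (migrate)
                    cpuset_migrate_mm(mm, &from, &to);  /* consumes the mm ref */
            else
                    mmput(mm);
    }
    /* before depending on the new placement: */
    cpuset_post_attach_flush();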
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 5946460..6146148 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -64,8 +64,17 @@
struct task_struct *p = tfc->p;
if (p) {
- tfc->ret = -EAGAIN;
- if (task_cpu(p) != smp_processor_id() || !task_curr(p))
+ /* -EAGAIN */
+ if (task_cpu(p) != smp_processor_id())
+ return;
+
+ /*
+ * Now that we're on right CPU with IRQs disabled, we can test
+ * if we hit the right task without races.
+ */
+
+ tfc->ret = -ESRCH; /* No such (running) process */
+ if (p != current)
return;
}
@@ -92,13 +101,17 @@
.p = p,
.func = func,
.info = info,
- .ret = -ESRCH, /* No such (running) process */
+ .ret = -EAGAIN,
};
+ int ret;
- if (task_curr(p))
- smp_call_function_single(task_cpu(p), remote_function, &data, 1);
+ do {
+ ret = smp_call_function_single(task_cpu(p), remote_function, &data, 1);
+ if (!ret)
+ ret = data.ret;
+ } while (ret == -EAGAIN);
- return data.ret;
+ return ret;
}
/**
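The reworked task_function_call() above encodes a reusable IPI idiom: the
remote function re-validates on the target CPU with IRQs off and returns a
definitive -ESRCH only when it provably hit the wrong task, while the caller
retries -EAGAIN until the task stops migrating. Condensed, with remote_fn and
struct remote_call as illustrative stand-ins for the patch's remote_function():

    struct remote_call data = { .p = p, .ret = -EAGAIN };
    int ret;

    do {
            ret = smp_call_function_single(task_cpu(p), remote_fn, &data, 1);
            if (!ret)
                    ret = data.ret;         /* IPI landed: trust the remote verdict */
    } while (ret == -EAGAIN);               /* task moved mid-call: chase its new CPU */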
@@ -169,19 +182,6 @@
* rely on ctx->is_active and therefore cannot use event_function_call().
* See perf_install_in_context().
*
- * This is because we need a ctx->lock serialized variable (ctx->is_active)
- * to reliably determine if a particular task/context is scheduled in. The
- * task_curr() use in task_function_call() is racy in that a remote context
- * switch is not a single atomic operation.
- *
- * As is, the situation is 'safe' because we set rq->curr before we do the
- * actual context switch. This means that task_curr() will fail early, but
- * we'll continue spinning on ctx->is_active until we've passed
- * perf_event_task_sched_out().
- *
- * Without this ctx->lock serialized variable we could have race where we find
- * the task (and hence the context) would not be active while in fact they are.
- *
* If ctx->nr_events, then ctx->is_active and cpuctx->task_ctx are set.
*/
@@ -212,7 +212,7 @@
*/
if (ctx->task) {
if (ctx->task != current) {
- ret = -EAGAIN;
+ ret = -ESRCH;
goto unlock;
}
@@ -276,10 +276,10 @@
return;
}
-again:
if (task == TASK_TOMBSTONE)
return;
+again:
if (!task_function_call(task, event_function, &efs))
return;
@@ -289,13 +289,15 @@
* a concurrent perf_event_context_sched_out().
*/
task = ctx->task;
- if (task != TASK_TOMBSTONE) {
- if (ctx->is_active) {
- raw_spin_unlock_irq(&ctx->lock);
- goto again;
- }
- func(event, NULL, ctx, data);
+ if (task == TASK_TOMBSTONE) {
+ raw_spin_unlock_irq(&ctx->lock);
+ return;
}
+ if (ctx->is_active) {
+ raw_spin_unlock_irq(&ctx->lock);
+ goto again;
+ }
+ func(event, NULL, ctx, data);
raw_spin_unlock_irq(&ctx->lock);
}
@@ -314,6 +316,7 @@
enum event_type_t {
EVENT_FLEXIBLE = 0x1,
EVENT_PINNED = 0x2,
+ EVENT_TIME = 0x4,
EVENT_ALL = EVENT_FLEXIBLE | EVENT_PINNED,
};
@@ -321,7 +324,13 @@
* perf_sched_events : >0 events exist
* perf_cgroup_events: >0 per-cpu cgroup events exist on this cpu
*/
-struct static_key_deferred perf_sched_events __read_mostly;
+
+static void perf_sched_delayed(struct work_struct *work);
+DEFINE_STATIC_KEY_FALSE(perf_sched_events);
+static DECLARE_DELAYED_WORK(perf_sched_work, perf_sched_delayed);
+static DEFINE_MUTEX(perf_sched_mutex);
+static atomic_t perf_sched_count;
+
static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
static DEFINE_PER_CPU(int, perf_sched_cb_usages);
@@ -1288,16 +1297,18 @@
/*
* Update the total_time_enabled and total_time_running fields for a event.
- * The caller of this function needs to hold the ctx->lock.
*/
static void update_event_times(struct perf_event *event)
{
struct perf_event_context *ctx = event->ctx;
u64 run_end;
+ lockdep_assert_held(&ctx->lock);
+
if (event->state < PERF_EVENT_STATE_INACTIVE ||
event->group_leader->state < PERF_EVENT_STATE_INACTIVE)
return;
+
/*
* in cgroup mode, time_enabled represents
* the time the event was enabled AND active
@@ -1645,7 +1656,7 @@
static bool is_orphaned_event(struct perf_event *event)
{
- return event->state == PERF_EVENT_STATE_EXIT;
+ return event->state == PERF_EVENT_STATE_DEAD;
}
static inline int pmu_filter_match(struct perf_event *event)
@@ -1690,14 +1701,14 @@
perf_pmu_disable(event->pmu);
+ event->tstamp_stopped = tstamp;
+ event->pmu->del(event, 0);
+ event->oncpu = -1;
event->state = PERF_EVENT_STATE_INACTIVE;
if (event->pending_disable) {
event->pending_disable = 0;
event->state = PERF_EVENT_STATE_OFF;
}
- event->tstamp_stopped = tstamp;
- event->pmu->del(event, 0);
- event->oncpu = -1;
if (!is_software_event(event))
cpuctx->active_oncpu--;
@@ -1732,7 +1743,6 @@
}
#define DETACH_GROUP 0x01UL
-#define DETACH_STATE 0x02UL
/*
* Cross CPU call to remove a performance event
@@ -1752,8 +1762,6 @@
if (flags & DETACH_GROUP)
perf_group_detach(event);
list_del_event(event, ctx);
- if (flags & DETACH_STATE)
- event->state = PERF_EVENT_STATE_EXIT;
if (!ctx->nr_events && ctx->is_active) {
ctx->is_active = 0;
@@ -2063,14 +2071,27 @@
event->tstamp_stopped = tstamp;
}
-static void task_ctx_sched_out(struct perf_cpu_context *cpuctx,
- struct perf_event_context *ctx);
+static void ctx_sched_out(struct perf_event_context *ctx,
+ struct perf_cpu_context *cpuctx,
+ enum event_type_t event_type);
static void
ctx_sched_in(struct perf_event_context *ctx,
struct perf_cpu_context *cpuctx,
enum event_type_t event_type,
struct task_struct *task);
+static void task_ctx_sched_out(struct perf_cpu_context *cpuctx,
+ struct perf_event_context *ctx)
+{
+ if (!cpuctx->task_ctx)
+ return;
+
+ if (WARN_ON_ONCE(ctx != cpuctx->task_ctx))
+ return;
+
+ ctx_sched_out(ctx, cpuctx, EVENT_ALL);
+}
+
static void perf_event_sched_in(struct perf_cpu_context *cpuctx,
struct perf_event_context *ctx,
struct task_struct *task)
@@ -2097,49 +2118,68 @@
/*
* Cross CPU call to install and enable a performance event
*
- * Must be called with ctx->mutex held
+ * Very similar to remote_function() + event_function() but cannot assume that
+ * things like ctx->is_active and cpuctx->task_ctx are set.
*/
static int __perf_install_in_context(void *info)
{
- struct perf_event_context *ctx = info;
+ struct perf_event *event = info;
+ struct perf_event_context *ctx = event->ctx;
struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
struct perf_event_context *task_ctx = cpuctx->task_ctx;
+ bool activate = true;
+ int ret = 0;
raw_spin_lock(&cpuctx->ctx.lock);
if (ctx->task) {
raw_spin_lock(&ctx->lock);
- /*
- * If we hit the 'wrong' task, we've since scheduled and
- * everything should be sorted, nothing to do!
- */
task_ctx = ctx;
- if (ctx->task != current)
+
+ /* If we're on the wrong CPU, try again */
+ if (task_cpu(ctx->task) != smp_processor_id()) {
+ ret = -ESRCH;
goto unlock;
+ }
/*
- * If task_ctx is set, it had better be to us.
+ * If we're on the right CPU, see if the task we target is
+ * current, if not we don't have to activate the ctx, a future
+ * context switch will do that for us.
*/
- WARN_ON_ONCE(cpuctx->task_ctx != ctx && cpuctx->task_ctx);
+ if (ctx->task != current)
+ activate = false;
+ else
+ WARN_ON_ONCE(cpuctx->task_ctx && cpuctx->task_ctx != ctx);
+
} else if (task_ctx) {
raw_spin_lock(&task_ctx->lock);
}
- ctx_resched(cpuctx, task_ctx);
+ if (activate) {
+ ctx_sched_out(ctx, cpuctx, EVENT_TIME);
+ add_event_to_ctx(event, ctx);
+ ctx_resched(cpuctx, task_ctx);
+ } else {
+ add_event_to_ctx(event, ctx);
+ }
+
unlock:
perf_ctx_unlock(cpuctx, task_ctx);
- return 0;
+ return ret;
}
/*
- * Attach a performance event to a context
+ * Attach a performance event to a context.
+ *
+ * Very similar to event_function_call, see comment there.
*/
static void
perf_install_in_context(struct perf_event_context *ctx,
struct perf_event *event,
int cpu)
{
- struct task_struct *task = NULL;
+ struct task_struct *task = READ_ONCE(ctx->task);
lockdep_assert_held(&ctx->mutex);
@@ -2147,40 +2187,46 @@
if (event->cpu != -1)
event->cpu = cpu;
+ if (!task) {
+ cpu_function_call(cpu, __perf_install_in_context, event);
+ return;
+ }
+
+ /*
+ * Should not happen, we validate the ctx is still alive before calling.
+ */
+ if (WARN_ON_ONCE(task == TASK_TOMBSTONE))
+ return;
+
/*
* Installing events is tricky because we cannot rely on ctx->is_active
* to be set in case this is the nr_events 0 -> 1 transition.
- *
- * So what we do is we add the event to the list here, which will allow
- * a future context switch to DTRT and then send a racy IPI. If the IPI
- * fails to hit the right task, this means a context switch must have
- * happened and that will have taken care of business.
*/
+again:
+ /*
+ * Cannot use task_function_call() because we need to run on the task's
+ * CPU regardless of whether it's current or not.
+ */
+ if (!cpu_function_call(task_cpu(task), __perf_install_in_context, event))
+ return;
+
raw_spin_lock_irq(&ctx->lock);
task = ctx->task;
- /*
- * Worse, we cannot even rely on the ctx actually existing anymore. If
- * between find_get_context() and perf_install_in_context() the task
- * went through perf_event_exit_task() its dead and we should not be
- * adding new events.
- */
- if (task == TASK_TOMBSTONE) {
+ if (WARN_ON_ONCE(task == TASK_TOMBSTONE)) {
+ /*
+ * Cannot happen because we already checked above (which also
+ * cannot happen), and we hold ctx->mutex, which serializes us
+ * against perf_event_exit_task_context().
+ */
raw_spin_unlock_irq(&ctx->lock);
return;
}
- update_context_time(ctx);
- /*
- * Update cgrp time only if current cgrp matches event->cgrp.
- * Must be done before calling add_event_to_ctx().
- */
- update_cgrp_time_from_event(event);
- add_event_to_ctx(event, ctx);
raw_spin_unlock_irq(&ctx->lock);
-
- if (task)
- task_function_call(task, __perf_install_in_context, ctx);
- else
- cpu_function_call(cpu, __perf_install_in_context, ctx);
+ /*
+ * Since !ctx->is_active doesn't mean anything, we must IPI
+ * unconditionally.
+ */
+ goto again;
}
/*
@@ -2219,17 +2265,18 @@
event->state <= PERF_EVENT_STATE_ERROR)
return;
- update_context_time(ctx);
+ if (ctx->is_active)
+ ctx_sched_out(ctx, cpuctx, EVENT_TIME);
+
__perf_event_mark_enabled(event);
if (!ctx->is_active)
return;
if (!event_filter_match(event)) {
- if (is_cgroup_event(event)) {
- perf_cgroup_set_timestamp(current, ctx); // XXX ?
+ if (is_cgroup_event(event))
perf_cgroup_defer_enabled(event);
- }
+ ctx_sched_in(ctx, cpuctx, EVENT_TIME, current);
return;
}
@@ -2237,8 +2284,10 @@
* If the event is in a group and isn't the group leader,
* then don't put it on unless the group is on.
*/
- if (leader != event && leader->state != PERF_EVENT_STATE_ACTIVE)
+ if (leader != event && leader->state != PERF_EVENT_STATE_ACTIVE) {
+ ctx_sched_in(ctx, cpuctx, EVENT_TIME, current);
return;
+ }
task_ctx = cpuctx->task_ctx;
if (ctx->task)
@@ -2344,24 +2393,33 @@
}
ctx->is_active &= ~event_type;
+ if (!(ctx->is_active & EVENT_ALL))
+ ctx->is_active = 0;
+
if (ctx->task) {
WARN_ON_ONCE(cpuctx->task_ctx != ctx);
if (!ctx->is_active)
cpuctx->task_ctx = NULL;
}
- update_context_time(ctx);
- update_cgrp_time_from_cpuctx(cpuctx);
- if (!ctx->nr_active)
+ is_active ^= ctx->is_active; /* changed bits */
+
+ if (is_active & EVENT_TIME) {
+ /* update (and stop) ctx time */
+ update_context_time(ctx);
+ update_cgrp_time_from_cpuctx(cpuctx);
+ }
+
+ if (!ctx->nr_active || !(is_active & EVENT_ALL))
return;
perf_pmu_disable(ctx->pmu);
- if ((is_active & EVENT_PINNED) && (event_type & EVENT_PINNED)) {
+ if (is_active & EVENT_PINNED) {
list_for_each_entry(event, &ctx->pinned_groups, group_entry)
group_sched_out(event, cpuctx, ctx);
}
- if ((is_active & EVENT_FLEXIBLE) && (event_type & EVENT_FLEXIBLE)) {
+ if (is_active & EVENT_FLEXIBLE) {
list_for_each_entry(event, &ctx->flexible_groups, group_entry)
group_sched_out(event, cpuctx, ctx);
}
@@ -2641,18 +2699,6 @@
perf_cgroup_sched_out(task, next);
}
-static void task_ctx_sched_out(struct perf_cpu_context *cpuctx,
- struct perf_event_context *ctx)
-{
- if (!cpuctx->task_ctx)
- return;
-
- if (WARN_ON_ONCE(ctx != cpuctx->task_ctx))
- return;
-
- ctx_sched_out(ctx, cpuctx, EVENT_ALL);
-}
-
/*
* Called with IRQs disabled
*/
@@ -2735,7 +2781,7 @@
if (likely(!ctx->nr_events))
return;
- ctx->is_active |= event_type;
+ ctx->is_active |= (event_type | EVENT_TIME);
if (ctx->task) {
if (!is_active)
cpuctx->task_ctx = ctx;
@@ -2743,18 +2789,24 @@
WARN_ON_ONCE(cpuctx->task_ctx != ctx);
}
- now = perf_clock();
- ctx->timestamp = now;
- perf_cgroup_set_timestamp(task, ctx);
+ is_active ^= ctx->is_active; /* changed bits */
+
+ if (is_active & EVENT_TIME) {
+ /* start ctx time */
+ now = perf_clock();
+ ctx->timestamp = now;
+ perf_cgroup_set_timestamp(task, ctx);
+ }
+
/*
* First go through the list and put on any pinned groups
* in order to give them the best chance of going on.
*/
- if (!(is_active & EVENT_PINNED) && (event_type & EVENT_PINNED))
+ if (is_active & EVENT_PINNED)
ctx_pinned_sched_in(ctx, cpuctx);
/* Then walk through the lower prio flexible groups */
- if (!(is_active & EVENT_FLEXIBLE) && (event_type & EVENT_FLEXIBLE))
+ if (is_active & EVENT_FLEXIBLE)
ctx_flexible_sched_in(ctx, cpuctx);
}
@@ -3120,6 +3172,7 @@
cpuctx = __get_cpu_context(ctx);
perf_ctx_lock(cpuctx, ctx);
+ ctx_sched_out(ctx, cpuctx, EVENT_TIME);
list_for_each_entry(event, &ctx->event_list, event_entry)
enabled |= event_enable_on_exec(event, ctx);
@@ -3537,12 +3590,22 @@
if (has_branch_stack(event))
dec = true;
- if (dec)
- static_key_slow_dec_deferred(&perf_sched_events);
+ if (dec) {
+ if (!atomic_add_unless(&perf_sched_count, -1, 1))
+ schedule_delayed_work(&perf_sched_work, HZ);
+ }
unaccount_event_cpu(event, event->cpu);
}
+static void perf_sched_delayed(struct work_struct *work)
+{
+ mutex_lock(&perf_sched_mutex);
+ if (atomic_dec_and_test(&perf_sched_count))
+ static_branch_disable(&perf_sched_events);
+ mutex_unlock(&perf_sched_mutex);
+}
+
/*
* The following implement mutual exclusion of events on "exclusive" pmus
* (PERF_PMU_CAP_EXCLUSIVE). Such pmus can only have one event scheduled
@@ -3752,30 +3815,42 @@
*/
int perf_event_release_kernel(struct perf_event *event)
{
- struct perf_event_context *ctx;
+ struct perf_event_context *ctx = event->ctx;
struct perf_event *child, *tmp;
+ /*
+ * If we got here through err_file: fput(event_file); we will not have
+ * attached to a context yet.
+ */
+ if (!ctx) {
+ WARN_ON_ONCE(event->attach_state &
+ (PERF_ATTACH_CONTEXT|PERF_ATTACH_GROUP));
+ goto no_ctx;
+ }
+
if (!is_kernel_event(event))
perf_remove_from_owner(event);
ctx = perf_event_ctx_lock(event);
WARN_ON_ONCE(ctx->parent_ctx);
- perf_remove_from_context(event, DETACH_GROUP | DETACH_STATE);
- perf_event_ctx_unlock(event, ctx);
+ perf_remove_from_context(event, DETACH_GROUP);
+ raw_spin_lock_irq(&ctx->lock);
/*
- * At this point we must have event->state == PERF_EVENT_STATE_EXIT,
- * either from the above perf_remove_from_context() or through
- * perf_event_exit_event().
+ * Mark this event as STATE_DEAD, there is no external reference to it
+ * anymore.
*
- * Therefore, anybody acquiring event->child_mutex after the below
- * loop _must_ also see this, most importantly inherit_event() which
- * will avoid placing more children on the list.
+ * Anybody acquiring event->child_mutex after the below loop _must_
+ * also see this, most importantly inherit_event() which will avoid
+ * placing more children on the list.
*
* Thus this guarantees that we will in fact observe and kill _ALL_
* child events.
*/
- WARN_ON_ONCE(event->state != PERF_EVENT_STATE_EXIT);
+ event->state = PERF_EVENT_STATE_DEAD;
+ raw_spin_unlock_irq(&ctx->lock);
+
+ perf_event_ctx_unlock(event, ctx);
again:
mutex_lock(&event->child_mutex);
@@ -3830,8 +3905,8 @@
}
mutex_unlock(&event->child_mutex);
- /* Must be the last reference */
- put_event(event);
+no_ctx:
+ put_event(event); /* Must be the 'last' reference */
return 0;
}
EXPORT_SYMBOL_GPL(perf_event_release_kernel);
@@ -3988,7 +4063,7 @@
{
bool no_children;
- if (event->state != PERF_EVENT_STATE_EXIT)
+ if (event->state > PERF_EVENT_STATE_EXIT)
return false;
mutex_lock(&event->child_mutex);
@@ -7769,8 +7844,28 @@
if (is_cgroup_event(event))
inc = true;
- if (inc)
- static_key_slow_inc(&perf_sched_events.key);
+ if (inc) {
+ if (atomic_inc_not_zero(&perf_sched_count))
+ goto enabled;
+
+ mutex_lock(&perf_sched_mutex);
+ if (!atomic_read(&perf_sched_count)) {
+ static_branch_enable(&perf_sched_events);
+ /*
+ * Guarantee that all CPUs observe the key change and
+ * call the perf scheduling hooks before proceeding to
+ * install events that need them.
+ */
+ synchronize_sched();
+ }
+ /*
+ * Now that we have waited for the sync_sched(), allow further
+ * increments to by-pass the mutex.
+ */
+ atomic_inc(&perf_sched_count);
+ mutex_unlock(&perf_sched_mutex);
+ }
+enabled:
account_event_cpu(event, event->cpu);
}
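With jump_label_rate_limit() gone, the patch open-codes the same debouncing
around a plain static branch: a refcount with a locked slow path on the 0 -> 1
transition, and an HZ-delayed work on the final put. The enable side reduces
to this pattern (count, lock and key are stand-ins for perf_sched_count and
friends):

    if (!atomic_inc_not_zero(&count)) {     /* slow path: 0 -> 1 transition */
            mutex_lock(&lock);
            if (!atomic_read(&count)) {
                    static_branch_enable(&key);
                    synchronize_sched();    /* all CPUs see the key before use */
            }
            atomic_inc(&count);             /* later increments take the fast path */
            mutex_unlock(&lock);
    }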
@@ -8389,10 +8484,19 @@
if (move_group) {
gctx = group_leader->ctx;
mutex_lock_double(&gctx->mutex, &ctx->mutex);
+ if (gctx->task == TASK_TOMBSTONE) {
+ err = -ESRCH;
+ goto err_locked;
+ }
} else {
mutex_lock(&ctx->mutex);
}
+ if (ctx->task == TASK_TOMBSTONE) {
+ err = -ESRCH;
+ goto err_locked;
+ }
+
if (!perf_event_validate_size(event)) {
err = -E2BIG;
goto err_locked;
@@ -8509,7 +8613,12 @@
perf_unpin_context(ctx);
put_ctx(ctx);
err_alloc:
- free_event(event);
+ /*
+ * If event_file is set, the fput() above will have called ->release()
+ * and that will take care of freeing the event.
+ */
+ if (!event_file)
+ free_event(event);
err_cpus:
put_online_cpus();
err_task:
@@ -8563,12 +8672,14 @@
WARN_ON_ONCE(ctx->parent_ctx);
mutex_lock(&ctx->mutex);
+ if (ctx->task == TASK_TOMBSTONE) {
+ err = -ESRCH;
+ goto err_unlock;
+ }
+
if (!exclusive_event_installable(event, ctx)) {
- mutex_unlock(&ctx->mutex);
- perf_unpin_context(ctx);
- put_ctx(ctx);
err = -EBUSY;
- goto err_free;
+ goto err_unlock;
}
perf_install_in_context(ctx, event, cpu);
@@ -8577,6 +8688,10 @@
return event;
+err_unlock:
+ mutex_unlock(&ctx->mutex);
+ perf_unpin_context(ctx);
+ put_ctx(ctx);
err_free:
free_event(event);
err:
@@ -8695,7 +8810,7 @@
if (parent_event)
perf_group_detach(child_event);
list_del_event(child_event, child_ctx);
- child_event->state = PERF_EVENT_STATE_EXIT; /* see perf_event_release_kernel() */
+ child_event->state = PERF_EVENT_STATE_EXIT; /* is_event_hup() */
raw_spin_unlock_irq(&child_ctx->lock);
/*
@@ -9206,7 +9321,7 @@
struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu);
mutex_lock(&swhash->hlist_mutex);
- if (swhash->hlist_refcount > 0) {
+ if (swhash->hlist_refcount > 0 && !swevent_hlist_deref(swhash)) {
struct swevent_hlist *hlist;
hlist = kzalloc_node(sizeof(*hlist), GFP_KERNEL, cpu_to_node(cpu));
@@ -9282,11 +9397,9 @@
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_UP_PREPARE:
- case CPU_DOWN_FAILED:
perf_event_init_cpu(cpu);
break;
- case CPU_UP_CANCELED:
case CPU_DOWN_PREPARE:
perf_event_exit_cpu(cpu);
break;
@@ -9315,9 +9428,6 @@
ret = init_hw_breakpoint();
WARN(ret, "hw_breakpoint initialization failed with: %d", ret);
- /* do not patch jump label more than once per second */
- jump_label_rate_limit(&perf_sched_events, HZ);
-
/*
* Build time assertion that we keep the data_head at the intended
* location. IOW, validation we got the __reserved[] size right.
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 60ace56..716547f 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -292,7 +292,7 @@
#define __classhashfn(key) hash_long((unsigned long)key, CLASSHASH_BITS)
#define classhashentry(key) (classhash_table + __classhashfn((key)))
-static struct list_head classhash_table[CLASSHASH_SIZE];
+static struct hlist_head classhash_table[CLASSHASH_SIZE];
/*
* We put the lock dependency chains into a hash-table as well, to cache
@@ -303,7 +303,7 @@
#define __chainhashfn(chain) hash_long(chain, CHAINHASH_BITS)
#define chainhashentry(chain) (chainhash_table + __chainhashfn((chain)))
-static struct list_head chainhash_table[CHAINHASH_SIZE];
+static struct hlist_head chainhash_table[CHAINHASH_SIZE];
/*
* The hash key of the lock dependency chains is a hash itself too:
@@ -666,7 +666,7 @@
look_up_lock_class(struct lockdep_map *lock, unsigned int subclass)
{
struct lockdep_subclass_key *key;
- struct list_head *hash_head;
+ struct hlist_head *hash_head;
struct lock_class *class;
#ifdef CONFIG_DEBUG_LOCKDEP
@@ -719,7 +719,7 @@
if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
return NULL;
- list_for_each_entry_rcu(class, hash_head, hash_entry) {
+ hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
if (class->key == key) {
/*
* Huh! same key, different name? Did someone trample
@@ -742,7 +742,7 @@
register_lock_class(struct lockdep_map *lock, unsigned int subclass, int force)
{
struct lockdep_subclass_key *key;
- struct list_head *hash_head;
+ struct hlist_head *hash_head;
struct lock_class *class;
DEBUG_LOCKS_WARN_ON(!irqs_disabled());
@@ -774,7 +774,7 @@
* We have to do the hash-walk again, to avoid races
* with another CPU:
*/
- list_for_each_entry_rcu(class, hash_head, hash_entry) {
+ hlist_for_each_entry_rcu(class, hash_head, hash_entry) {
if (class->key == key)
goto out_unlock_set;
}
@@ -805,7 +805,7 @@
* We use RCU's safe list-add method to make
* parallel walking of the hash-list safe:
*/
- list_add_tail_rcu(&class->hash_entry, hash_head);
+ hlist_add_head_rcu(&class->hash_entry, hash_head);
/*
* Add it to the global list of classes:
*/
@@ -1822,7 +1822,7 @@
*/
static int
check_prev_add(struct task_struct *curr, struct held_lock *prev,
- struct held_lock *next, int distance, int trylock_loop)
+ struct held_lock *next, int distance, int *stack_saved)
{
struct lock_list *entry;
int ret;
@@ -1883,8 +1883,11 @@
}
}
- if (!trylock_loop && !save_trace(&trace))
- return 0;
+ if (!*stack_saved) {
+ if (!save_trace(&trace))
+ return 0;
+ *stack_saved = 1;
+ }
/*
* Ok, all validations passed, add the new lock
@@ -1907,6 +1910,8 @@
* Debugging printouts:
*/
if (verbose(hlock_class(prev)) || verbose(hlock_class(next))) {
+ /* We drop the graph lock, so another thread can overwrite trace. */
+ *stack_saved = 0;
graph_unlock();
printk("\n new dependency: ");
print_lock_name(hlock_class(prev));
@@ -1929,7 +1934,7 @@
check_prevs_add(struct task_struct *curr, struct held_lock *next)
{
int depth = curr->lockdep_depth;
- int trylock_loop = 0;
+ int stack_saved = 0;
struct held_lock *hlock;
/*
@@ -1956,7 +1961,7 @@
*/
if (hlock->read != 2 && hlock->check) {
if (!check_prev_add(curr, hlock, next,
- distance, trylock_loop))
+ distance, &stack_saved))
return 0;
/*
* Stop after the first non-trylock entry,
@@ -1979,7 +1984,6 @@
if (curr->held_locks[depth].irq_context !=
curr->held_locks[depth-1].irq_context)
break;
- trylock_loop = 1;
}
return 1;
out_bug:
@@ -2017,7 +2021,7 @@
u64 chain_key)
{
struct lock_class *class = hlock_class(hlock);
- struct list_head *hash_head = chainhashentry(chain_key);
+ struct hlist_head *hash_head = chainhashentry(chain_key);
struct lock_chain *chain;
struct held_lock *hlock_curr;
int i, j;
@@ -2033,7 +2037,7 @@
* We can walk it lock-free, because entries only get added
* to the hash:
*/
- list_for_each_entry_rcu(chain, hash_head, entry) {
+ hlist_for_each_entry_rcu(chain, hash_head, entry) {
if (chain->chain_key == chain_key) {
cache_hit:
debug_atomic_inc(chain_lookup_hits);
@@ -2057,7 +2061,7 @@
/*
* We have to walk the chain again locked - to avoid duplicates:
*/
- list_for_each_entry(chain, hash_head, entry) {
+ hlist_for_each_entry(chain, hash_head, entry) {
if (chain->chain_key == chain_key) {
graph_unlock();
goto cache_hit;
@@ -2091,7 +2095,7 @@
}
chain_hlocks[chain->base + j] = class - lock_classes;
}
- list_add_tail_rcu(&chain->entry, hash_head);
+ hlist_add_head_rcu(&chain->entry, hash_head);
debug_atomic_inc(chain_lookup_misses);
inc_chains();
@@ -3875,7 +3879,7 @@
nr_process_chains = 0;
debug_locks = 1;
for (i = 0; i < CHAINHASH_SIZE; i++)
- INIT_LIST_HEAD(chainhash_table + i);
+ INIT_HLIST_HEAD(chainhash_table + i);
raw_local_irq_restore(flags);
}
@@ -3894,7 +3898,7 @@
/*
* Unhash the class and remove it from the all_lock_classes list:
*/
- list_del_rcu(&class->hash_entry);
+ hlist_del_rcu(&class->hash_entry);
list_del_rcu(&class->lock_entry);
RCU_INIT_POINTER(class->key, NULL);
@@ -3917,7 +3921,7 @@
void lockdep_free_key_range(void *start, unsigned long size)
{
struct lock_class *class;
- struct list_head *head;
+ struct hlist_head *head;
unsigned long flags;
int i;
int locked;
@@ -3930,9 +3934,7 @@
*/
for (i = 0; i < CLASSHASH_SIZE; i++) {
head = classhash_table + i;
- if (list_empty(head))
- continue;
- list_for_each_entry_rcu(class, head, hash_entry) {
+ hlist_for_each_entry_rcu(class, head, hash_entry) {
if (within(class->key, start, size))
zap_class(class);
else if (within(class->name, start, size))
@@ -3962,7 +3964,7 @@
void lockdep_reset_lock(struct lockdep_map *lock)
{
struct lock_class *class;
- struct list_head *head;
+ struct hlist_head *head;
unsigned long flags;
int i, j;
int locked;
@@ -3987,9 +3989,7 @@
locked = graph_lock();
for (i = 0; i < CLASSHASH_SIZE; i++) {
head = classhash_table + i;
- if (list_empty(head))
- continue;
- list_for_each_entry_rcu(class, head, hash_entry) {
+ hlist_for_each_entry_rcu(class, head, hash_entry) {
int match = 0;
for (j = 0; j < NR_LOCKDEP_CACHING_CLASSES; j++)
@@ -4027,10 +4027,10 @@
return;
for (i = 0; i < CLASSHASH_SIZE; i++)
- INIT_LIST_HEAD(classhash_table + i);
+ INIT_HLIST_HEAD(classhash_table + i);
for (i = 0; i < CHAINHASH_SIZE; i++)
- INIT_LIST_HEAD(chainhash_table + i);
+ INIT_HLIST_HEAD(chainhash_table + i);
lockdep_initialized = 1;
}
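The list_head -> hlist_head swap buys two things: an hlist bucket is a single
pointer, halving these large static hash tables, and an empty bucket is simply
NULL, which is why the list_empty() pre-checks could be dropped. The idioms
map one-to-one (hash_entry here is a struct hlist_node member):

    static struct hlist_head hash[HASH_SIZE];       /* one pointer per bucket */

    INIT_HLIST_HEAD(&hash[i]);
    hlist_add_head_rcu(&class->hash_entry, &hash[i]);
    hlist_for_each_entry_rcu(class, &hash[i], hash_entry)
            /* lock-free walk under RCU */;
    hlist_del_rcu(&class->hash_entry);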
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 70ee377..b981a7b 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -114,7 +114,7 @@
static void devm_memremap_release(struct device *dev, void *res)
{
- memunmap(res);
+ memunmap(*(void **)res);
}
static int devm_memremap_match(struct device *dev, void *res, void *match_data)
@@ -136,8 +136,10 @@
if (addr) {
*ptr = addr;
devres_add(dev, ptr);
- } else
+ } else {
devres_free(ptr);
+ return ERR_PTR(-ENXIO);
+ }
return addr;
}
@@ -150,7 +152,7 @@
}
EXPORT_SYMBOL(devm_memunmap);
-pfn_t phys_to_pfn_t(phys_addr_t addr, unsigned long flags)
+pfn_t phys_to_pfn_t(phys_addr_t addr, u64 flags)
{
return __pfn_to_pfn_t(addr >> PAGE_SHIFT, flags);
}
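The release fix hinges on the devres layout: devres_alloc() returns storage
for a void *, the mapping address is stored in that slot, and the release
callback is later handed the slot itself, so it must dereference rather than
unmap the slot pointer. The corrected pairing, as in the patch:

    void **ptr = devres_alloc(devm_memremap_release, sizeof(*ptr), GFP_KERNEL);

    *ptr = addr;                    /* slot holds the mapping */
    devres_add(dev, ptr);

    /* ... and on release, 'res' is the slot, not the mapping: */
    memunmap(*(void **)res);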
diff --git a/kernel/module.c b/kernel/module.c
index 8358f46..794ebe8 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -303,6 +303,9 @@
struct _ddebug *debug;
unsigned int num_debug;
bool sig_ok;
+#ifdef CONFIG_KALLSYMS
+ unsigned long mod_kallsyms_init_off;
+#endif
struct {
unsigned int sym, str, mod, vers, info, pcpu;
} index;
@@ -981,6 +984,8 @@
mod->exit();
blocking_notifier_call_chain(&module_notify_list,
MODULE_STATE_GOING, mod);
+ ftrace_release_mod(mod);
+
async_synchronize_full();
/* Store the name of the last unloaded module for diagnostic purposes */
@@ -2480,10 +2485,21 @@
strsect->sh_flags |= SHF_ALLOC;
strsect->sh_entsize = get_offset(mod, &mod->init_layout.size, strsect,
info->index.str) | INIT_OFFSET_MASK;
- mod->init_layout.size = debug_align(mod->init_layout.size);
pr_debug("\t%s\n", info->secstrings + strsect->sh_name);
+
+ /* We'll tack temporary mod_kallsyms on the end. */
+ mod->init_layout.size = ALIGN(mod->init_layout.size,
+ __alignof__(struct mod_kallsyms));
+ info->mod_kallsyms_init_off = mod->init_layout.size;
+ mod->init_layout.size += sizeof(struct mod_kallsyms);
+ mod->init_layout.size = debug_align(mod->init_layout.size);
}
+/*
+ * We use the full symtab and strtab which layout_symtab arranged to
+ * be appended to the init section. Later we switch to the cut-down
+ * core-only ones.
+ */
static void add_kallsyms(struct module *mod, const struct load_info *info)
{
unsigned int i, ndst;
@@ -2492,29 +2508,34 @@
char *s;
Elf_Shdr *symsec = &info->sechdrs[info->index.sym];
- mod->symtab = (void *)symsec->sh_addr;
- mod->num_symtab = symsec->sh_size / sizeof(Elf_Sym);
+ /* Set up to point into init section. */
+ mod->kallsyms = mod->init_layout.base + info->mod_kallsyms_init_off;
+
+ mod->kallsyms->symtab = (void *)symsec->sh_addr;
+ mod->kallsyms->num_symtab = symsec->sh_size / sizeof(Elf_Sym);
/* Make sure we get permanent strtab: don't use info->strtab. */
- mod->strtab = (void *)info->sechdrs[info->index.str].sh_addr;
+ mod->kallsyms->strtab = (void *)info->sechdrs[info->index.str].sh_addr;
/* Set types up while we still have access to sections. */
- for (i = 0; i < mod->num_symtab; i++)
- mod->symtab[i].st_info = elf_type(&mod->symtab[i], info);
+ for (i = 0; i < mod->kallsyms->num_symtab; i++)
+ mod->kallsyms->symtab[i].st_info
+ = elf_type(&mod->kallsyms->symtab[i], info);
- mod->core_symtab = dst = mod->core_layout.base + info->symoffs;
- mod->core_strtab = s = mod->core_layout.base + info->stroffs;
- src = mod->symtab;
- for (ndst = i = 0; i < mod->num_symtab; i++) {
+ /* Now populate the cut down core kallsyms for after init. */
+ mod->core_kallsyms.symtab = dst = mod->core_layout.base + info->symoffs;
+ mod->core_kallsyms.strtab = s = mod->core_layout.base + info->stroffs;
+ src = mod->kallsyms->symtab;
+ for (ndst = i = 0; i < mod->kallsyms->num_symtab; i++) {
if (i == 0 ||
is_core_symbol(src+i, info->sechdrs, info->hdr->e_shnum,
info->index.pcpu)) {
dst[ndst] = src[i];
- dst[ndst++].st_name = s - mod->core_strtab;
- s += strlcpy(s, &mod->strtab[src[i].st_name],
+ dst[ndst++].st_name = s - mod->core_kallsyms.strtab;
+ s += strlcpy(s, &mod->kallsyms->strtab[src[i].st_name],
KSYM_NAME_LEN) + 1;
}
}
- mod->core_num_syms = ndst;
+ mod->core_kallsyms.num_symtab = ndst;
}
#else
static inline void layout_symtab(struct module *mod, struct load_info *info)
@@ -3263,9 +3284,8 @@
module_put(mod);
trim_init_extable(mod);
#ifdef CONFIG_KALLSYMS
- mod->num_symtab = mod->core_num_syms;
- mod->symtab = mod->core_symtab;
- mod->strtab = mod->core_strtab;
+ /* Switch to core kallsyms now init is done: kallsyms may be walking! */
+ rcu_assign_pointer(mod->kallsyms, &mod->core_kallsyms);
#endif
mod_tree_remove_init(mod);
disable_ro_nx(&mod->init_layout);
@@ -3295,6 +3315,7 @@
module_put(mod);
blocking_notifier_call_chain(&module_notify_list,
MODULE_STATE_GOING, mod);
+ ftrace_release_mod(mod);
free_module(mod);
wake_up_all(&module_wq);
return ret;
@@ -3371,6 +3392,7 @@
mod->state = MODULE_STATE_COMING;
mutex_unlock(&module_mutex);
+ ftrace_module_enable(mod);
blocking_notifier_call_chain(&module_notify_list,
MODULE_STATE_COMING, mod);
return 0;
@@ -3496,7 +3518,7 @@
/* Module is ready to execute: parsing args may do that. */
after_dashes = parse_args(mod->name, mod->args, mod->kp, mod->num_kp,
- -32768, 32767, NULL,
+ -32768, 32767, mod,
unknown_module_param_cb);
if (IS_ERR(after_dashes)) {
err = PTR_ERR(after_dashes);
@@ -3627,6 +3649,11 @@
&& (str[2] == '\0' || str[2] == '.');
}
+static const char *symname(struct mod_kallsyms *kallsyms, unsigned int symnum)
+{
+ return kallsyms->strtab + kallsyms->symtab[symnum].st_name;
+}
+
static const char *get_ksymbol(struct module *mod,
unsigned long addr,
unsigned long *size,
@@ -3634,6 +3661,7 @@
{
unsigned int i, best = 0;
unsigned long nextval;
+ struct mod_kallsyms *kallsyms = rcu_dereference_sched(mod->kallsyms);
/* At worse, next value is at end of module */
if (within_module_init(addr, mod))
@@ -3643,32 +3671,32 @@
/* Scan for closest preceding symbol, and next symbol. (ELF
starts real symbols at 1). */
- for (i = 1; i < mod->num_symtab; i++) {
- if (mod->symtab[i].st_shndx == SHN_UNDEF)
+ for (i = 1; i < kallsyms->num_symtab; i++) {
+ if (kallsyms->symtab[i].st_shndx == SHN_UNDEF)
continue;
/* We ignore unnamed symbols: they're uninformative
* and inserted at a whim. */
- if (mod->symtab[i].st_value <= addr
- && mod->symtab[i].st_value > mod->symtab[best].st_value
- && *(mod->strtab + mod->symtab[i].st_name) != '\0'
- && !is_arm_mapping_symbol(mod->strtab + mod->symtab[i].st_name))
+ if (*symname(kallsyms, i) == '\0'
+ || is_arm_mapping_symbol(symname(kallsyms, i)))
+ continue;
+
+ if (kallsyms->symtab[i].st_value <= addr
+ && kallsyms->symtab[i].st_value > kallsyms->symtab[best].st_value)
best = i;
- if (mod->symtab[i].st_value > addr
- && mod->symtab[i].st_value < nextval
- && *(mod->strtab + mod->symtab[i].st_name) != '\0'
- && !is_arm_mapping_symbol(mod->strtab + mod->symtab[i].st_name))
- nextval = mod->symtab[i].st_value;
+ if (kallsyms->symtab[i].st_value > addr
+ && kallsyms->symtab[i].st_value < nextval)
+ nextval = kallsyms->symtab[i].st_value;
}
if (!best)
return NULL;
if (size)
- *size = nextval - mod->symtab[best].st_value;
+ *size = nextval - kallsyms->symtab[best].st_value;
if (offset)
- *offset = addr - mod->symtab[best].st_value;
- return mod->strtab + mod->symtab[best].st_name;
+ *offset = addr - kallsyms->symtab[best].st_value;
+ return symname(kallsyms, best);
}
/* For kallsyms to ask for address resolution. NULL means not found. Careful
@@ -3758,19 +3786,21 @@
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
+ struct mod_kallsyms *kallsyms;
+
if (mod->state == MODULE_STATE_UNFORMED)
continue;
- if (symnum < mod->num_symtab) {
- *value = mod->symtab[symnum].st_value;
- *type = mod->symtab[symnum].st_info;
- strlcpy(name, mod->strtab + mod->symtab[symnum].st_name,
- KSYM_NAME_LEN);
+ kallsyms = rcu_dereference_sched(mod->kallsyms);
+ if (symnum < kallsyms->num_symtab) {
+ *value = kallsyms->symtab[symnum].st_value;
+ *type = kallsyms->symtab[symnum].st_info;
+ strlcpy(name, symname(kallsyms, symnum), KSYM_NAME_LEN);
strlcpy(module_name, mod->name, MODULE_NAME_LEN);
*exported = is_exported(name, *value, mod);
preempt_enable();
return 0;
}
- symnum -= mod->num_symtab;
+ symnum -= kallsyms->num_symtab;
}
preempt_enable();
return -ERANGE;
@@ -3779,11 +3809,12 @@
static unsigned long mod_find_symname(struct module *mod, const char *name)
{
unsigned int i;
+ struct mod_kallsyms *kallsyms = rcu_dereference_sched(mod->kallsyms);
- for (i = 0; i < mod->num_symtab; i++)
- if (strcmp(name, mod->strtab+mod->symtab[i].st_name) == 0 &&
- mod->symtab[i].st_info != 'U')
- return mod->symtab[i].st_value;
+ for (i = 0; i < kallsyms->num_symtab; i++)
+ if (strcmp(name, symname(kallsyms, i)) == 0 &&
+ kallsyms->symtab[i].st_info != 'U')
+ return kallsyms->symtab[i].st_value;
return 0;
}
@@ -3822,11 +3853,14 @@
module_assert_mutex();
list_for_each_entry(mod, &modules, list) {
+ /* We hold module_mutex: no need for rcu_dereference_sched */
+ struct mod_kallsyms *kallsyms = mod->kallsyms;
+
if (mod->state == MODULE_STATE_UNFORMED)
continue;
- for (i = 0; i < mod->num_symtab; i++) {
- ret = fn(data, mod->strtab + mod->symtab[i].st_name,
- mod, mod->symtab[i].st_value);
+ for (i = 0; i < kallsyms->num_symtab; i++) {
+ ret = fn(data, symname(kallsyms, i),
+ mod, kallsyms->symtab[i].st_value);
if (ret != 0)
return ret;
}
diff --git a/kernel/resource.c b/kernel/resource.c
index 09c0597..3669d1b 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1083,9 +1083,10 @@
if (!conflict)
break;
if (conflict != parent) {
- parent = conflict;
- if (!(conflict->flags & IORESOURCE_BUSY))
+ if (!(conflict->flags & IORESOURCE_BUSY)) {
+ parent = conflict;
continue;
+ }
}
if (conflict->flags & flags & IORESOURCE_MUXED) {
add_wait_queue(&muxed_resource_wait, &wait);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index cd64c97..57b939c 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -420,7 +420,7 @@
* entity.
*/
if (dl_time_before(dl_se->deadline, rq_clock(rq))) {
- printk_deferred_once("sched: DL replenish lagged to much\n");
+ printk_deferred_once("sched: DL replenish lagged too much\n");
dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
dl_se->runtime = pi_se->dl_runtime;
}
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index eca592f..57a6eea 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4961,7 +4961,7 @@
mutex_unlock(&ftrace_lock);
}
-static void ftrace_module_enable(struct module *mod)
+void ftrace_module_enable(struct module *mod)
{
struct dyn_ftrace *rec;
struct ftrace_page *pg;
@@ -5038,38 +5038,8 @@
ftrace_process_locs(mod, mod->ftrace_callsites,
mod->ftrace_callsites + mod->num_ftrace_callsites);
}
-
-static int ftrace_module_notify(struct notifier_block *self,
- unsigned long val, void *data)
-{
- struct module *mod = data;
-
- switch (val) {
- case MODULE_STATE_COMING:
- ftrace_module_enable(mod);
- break;
- case MODULE_STATE_GOING:
- ftrace_release_mod(mod);
- break;
- default:
- break;
- }
-
- return 0;
-}
-#else
-static int ftrace_module_notify(struct notifier_block *self,
- unsigned long val, void *data)
-{
- return 0;
-}
#endif /* CONFIG_MODULES */
-struct notifier_block ftrace_module_nb = {
- .notifier_call = ftrace_module_notify,
- .priority = INT_MIN, /* Run after anything that can remove kprobes */
-};
-
void __init ftrace_init(void)
{
extern unsigned long __start_mcount_loc[];
@@ -5098,10 +5068,6 @@
__start_mcount_loc,
__stop_mcount_loc);
- ret = register_module_notifier(&ftrace_module_nb);
- if (ret)
- pr_warning("Failed to register trace ftrace module exit notifier\n");
-
set_ftrace_early_filters();
return;
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index f333e57..ab09829 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -869,7 +869,8 @@
* The ftrace subsystem is for showing formats only.
* They can not be enabled or disabled via the event files.
*/
- if (call->class && call->class->reg)
+ if (call->class && call->class->reg &&
+ !(call->flags & TRACE_EVENT_FL_IGNORE_ENABLE))
return file;
}
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index 202df6c..2a1abba 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -156,7 +156,11 @@
for (; p < top && i < stack_trace_max.nr_entries; p++) {
if (stack_dump_trace[i] == ULONG_MAX)
break;
- if (*p == stack_dump_trace[i]) {
+ /*
+ * The READ_ONCE_NOCHECK is used to let KASAN know that
+ * this is not a stack-out-of-bounds error.
+ */
+ if ((READ_ONCE_NOCHECK(*p)) == stack_dump_trace[i]) {
stack_dump_trace[x] = stack_dump_trace[i++];
this_size = stack_trace_index[x++] =
(top - p) * sizeof(unsigned long);
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 61a0264..7ff5dc7 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -301,7 +301,23 @@
static LIST_HEAD(workqueues); /* PR: list of all workqueues */
static bool workqueue_freezing; /* PL: have wqs started freezing? */
-static cpumask_var_t wq_unbound_cpumask; /* PL: low level cpumask for all unbound wqs */
+/* PL: allowable cpus for unbound wqs and work items */
+static cpumask_var_t wq_unbound_cpumask;
+
+/* CPU where unbound work was last round robin scheduled from this CPU */
+static DEFINE_PER_CPU(int, wq_rr_cpu_last);
+
+/*
+ * Local execution of unbound work items is no longer guaranteed. The
+ * following always forces round-robin CPU selection on unbound work items
+ * to uncover usages which depend on it.
+ */
+#ifdef CONFIG_DEBUG_WQ_FORCE_RR_CPU
+static bool wq_debug_force_rr_cpu = true;
+#else
+static bool wq_debug_force_rr_cpu = false;
+#endif
+module_param_named(debug_force_rr_cpu, wq_debug_force_rr_cpu, bool, 0644);
/* the per-cpu worker pools */
static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS],
@@ -570,6 +586,16 @@
int node)
{
assert_rcu_or_wq_mutex_or_pool_mutex(wq);
+
+ /*
+ * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
+ * delayed item is pending. The plan is to keep CPU -> NODE
+ * mapping valid and stable across CPU on/offlines. Once that
+ * happens, this workaround can be removed.
+ */
+ if (unlikely(node == NUMA_NO_NODE))
+ return wq->dfl_pwq;
+
return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}
@@ -1298,6 +1324,39 @@
return worker && worker->current_pwq->wq == wq;
}
+/*
+ * When queueing an unbound work item to a wq, prefer local CPU if allowed
+ * by wq_unbound_cpumask. Otherwise, round robin among the allowed ones to
+ * avoid perturbing sensitive tasks.
+ */
+static int wq_select_unbound_cpu(int cpu)
+{
+ static bool printed_dbg_warning;
+ int new_cpu;
+
+ if (likely(!wq_debug_force_rr_cpu)) {
+ if (cpumask_test_cpu(cpu, wq_unbound_cpumask))
+ return cpu;
+ } else if (!printed_dbg_warning) {
+ pr_warn("workqueue: round-robin CPU selection forced, expect performance impact\n");
+ printed_dbg_warning = true;
+ }
+
+ if (cpumask_empty(wq_unbound_cpumask))
+ return cpu;
+
+ new_cpu = __this_cpu_read(wq_rr_cpu_last);
+ new_cpu = cpumask_next_and(new_cpu, wq_unbound_cpumask, cpu_online_mask);
+ if (unlikely(new_cpu >= nr_cpu_ids)) {
+ new_cpu = cpumask_first_and(wq_unbound_cpumask, cpu_online_mask);
+ if (unlikely(new_cpu >= nr_cpu_ids))
+ return cpu;
+ }
+ __this_cpu_write(wq_rr_cpu_last, new_cpu);
+
+ return new_cpu;
+}
+
static void __queue_work(int cpu, struct workqueue_struct *wq,
struct work_struct *work)
{
@@ -1323,7 +1382,7 @@
return;
retry:
if (req_cpu == WORK_CPU_UNBOUND)
- cpu = raw_smp_processor_id();
+ cpu = wq_select_unbound_cpu(raw_smp_processor_id());
/* pwq which will be used unless @work is executing elsewhere */
if (!(wq->flags & WQ_UNBOUND))
@@ -1464,13 +1523,13 @@
timer_stats_timer_set_start_info(&dwork->timer);
dwork->wq = wq;
- /* timer isn't guaranteed to run in this cpu, record earlier */
- if (cpu == WORK_CPU_UNBOUND)
- cpu = raw_smp_processor_id();
dwork->cpu = cpu;
timer->expires = jiffies + delay;
- add_timer_on(timer, cpu);
+ if (unlikely(cpu != WORK_CPU_UNBOUND))
+ add_timer_on(timer, cpu);
+ else
+ add_timer(timer);
}
/**
@@ -2355,7 +2414,8 @@
WARN_ONCE(current->flags & PF_MEMALLOC,
"workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%pf",
current->pid, current->comm, target_wq->name, target_func);
- WARN_ONCE(worker && (worker->current_pwq->wq->flags & WQ_MEM_RECLAIM),
+ WARN_ONCE(worker && ((worker->current_pwq->wq->flags &
+ (WQ_MEM_RECLAIM | __WQ_LEGACY)) == WQ_MEM_RECLAIM),
"workqueue: WQ_MEM_RECLAIM %s:%pf is flushing !WQ_MEM_RECLAIM %s:%pf",
worker->current_pwq->wq->name, worker->current_func,
target_wq->name, target_func);
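The user-visible consequence of the round-robin change: queue_work() on an
unbound workqueue no longer implies local execution. Code that genuinely needs
the submitting CPU must say so explicitly, e.g. (my_work being a hypothetical
work item):

    queue_work_on(raw_smp_processor_id(), system_wq, &my_work);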
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index ecb9e75..8bfd1ac 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1400,6 +1400,21 @@
endmenu # "RCU Debugging"
+config DEBUG_WQ_FORCE_RR_CPU
+ bool "Force round-robin CPU selection for unbound work items"
+ depends on DEBUG_KERNEL
+ default n
+ help
+ Workqueue used to implicitly guarantee that work items queued
+ without an explicit CPU specified are put on the local CPU. This
+ guarantee is no longer true and, while the local CPU is still
+ preferred, work items may be put on foreign CPUs. The kernel
+ parameter "workqueue.debug_force_rr_cpu" is added to force
+ round-robin CPU selection to flush out usages which depend on the
+ now broken guarantee. This config option enables the debug
+ feature by default. When enabled, memory and cache locality will
+ be impacted.
+
config DEBUG_BLOCK_EXT_DEVT
bool "Force extended block device numbers and spread them"
depends on DEBUG_KERNEL
diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan
index 49518fb..e07c1ba 100644
--- a/lib/Kconfig.ubsan
+++ b/lib/Kconfig.ubsan
@@ -18,6 +18,8 @@
This option activates instrumentation for the entire kernel.
If you don't enable this option, you have to explicitly specify
UBSAN_SANITIZE := y for the files/directories you want to check for UB.
+ Enabling this option will increase the kernel image size
+ significantly.
config UBSAN_ALIGNMENT
bool "Enable checking of pointers alignment"
@@ -25,5 +27,5 @@
default y if !HAVE_EFFICIENT_UNALIGNED_ACCESS
help
This option enables detection of unaligned memory accesses.
- Enabling this option on architectures that support unalligned
+ Enabling this option on architectures that support unaligned
accesses may produce a lot of false positives.
diff --git a/lib/devres.c b/lib/devres.c
index 8c85672..cb1464c 100644
--- a/lib/devres.c
+++ b/lib/devres.c
@@ -236,7 +236,7 @@
static void pcim_iomap_release(struct device *gendev, void *res)
{
- struct pci_dev *dev = container_of(gendev, struct pci_dev, dev);
+ struct pci_dev *dev = to_pci_dev(gendev);
struct pcim_iomap_devres *this = res;
int i;
diff --git a/lib/klist.c b/lib/klist.c
index d74cf7a..0507fa5 100644
--- a/lib/klist.c
+++ b/lib/klist.c
@@ -282,9 +282,9 @@
struct klist_node *n)
{
i->i_klist = k;
- i->i_cur = n;
- if (n)
- kref_get(&n->n_ref);
+ i->i_cur = NULL;
+ if (n && kref_get_unless_zero(&n->n_ref))
+ i->i_cur = n;
}
EXPORT_SYMBOL_GPL(klist_iter_init_node);
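kref_get_unless_zero() is the standard cure for this class of race: a plain
kref_get() on an object whose count has already dropped to zero would
resurrect a dying object. The lookup-side pattern, with a hypothetical
find_object() helper:

    spin_lock(&lookup_lock);
    obj = find_object(key);
    if (obj && !kref_get_unless_zero(&obj->ref))
            obj = NULL;             /* release already in flight; treat as a miss */
    spin_unlock(&lookup_lock);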
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index bafa993..004fc70 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -598,9 +598,9 @@
*
* Description:
* Stops mapping iterator @miter. @miter should have been started
- * started using sg_miter_start(). A stopped iteration can be
- * resumed by calling sg_miter_next() on it. This is useful when
- * resources (kmap) need to be released during iteration.
+ * using sg_miter_start(). A stopped iteration can be resumed by
+ * calling sg_miter_next() on it. This is useful when resources (kmap)
+ * need to be released during iteration.
*
* Context:
* Preemption disabled if the SG_MITER_ATOMIC is set. Don't care
diff --git a/lib/ucs2_string.c b/lib/ucs2_string.c
index 6f500ef..f0b323a 100644
--- a/lib/ucs2_string.c
+++ b/lib/ucs2_string.c
@@ -49,3 +49,65 @@
}
}
EXPORT_SYMBOL(ucs2_strncmp);
+
+unsigned long
+ucs2_utf8size(const ucs2_char_t *src)
+{
+ unsigned long i;
+ unsigned long j = 0;
+
+ for (i = 0; i < ucs2_strlen(src); i++) {
+ u16 c = src[i];
+
+ if (c >= 0x800)
+ j += 3;
+ else if (c >= 0x80)
+ j += 2;
+ else
+ j += 1;
+ }
+
+ return j;
+}
+EXPORT_SYMBOL(ucs2_utf8size);
+
+/*
+ * copy at most maxlength bytes of whole utf8 characters to dest from the
+ * ucs2 string src.
+ *
+ * The return value is the number of bytes copied, not including the
+ * final NUL character.
+ */
+unsigned long
+ucs2_as_utf8(u8 *dest, const ucs2_char_t *src, unsigned long maxlength)
+{
+ unsigned int i;
+ unsigned long j = 0;
+ unsigned long limit = ucs2_strnlen(src, maxlength);
+
+ for (i = 0; maxlength && i < limit; i++) {
+ u16 c = src[i];
+
+ if (c >= 0x800) {
+ if (maxlength < 3)
+ break;
+ maxlength -= 3;
+ dest[j++] = 0xe0 | (c & 0xf000) >> 12;
+ dest[j++] = 0x80 | (c & 0x0fc0) >> 6;
+ dest[j++] = 0x80 | (c & 0x003f);
+ } else if (c >= 0x80) {
+ if (maxlength < 2)
+ break;
+ maxlength -= 2;
+ dest[j++] = 0xc0 | (c & 0x7c0) >> 6;
+ dest[j++] = 0x80 | (c & 0x03f);
+ } else {
+ maxlength -= 1;
+ dest[j++] = c & 0x7f;
+ }
+ }
+ if (maxlength)
+ dest[j] = '\0';
+ return j;
+}
+EXPORT_SYMBOL(ucs2_as_utf8);
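A hypothetical caller pairs the two new helpers: size the buffer first, then
convert. ucs2_as_utf8() never emits a partial multi-byte sequence and
NUL-terminates when room remains:

    unsigned long len = ucs2_utf8size(name);        /* bytes, excluding NUL */
    u8 *buf = kzalloc(len + 1, GFP_KERNEL);

    if (buf)
            ucs2_as_utf8(buf, name, len + 1);       /* returns bytes written */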
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 48ff9c3..f44e178 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1590,22 +1590,23 @@
return buf;
}
case 'K':
- /*
- * %pK cannot be used in IRQ context because its test
- * for CAP_SYSLOG would be meaningless.
- */
- if (kptr_restrict && (in_irq() || in_serving_softirq() ||
- in_nmi())) {
- if (spec.field_width == -1)
- spec.field_width = default_width;
- return string(buf, end, "pK-error", spec);
- }
-
switch (kptr_restrict) {
case 0:
/* Always print %pK values */
break;
case 1: {
+ const struct cred *cred;
+
+ /*
+ * kptr_restrict==1 cannot be used in IRQ context
+ * because its test for CAP_SYSLOG would be meaningless.
+ */
+ if (in_irq() || in_serving_softirq() || in_nmi()) {
+ if (spec.field_width == -1)
+ spec.field_width = default_width;
+ return string(buf, end, "pK-error", spec);
+ }
+
/*
* Only print the real pointer value if the current
* process has CAP_SYSLOG and is running with the
@@ -1615,8 +1616,7 @@
* leak pointer values if a binary opens a file using
* %pK and then elevates privileges before reading it.
*/
- const struct cred *cred = current_cred();
-
+ cred = current_cred();
if (!has_capability_noaudit(current, CAP_SYSLOG) ||
!uid_eq(cred->euid, cred->uid) ||
!gid_eq(cred->egid, cred->gid))
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 926c76d..c554d17 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -328,7 +328,7 @@
return 0;
out_destroy_stat:
- while (--i)
+ while (i--)
percpu_counter_destroy(&wb->stat[i]);
fprop_local_destroy_percpu(&wb->completions);
out_put_cong:
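The one-character fix is the classic unwind off-by-one: on entry, i indexes
the element that failed to initialize; `while (--i)` never cleans element 0,
whereas `while (i--)` walks i-1 down to 0. In isolation, with hypothetical
init_one()/destroy_one() helpers:

    for (i = 0; i < n; i++)
            if (init_one(&arr[i]))
                    goto undo;
    return 0;
undo:
    while (i--)                     /* not --i, which would skip arr[0] */
            destroy_one(&arr[i]);
    return -ENOMEM;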
diff --git a/mm/filemap.c b/mm/filemap.c
index bc94386..3461d97 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -446,7 +446,8 @@
{
int err = 0;
- if (mapping->nrpages) {
+ if ((!dax_mapping(mapping) && mapping->nrpages) ||
+ (dax_mapping(mapping) && mapping->nrexceptional)) {
err = filemap_fdatawrite(mapping);
/*
* Even if the above returned error, the pages may be
@@ -482,13 +483,8 @@
{
int err = 0;
- if (dax_mapping(mapping) && mapping->nrexceptional) {
- err = dax_writeback_mapping_range(mapping, lstart, lend);
- if (err)
- return err;
- }
-
- if (mapping->nrpages) {
+ if ((!dax_mapping(mapping) && mapping->nrpages) ||
+ (dax_mapping(mapping) && mapping->nrexceptional)) {
err = __filemap_fdatawrite_range(mapping, lstart, lend,
WB_SYNC_ALL);
/* See comment of filemap_write_and_wait() */
@@ -1890,6 +1886,7 @@
* page_cache_read - adds requested page to the page cache if not already there
* @file: file to read
* @offset: page index
+ * @gfp_mask: memory allocation flags
*
* This adds the requested page to the page cache if it isn't already there,
* and schedules an I/O to read in its contents from disk.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 08fc0ba..e10a4fe 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1700,7 +1700,8 @@
pmd = pmdp_huge_get_and_clear(mm, old_addr, old_pmd);
VM_BUG_ON(!pmd_none(*new_pmd));
- if (pmd_move_must_withdraw(new_ptl, old_ptl)) {
+ if (pmd_move_must_withdraw(new_ptl, old_ptl) &&
+ vma_is_anonymous(vma)) {
pgtable_t pgtable;
pgtable = pgtable_trans_huge_withdraw(mm, old_pmd);
pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
@@ -2835,6 +2836,7 @@
pgtable_t pgtable;
pmd_t _pmd;
bool young, write, dirty;
+ unsigned long addr;
int i;
VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
@@ -2860,10 +2862,11 @@
young = pmd_young(*pmd);
dirty = pmd_dirty(*pmd);
+ pmdp_huge_split_prepare(vma, haddr, pmd);
pgtable = pgtable_trans_huge_withdraw(mm, pmd);
pmd_populate(mm, &_pmd, pgtable);
- for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
+ for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
pte_t entry, *pte;
/*
* Note that NUMA hinting access restrictions are not
@@ -2884,9 +2887,9 @@
}
if (dirty)
SetPageDirty(page + i);
- pte = pte_offset_map(&_pmd, haddr);
+ pte = pte_offset_map(&_pmd, addr);
BUG_ON(!pte_none(*pte));
- set_pte_at(mm, haddr, pte, entry);
+ set_pte_at(mm, addr, pte, entry);
atomic_inc(&page[i]._mapcount);
pte_unmap(pte);
}
@@ -2936,7 +2939,7 @@
pmd_populate(mm, pmd, pgtable);
if (freeze) {
- for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
+ for (i = 0; i < HPAGE_PMD_NR; i++) {
page_remove_rmap(page + i, false);
put_page(page + i);
}
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 06ae13e..01f2b48 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2630,8 +2630,10 @@
hugetlb_add_hstate(HUGETLB_PAGE_ORDER);
}
default_hstate_idx = hstate_index(size_to_hstate(default_hstate_size));
- if (default_hstate_max_huge_pages)
- default_hstate.max_huge_pages = default_hstate_max_huge_pages;
+ if (default_hstate_max_huge_pages) {
+ if (!default_hstate.max_huge_pages)
+ default_hstate.max_huge_pages = default_hstate_max_huge_pages;
+ }
hugetlb_init_hstates();
gather_bootmem_prealloc();
diff --git a/mm/memory.c b/mm/memory.c
index 635451a..8132787 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3404,8 +3404,18 @@
if (unlikely(pmd_none(*pmd)) &&
unlikely(__pte_alloc(mm, vma, pmd, address)))
return VM_FAULT_OOM;
- /* if an huge pmd materialized from under us just retry later */
- if (unlikely(pmd_trans_huge(*pmd) || pmd_devmap(*pmd)))
+ /*
+ * If a huge pmd materialized under us, just retry later. Use
+ * pmd_trans_unstable() instead of pmd_trans_huge() to ensure the pmd
+ * didn't become pmd_trans_huge under us and then back to pmd_none, as
+ * a result of MADV_DONTNEED running immediately after a huge pmd fault
+ * in a different thread of this mm, in turn leading to a misleading
+ * pmd_trans_huge() retval. All we have to ensure is that it is a
+ * regular pmd that we can walk with pte_offset_map() and we can do that
+ * through an atomic read in C, which is what pmd_trans_unstable()
+ * provides.
+ */
+ if (unlikely(pmd_trans_unstable(pmd) || pmd_devmap(*pmd)))
return 0;
/*
* A regular pmd is established and it can't morph into a huge pmd
diff --git a/mm/migrate.c b/mm/migrate.c
index b1034f9..3ad0fea 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1582,7 +1582,7 @@
(GFP_HIGHUSER_MOVABLE |
__GFP_THISNODE | __GFP_NOMEMALLOC |
__GFP_NORETRY | __GFP_NOWARN) &
- ~(__GFP_IO | __GFP_FS), 0);
+ ~__GFP_RECLAIM, 0);
return newpage;
}
diff --git a/mm/mmap.c b/mm/mmap.c
index 2f2415a..76d1ec2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2664,12 +2664,29 @@
if (!vma || !(vma->vm_flags & VM_SHARED))
goto out;
- if (start < vma->vm_start || start + size > vma->vm_end)
+ if (start < vma->vm_start)
goto out;
- if (pgoff == linear_page_index(vma, start)) {
- ret = 0;
- goto out;
+ if (start + size > vma->vm_end) {
+ struct vm_area_struct *next;
+
+ for (next = vma->vm_next; next; next = next->vm_next) {
+ /* hole between vmas? */
+ if (next->vm_start != next->vm_prev->vm_end)
+ goto out;
+
+ if (next->vm_file != vma->vm_file)
+ goto out;
+
+ if (next->vm_flags != vma->vm_flags)
+ goto out;
+
+ if (start + size <= next->vm_end)
+ break;
+ }
+
+ if (!next)
+ goto out;
}
prot |= vma->vm_flags & VM_READ ? PROT_READ : 0;
@@ -2679,9 +2696,16 @@
flags &= MAP_NONBLOCK;
flags |= MAP_SHARED | MAP_FIXED | MAP_POPULATE;
if (vma->vm_flags & VM_LOCKED) {
+ struct vm_area_struct *tmp;
flags |= MAP_LOCKED;
+
/* drop PG_Mlocked flag for over-mapped range */
- munlock_vma_pages_range(vma, start, start + size);
+ for (tmp = vma; tmp->vm_start >= start + size;
+ tmp = tmp->vm_next) {
+ munlock_vma_pages_range(tmp,
+ max(tmp->vm_start, start),
+ min(tmp->vm_end, start + size));
+ }
}
file = get_file(vma->vm_file);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 8eb7bb4..f7cb3d4 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -160,9 +160,11 @@
}
if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
- if (next - addr != HPAGE_PMD_SIZE)
+ if (next - addr != HPAGE_PMD_SIZE) {
split_huge_pmd(vma, pmd, addr);
- else {
+ if (pmd_none(*pmd))
+ continue;
+ } else {
int nr_ptes = change_huge_pmd(vma, pmd, addr,
newprot, prot_numa);
diff --git a/mm/mremap.c b/mm/mremap.c
index d77946a..8eeba02 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -210,6 +210,8 @@
}
}
split_huge_pmd(vma, old_pmd, old_addr);
+ if (pmd_none(*old_pmd))
+ continue;
VM_BUG_ON(pmd_trans_huge(*old_pmd));
}
if (pmd_none(*new_pmd) && __pte_alloc(new_vma->vm_mm, new_vma,
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 9d47676..06a005b 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -90,9 +90,9 @@
* ARCHes with special requirements for evicting THP backing TLB entries can
* implement this. Otherwise also, it can help optimize normal TLB flush in
* THP regime. stock flush_tlb_range() typically has optimization to nuke the
- * entire TLB TLB if flush span is greater than a threshhold, which will
+ * entire TLB if flush span is greater than a threshold, which will
* likely be true for a single huge page. Thus a single thp flush will
- * invalidate the entire TLB which is not desitable.
+ * invalidate the entire TLB which is not desirable.
* e.g. see arch/arc: flush_pmd_tlb_range
*/
#define flush_pmd_tlb_range(vma, addr, end) flush_tlb_range(vma, addr, end)
@@ -195,7 +195,9 @@
VM_BUG_ON(address & ~HPAGE_PMD_MASK);
VM_BUG_ON(pmd_trans_huge(*pmdp));
pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
- flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+
+ /* collapse entails shooting down ptes, not the pmd */
+ flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
return pmd;
}
#endif
diff --git a/mm/slab.c b/mm/slab.c
index 6ecc697..621fbcb 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2275,7 +2275,7 @@
err = setup_cpu_cache(cachep, gfp);
if (err) {
- __kmem_cache_shutdown(cachep);
+ __kmem_cache_release(cachep);
return err;
}
@@ -2414,12 +2414,13 @@
int __kmem_cache_shutdown(struct kmem_cache *cachep)
{
+ return __kmem_cache_shrink(cachep, false);
+}
+
+void __kmem_cache_release(struct kmem_cache *cachep)
+{
int i;
struct kmem_cache_node *n;
- int rc = __kmem_cache_shrink(cachep, false);
-
- if (rc)
- return rc;
free_percpu(cachep->cpu_cache);
@@ -2430,7 +2431,6 @@
kfree(n);
cachep->node[i] = NULL;
}
- return 0;
}
/*
diff --git a/mm/slab.h b/mm/slab.h
index 834ad24..2eedace 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -140,6 +140,7 @@
#define CACHE_CREATE_MASK (SLAB_CORE_FLAGS | SLAB_DEBUG_FLAGS | SLAB_CACHE_FLAGS)
int __kmem_cache_shutdown(struct kmem_cache *);
+void __kmem_cache_release(struct kmem_cache *);
int __kmem_cache_shrink(struct kmem_cache *, bool);
void slab_kmem_cache_release(struct kmem_cache *);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index b50aef0..065b7bd 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -693,6 +693,7 @@
void slab_kmem_cache_release(struct kmem_cache *s)
{
+ __kmem_cache_release(s);
destroy_memcg_params(s);
kfree_const(s->name);
kmem_cache_free(kmem_cache, s);
diff --git a/mm/slob.c b/mm/slob.c
index 17e8f8c..5ec1580 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -630,6 +630,10 @@
return 0;
}
+void __kmem_cache_release(struct kmem_cache *c)
+{
+}
+
int __kmem_cache_shrink(struct kmem_cache *d, bool deactivate)
{
return 0;
diff --git a/mm/slub.c b/mm/slub.c
index 2e1355a..d8fbd4a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1592,18 +1592,12 @@
__add_partial(n, page, tail);
}
-static inline void
-__remove_partial(struct kmem_cache_node *n, struct page *page)
-{
- list_del(&page->lru);
- n->nr_partial--;
-}
-
static inline void remove_partial(struct kmem_cache_node *n,
struct page *page)
{
lockdep_assert_held(&n->list_lock);
- __remove_partial(n, page);
+ list_del(&page->lru);
+ n->nr_partial--;
}
/*
@@ -3184,6 +3178,12 @@
}
}
+void __kmem_cache_release(struct kmem_cache *s)
+{
+ free_percpu(s->cpu_slab);
+ free_kmem_cache_nodes(s);
+}
+
static int init_kmem_cache_nodes(struct kmem_cache *s)
{
int node;
@@ -3443,28 +3443,31 @@
/*
* Attempt to free all partial slabs on a node.
- * This is called from kmem_cache_close(). We must be the last thread
- * using the cache and therefore we do not need to lock anymore.
+ * This is called from __kmem_cache_shutdown(). We must take list_lock
+ * because a sysfs file might still access the partial list after shutdown.
*/
static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
{
struct page *page, *h;
+ BUG_ON(irqs_disabled());
+ spin_lock_irq(&n->list_lock);
list_for_each_entry_safe(page, h, &n->partial, lru) {
if (!page->inuse) {
- __remove_partial(n, page);
+ remove_partial(n, page);
discard_slab(s, page);
} else {
list_slab_objects(s, page,
- "Objects remaining in %s on kmem_cache_close()");
+ "Objects remaining in %s on __kmem_cache_shutdown()");
}
}
+ spin_unlock_irq(&n->list_lock);
}
/*
* Release all resources used by a slab cache.
*/
-static inline int kmem_cache_close(struct kmem_cache *s)
+int __kmem_cache_shutdown(struct kmem_cache *s)
{
int node;
struct kmem_cache_node *n;
@@ -3476,16 +3479,9 @@
if (n->nr_partial || slabs_node(s, node))
return 1;
}
- free_percpu(s->cpu_slab);
- free_kmem_cache_nodes(s);
return 0;
}
-int __kmem_cache_shutdown(struct kmem_cache *s)
-{
- return kmem_cache_close(s);
-}
-
/********************************************************************
* Kmalloc subsystem
*******************************************************************/
@@ -3980,7 +3976,7 @@
memcg_propagate_slab_attrs(s);
err = sysfs_slab_add(s);
if (err)
- kmem_cache_close(s);
+ __kmem_cache_release(s);
return err;
}
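
Across slab, slob and slub the destroy path is now split into two phases: __kmem_cache_shutdown(), which may fail (and in slub now takes list_lock, since sysfs readers can still walk the partial lists), and __kmem_cache_release(), which unconditionally frees the per-cpu and per-node metadata once nothing can fail any more. The shape generalizes to any teardown where a refusable check must precede irreversible freeing; a minimal userspace sketch of the two-phase idiom (struct cache and its fields are invented for illustration):

#include <errno.h>
#include <stdlib.h>

struct cache {
	int live_objects;
	void *percpu;	/* stand-ins for cpu_cache / node tables */
	void *nodes;
};

/* Phase 1: may refuse; must leave the cache fully usable on failure. */
static int cache_shutdown(struct cache *c)
{
	return c->live_objects ? -EBUSY : 0;
}

/* Phase 2: unconditional; called after shutdown succeeded, or on an
 * early create() error before the cache ever went live.
 */
static void cache_release(struct cache *c)
{
	free(c->percpu);
	free(c->nodes);
}

int main(void)
{
	struct cache c = { 0, malloc(16), malloc(16) };

	if (!cache_shutdown(&c))
		cache_release(&c);
	return 0;
}
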
diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
index d5871ac..f066781 100644
--- a/net/appletalk/ddp.c
+++ b/net/appletalk/ddp.c
@@ -1625,7 +1625,7 @@
rt = atrtr_find(&at_hint);
}
- err = ENETUNREACH;
+ err = -ENETUNREACH;
if (!rt)
goto out;
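
Several hunks in this series (appletalk here, and the same slip in caif, devinet, addrconf and nft_counter below) fix an errno constant stored or returned with the wrong sign. The in-kernel convention is 0 for success and a negative errno for failure, so a positive ENETUNREACH sails past every "if (err < 0)" check and is reported as success. A small userspace illustration (route_lookup() is a hypothetical helper following the convention):

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Return 0 on success, a negative errno on failure. */
static int route_lookup(int have_route)
{
	if (!have_route)
		return -ENETUNREACH;	/* not ENETUNREACH: callers test for < 0 */
	return 0;
}

int main(void)
{
	int err = route_lookup(0);

	if (err < 0)
		printf("lookup failed: %s\n", strerror(-err));
	return 0;
}
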
diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c
index e6c8382..ccf70be 100644
--- a/net/batman-adv/gateway_client.c
+++ b/net/batman-adv/gateway_client.c
@@ -527,11 +527,12 @@
* gets dereferenced.
*/
spin_lock_bh(&bat_priv->gw.list_lock);
- hlist_del_init_rcu(&gw_node->list);
+ if (!hlist_unhashed(&gw_node->list)) {
+ hlist_del_init_rcu(&gw_node->list);
+ batadv_gw_node_free_ref(gw_node);
+ }
spin_unlock_bh(&bat_priv->gw.list_lock);
- batadv_gw_node_free_ref(gw_node);
-
curr_gw = batadv_gw_get_selected_gw_node(bat_priv);
if (gw_node == curr_gw)
batadv_gw_reselect(bat_priv);
diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
index 01acccc..57f71071 100644
--- a/net/batman-adv/hard-interface.c
+++ b/net/batman-adv/hard-interface.c
@@ -76,6 +76,28 @@
}
/**
+ * batadv_mutual_parents - check if two devices are each other's parent
+ * @dev1: 1st net_device
+ * @dev2: 2nd net_device
+ *
+ * veth devices come in pairs and each is the parent of the other!
+ *
+ * Return: true if the devices are each other's parent, otherwise false
+ */
+static bool batadv_mutual_parents(const struct net_device *dev1,
+ const struct net_device *dev2)
+{
+ int dev1_parent_iflink = dev_get_iflink(dev1);
+ int dev2_parent_iflink = dev_get_iflink(dev2);
+
+ if (!dev1_parent_iflink || !dev2_parent_iflink)
+ return false;
+
+ return (dev1_parent_iflink == dev2->ifindex) &&
+ (dev2_parent_iflink == dev1->ifindex);
+}
+
+/**
* batadv_is_on_batman_iface - check if a device is a batman iface descendant
* @net_dev: the device to check
*
@@ -108,6 +130,9 @@
if (WARN(!parent_dev, "Cannot find parent device"))
return false;
+ if (batadv_mutual_parents(net_dev, parent_dev))
+ return false;
+
ret = batadv_is_on_batman_iface(parent_dev);
return ret;
diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
index cdfc85f..0e80fd1 100644
--- a/net/batman-adv/translation-table.c
+++ b/net/batman-adv/translation-table.c
@@ -303,9 +303,11 @@
if (atomic_add_return(v, &vlan->tt.num_entries) == 0) {
spin_lock_bh(&orig_node->vlan_list_lock);
- hlist_del_init_rcu(&vlan->list);
+ if (!hlist_unhashed(&vlan->list)) {
+ hlist_del_init_rcu(&vlan->list);
+ batadv_orig_node_vlan_free_ref(vlan);
+ }
spin_unlock_bh(&orig_node->vlan_list_lock);
- batadv_orig_node_vlan_free_ref(vlan);
}
batadv_orig_node_vlan_free_ref(vlan);
diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
index 47bcef754..883c821 100644
--- a/net/bluetooth/hci_core.c
+++ b/net/bluetooth/hci_core.c
@@ -4112,8 +4112,10 @@
break;
}
- *req_complete = bt_cb(skb)->hci.req_complete;
- *req_complete_skb = bt_cb(skb)->hci.req_complete_skb;
+ if (bt_cb(skb)->hci.req_flags & HCI_REQ_SKB)
+ *req_complete_skb = bt_cb(skb)->hci.req_complete_skb;
+ else
+ *req_complete = bt_cb(skb)->hci.req_complete;
kfree_skb(skb);
}
spin_unlock_irqrestore(&hdev->cmd_q.lock, flags);
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 30e105f..74c278e 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -425,8 +425,8 @@
mp = br_mdb_ip_get(mdb, group);
if (!mp) {
mp = br_multicast_new_group(br, port, group);
- err = PTR_ERR(mp);
- if (IS_ERR(mp))
+ err = PTR_ERR_OR_ZERO(mp);
+ if (err)
return err;
}
diff --git a/net/caif/cfrfml.c b/net/caif/cfrfml.c
index 61d7617..b82440e 100644
--- a/net/caif/cfrfml.c
+++ b/net/caif/cfrfml.c
@@ -159,7 +159,7 @@
tmppkt = NULL;
/* Verify that length is correct */
- err = EPROTO;
+ err = -EPROTO;
if (rfml->pdu_size != cfpkt_getlen(pkt) - RFM_HEAD_SIZE + 1)
goto out;
}
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 9cfedf5..9382619 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1197,6 +1197,13 @@
return new_piece;
}
+static size_t sizeof_footer(struct ceph_connection *con)
+{
+ return (con->peer_features & CEPH_FEATURE_MSG_AUTH) ?
+ sizeof(struct ceph_msg_footer) :
+ sizeof(struct ceph_msg_footer_old);
+}
+
static void prepare_message_data(struct ceph_msg *msg, u32 data_len)
{
BUG_ON(!msg);
@@ -2335,9 +2342,9 @@
ceph_pr_addr(&con->peer_addr.in_addr),
seq, con->in_seq + 1);
con->in_base_pos = -front_len - middle_len - data_len -
- sizeof(m->footer);
+ sizeof_footer(con);
con->in_tag = CEPH_MSGR_TAG_READY;
- return 0;
+ return 1;
} else if ((s64)seq - (s64)con->in_seq > 1) {
pr_err("read_partial_message bad seq %lld expected %lld\n",
seq, con->in_seq + 1);
@@ -2360,10 +2367,10 @@
/* skip this message */
dout("alloc_msg said skip message\n");
con->in_base_pos = -front_len - middle_len - data_len -
- sizeof(m->footer);
+ sizeof_footer(con);
con->in_tag = CEPH_MSGR_TAG_READY;
con->in_seq++;
- return 0;
+ return 1;
}
BUG_ON(!con->in_msg);
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 3534e12..5bc0537 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -2853,8 +2853,8 @@
mutex_lock(&osdc->request_mutex);
req = __lookup_request(osdc, tid);
if (!req) {
- pr_warn("%s osd%d tid %llu unknown, skipping\n",
- __func__, osd->o_osd, tid);
+ dout("%s osd%d tid %llu unknown, skipping\n", __func__,
+ osd->o_osd, tid);
m = NULL;
*skip = 1;
goto out;
diff --git a/net/core/dev.c b/net/core/dev.c
index 8cba3d8..0ef061b 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5379,12 +5379,12 @@
{
struct netdev_adjacent *lower;
- lower = list_entry((*iter)->next, struct netdev_adjacent, list);
+ lower = list_entry(*iter, struct netdev_adjacent, list);
if (&lower->list == &dev->adj_list.lower)
return NULL;
- *iter = &lower->list;
+ *iter = lower->list.next;
return lower->dev;
}
@@ -7422,8 +7422,10 @@
dev->priv_flags = IFF_XMIT_DST_RELEASE | IFF_XMIT_DST_RELEASE_PERM;
setup(dev);
- if (!dev->tx_queue_len)
+ if (!dev->tx_queue_len) {
dev->priv_flags |= IFF_NO_QUEUE;
+ dev->tx_queue_len = 1;
+ }
dev->num_tx_queues = txqs;
dev->real_num_tx_queues = txqs;
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index d79699c..12e7003 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -208,7 +208,6 @@
case htons(ETH_P_IPV6): {
const struct ipv6hdr *iph;
struct ipv6hdr _iph;
- __be32 flow_label;
ipv6:
iph = __skb_header_pointer(skb, nhoff, sizeof(_iph), data, hlen, &_iph);
@@ -230,8 +229,12 @@
key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
}
- flow_label = ip6_flowlabel(iph);
- if (flow_label) {
+ if ((dissector_uses_key(flow_dissector,
+ FLOW_DISSECTOR_KEY_FLOW_LABEL) ||
+ (flags & FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL)) &&
+ ip6_flowlabel(iph)) {
+ __be32 flow_label = ip6_flowlabel(iph);
+
if (dissector_uses_key(flow_dissector,
FLOW_DISSECTOR_KEY_FLOW_LABEL)) {
key_tags = skb_flow_dissector_target(flow_dissector,
@@ -396,6 +399,13 @@
goto out_bad;
proto = eth->h_proto;
nhoff += sizeof(*eth);
+
+ /* Cap headers that we access via pointers at the
+ * end of the Ethernet header as our maximum alignment
+ * at that point is only 2 bytes.
+ */
+ if (NET_IP_ALIGN)
+ hlen = nhoff;
}
key_control->flags |= FLOW_DIS_ENCAPSULATION;
diff --git a/net/core/scm.c b/net/core/scm.c
index 14596fb..2696aef 100644
--- a/net/core/scm.c
+++ b/net/core/scm.c
@@ -87,6 +87,7 @@
*fplp = fpl;
fpl->count = 0;
fpl->max = SCM_MAX_FD;
+ fpl->user = NULL;
}
fpp = &fpl->fp[fpl->count];
@@ -107,6 +108,10 @@
*fpp++ = file;
fpl->count++;
}
+
+ if (!fpl->user)
+ fpl->user = get_uid(current_user());
+
return num;
}
@@ -119,6 +124,7 @@
scm->fp = NULL;
for (i=fpl->count-1; i>=0; i--)
fput(fpl->fp[i]);
+ free_uid(fpl->user);
kfree(fpl);
}
}
@@ -336,6 +342,7 @@
for (i = 0; i < fpl->count; i++)
get_file(fpl->fp[i]);
new_fpl->max = new_fpl->count;
+ new_fpl->user = get_uid(fpl->user);
}
return new_fpl;
}
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b2df375..5bf88f5 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -79,6 +79,8 @@
struct kmem_cache *skbuff_head_cache __read_mostly;
static struct kmem_cache *skbuff_fclone_cache __read_mostly;
+int sysctl_max_skb_frags __read_mostly = MAX_SKB_FRAGS;
+EXPORT_SYMBOL(sysctl_max_skb_frags);
/**
* skb_panic - private function for out-of-line support
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index 95b6139..a6beb7b 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -26,6 +26,7 @@
static int one = 1;
static int min_sndbuf = SOCK_MIN_SNDBUF;
static int min_rcvbuf = SOCK_MIN_RCVBUF;
+static int max_skb_frags = MAX_SKB_FRAGS;
static int net_msg_warn; /* Unused, but still a sysctl */
@@ -392,6 +393,15 @@
.mode = 0644,
.proc_handler = proc_dointvec
},
+ {
+ .procname = "max_skb_frags",
+ .data = &sysctl_max_skb_frags,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &one,
+ .extra2 = &max_skb_frags,
+ },
{ }
};
diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
index 5684e14..902d606 100644
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -824,26 +824,26 @@
if (sk->sk_state == DCCP_NEW_SYN_RECV) {
struct request_sock *req = inet_reqsk(sk);
- struct sock *nsk = NULL;
+ struct sock *nsk;
sk = req->rsk_listener;
- if (likely(sk->sk_state == DCCP_LISTEN)) {
- nsk = dccp_check_req(sk, skb, req);
- } else {
+ if (unlikely(sk->sk_state != DCCP_LISTEN)) {
inet_csk_reqsk_queue_drop_and_put(sk, req);
goto lookup;
}
+ sock_hold(sk);
+ nsk = dccp_check_req(sk, skb, req);
if (!nsk) {
reqsk_put(req);
- goto discard_it;
+ goto discard_and_relse;
}
if (nsk == sk) {
- sock_hold(sk);
reqsk_put(req);
} else if (dccp_child_process(sk, nsk, skb)) {
dccp_v4_ctl_send_reset(sk, skb);
- goto discard_it;
+ goto discard_and_relse;
} else {
+ sock_put(sk);
return 0;
}
}
diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index 9c6d050..b8608b7 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -691,26 +691,26 @@
if (sk->sk_state == DCCP_NEW_SYN_RECV) {
struct request_sock *req = inet_reqsk(sk);
- struct sock *nsk = NULL;
+ struct sock *nsk;
sk = req->rsk_listener;
- if (likely(sk->sk_state == DCCP_LISTEN)) {
- nsk = dccp_check_req(sk, skb, req);
- } else {
+ if (unlikely(sk->sk_state != DCCP_LISTEN)) {
inet_csk_reqsk_queue_drop_and_put(sk, req);
goto lookup;
}
+ sock_hold(sk);
+ nsk = dccp_check_req(sk, skb, req);
if (!nsk) {
reqsk_put(req);
- goto discard_it;
+ goto discard_and_relse;
}
if (nsk == sk) {
- sock_hold(sk);
reqsk_put(req);
} else if (dccp_child_process(sk, nsk, skb)) {
dccp_v6_ctl_send_reset(sk, skb);
- goto discard_it;
+ goto discard_and_relse;
} else {
+ sock_put(sk);
return 0;
}
}
diff --git a/net/dsa/slave.c b/net/dsa/slave.c
index 40b9ca7..ab24521 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/slave.c
@@ -1194,7 +1194,6 @@
if (ret) {
netdev_err(master, "error %d registering interface %s\n",
ret, slave_dev->name);
- phy_disconnect(p->phy);
ds->ports[port] = NULL;
free_netdev(slave_dev);
return ret;
@@ -1205,6 +1204,7 @@
ret = dsa_slave_phy_setup(p, slave_dev);
if (ret) {
netdev_err(master, "error %d setting up slave phy\n", ret);
+ unregister_netdev(slave_dev);
free_netdev(slave_dev);
return ret;
}
diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
index cebd9d3..f6303b1 100644
--- a/net/ipv4/devinet.c
+++ b/net/ipv4/devinet.c
@@ -1847,7 +1847,7 @@
if (err < 0)
goto errout;
- err = EINVAL;
+ err = -EINVAL;
if (!tb[NETCONFA_IFINDEX])
goto errout;
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 46b9c88..6414891 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -789,14 +789,16 @@
reqsk_put(req);
}
-void inet_csk_reqsk_queue_add(struct sock *sk, struct request_sock *req,
- struct sock *child)
+struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
+ struct request_sock *req,
+ struct sock *child)
{
struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;
spin_lock(&queue->rskq_lock);
if (unlikely(sk->sk_state != TCP_LISTEN)) {
inet_child_forget(sk, req, child);
+ child = NULL;
} else {
req->sk = child;
req->dl_next = NULL;
@@ -808,6 +810,7 @@
sk_acceptq_added(sk);
}
spin_unlock(&queue->rskq_lock);
+ return child;
}
EXPORT_SYMBOL(inet_csk_reqsk_queue_add);
@@ -817,11 +820,8 @@
if (own_req) {
inet_csk_reqsk_queue_drop(sk, req);
reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req);
- inet_csk_reqsk_queue_add(sk, req, child);
- /* Warning: caller must not call reqsk_put(req);
- * child stole last reference on it.
- */
- return child;
+ if (inet_csk_reqsk_queue_add(sk, req, child))
+ return child;
}
/* Too bad, another child took ownership of the request, undo. */
bh_unlock_sock(child);
diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
index 7c51c4e..41ba68d 100644
--- a/net/ipv4/ip_gre.c
+++ b/net/ipv4/ip_gre.c
@@ -1054,8 +1054,9 @@
static void ipgre_tap_setup(struct net_device *dev)
{
ether_setup(dev);
- dev->netdev_ops = &gre_tap_netdev_ops;
- dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ dev->netdev_ops = &gre_tap_netdev_ops;
+ dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
ip_tunnel_setup(dev, gre_tap_net_id);
}
@@ -1240,6 +1241,14 @@
err = ipgre_newlink(net, dev, tb, NULL);
if (err < 0)
goto out;
+
+ /* openvswitch users expect packet sizes to be unrestricted,
+ * so set the largest MTU we can.
+ */
+ err = __ip_tunnel_change_mtu(dev, IP_MAX_MTU, false);
+ if (err)
+ goto out;
+
return dev;
out:
free_netdev(dev);
diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
index 5f73a7c..a501242 100644
--- a/net/ipv4/ip_sockglue.c
+++ b/net/ipv4/ip_sockglue.c
@@ -249,6 +249,8 @@
switch (cmsg->cmsg_type) {
case IP_RETOPTS:
err = cmsg->cmsg_len - CMSG_ALIGN(sizeof(struct cmsghdr));
+
+ /* Our caller is responsible for freeing ipc->opt */
err = ip_options_get(net, &ipc->opt, CMSG_DATA(cmsg),
err < 40 ? err : 40);
if (err)
diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
index c7bd72e..89e8861 100644
--- a/net/ipv4/ip_tunnel.c
+++ b/net/ipv4/ip_tunnel.c
@@ -943,17 +943,31 @@
}
EXPORT_SYMBOL_GPL(ip_tunnel_ioctl);
-int ip_tunnel_change_mtu(struct net_device *dev, int new_mtu)
+int __ip_tunnel_change_mtu(struct net_device *dev, int new_mtu, bool strict)
{
struct ip_tunnel *tunnel = netdev_priv(dev);
int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ int max_mtu = 0xFFF8 - dev->hard_header_len - t_hlen;
- if (new_mtu < 68 ||
- new_mtu > 0xFFF8 - dev->hard_header_len - t_hlen)
+ if (new_mtu < 68)
return -EINVAL;
+
+ if (new_mtu > max_mtu) {
+ if (strict)
+ return -EINVAL;
+
+ new_mtu = max_mtu;
+ }
+
dev->mtu = new_mtu;
return 0;
}
+EXPORT_SYMBOL_GPL(__ip_tunnel_change_mtu);
+
+int ip_tunnel_change_mtu(struct net_device *dev, int new_mtu)
+{
+ return __ip_tunnel_change_mtu(dev, new_mtu, true);
+}
EXPORT_SYMBOL_GPL(ip_tunnel_change_mtu);
static void ip_tunnel_dev_free(struct net_device *dev)
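
The __ip_tunnel_change_mtu() split introduces a useful API shape: one helper that either rejects an out-of-range value (strict, for explicit userspace requests) or clamps it (for internal callers such as the openvswitch path above, which just wants the largest MTU available). A rough userspace sketch of the same shape, with invented limits:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define MTU_MIN 68
#define MTU_MAX 65000	/* stand-in for 0xFFF8 minus header overhead */

static int change_mtu(int *mtu, int new_mtu, bool strict)
{
	if (new_mtu < MTU_MIN)
		return -EINVAL;

	if (new_mtu > MTU_MAX) {
		if (strict)
			return -EINVAL;	/* explicit request: refuse */
		new_mtu = MTU_MAX;	/* internal request: clamp */
	}

	*mtu = new_mtu;
	return 0;
}

int main(void)
{
	int mtu = 1500;

	change_mtu(&mtu, 1 << 20, false);	/* silently clamped */
	printf("mtu = %d\n", mtu);
	return 0;
}
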
diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
index c117b21..d3a2716 100644
--- a/net/ipv4/ping.c
+++ b/net/ipv4/ping.c
@@ -746,8 +746,10 @@
if (msg->msg_controllen) {
err = ip_cmsg_send(sock_net(sk), msg, &ipc, false);
- if (err)
+ if (unlikely(err)) {
+ kfree(ipc.opt);
return err;
+ }
if (ipc.opt)
free = 1;
}
diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
index bc35f18..7113bae 100644
--- a/net/ipv4/raw.c
+++ b/net/ipv4/raw.c
@@ -547,8 +547,10 @@
if (msg->msg_controllen) {
err = ip_cmsg_send(net, msg, &ipc, false);
- if (err)
+ if (unlikely(err)) {
+ kfree(ipc.opt);
goto out;
+ }
if (ipc.opt)
free = 1;
}
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 85f184e..02c6229 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -129,6 +129,7 @@
static int ip_rt_min_pmtu __read_mostly = 512 + 20 + 20;
static int ip_rt_min_advmss __read_mostly = 256;
+static int ip_rt_gc_timeout __read_mostly = RT_GC_TIMEOUT;
/*
* Interface to generic destination cache.
*/
@@ -755,7 +756,7 @@
struct fib_nh *nh = &FIB_RES_NH(res);
update_or_create_fnhe(nh, fl4->daddr, new_gw,
- 0, 0);
+ 0, jiffies + ip_rt_gc_timeout);
}
if (kill_route)
rt->dst.obsolete = DST_OBSOLETE_KILL;
@@ -1556,6 +1557,36 @@
#endif
}
+static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
+{
+ struct fnhe_hash_bucket *hash;
+ struct fib_nh_exception *fnhe, __rcu **fnhe_p;
+ u32 hval = fnhe_hashfun(daddr);
+
+ spin_lock_bh(&fnhe_lock);
+
+ hash = rcu_dereference_protected(nh->nh_exceptions,
+ lockdep_is_held(&fnhe_lock));
+ hash += hval;
+
+ fnhe_p = &hash->chain;
+ fnhe = rcu_dereference_protected(*fnhe_p, lockdep_is_held(&fnhe_lock));
+ while (fnhe) {
+ if (fnhe->fnhe_daddr == daddr) {
+ rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
+ fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
+ fnhe_flush_routes(fnhe);
+ kfree_rcu(fnhe, rcu);
+ break;
+ }
+ fnhe_p = &fnhe->fnhe_next;
+ fnhe = rcu_dereference_protected(fnhe->fnhe_next,
+ lockdep_is_held(&fnhe_lock));
+ }
+
+ spin_unlock_bh(&fnhe_lock);
+}
+
/* called in rcu_read_lock() section */
static int __mkroute_input(struct sk_buff *skb,
const struct fib_result *res,
@@ -1609,11 +1640,20 @@
fnhe = find_exception(&FIB_RES_NH(*res), daddr);
if (do_cache) {
- if (fnhe)
+ if (fnhe) {
rth = rcu_dereference(fnhe->fnhe_rth_input);
- else
- rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
+ if (rth && rth->dst.expires &&
+ time_after(jiffies, rth->dst.expires)) {
+ ip_del_fnhe(&FIB_RES_NH(*res), daddr);
+ fnhe = NULL;
+ } else {
+ goto rt_cache;
+ }
+ }
+ rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input);
+
+rt_cache:
if (rt_cache_valid(rth)) {
skb_dst_set_noref(skb, &rth->dst);
goto out;
@@ -2014,19 +2054,29 @@
struct fib_nh *nh = &FIB_RES_NH(*res);
fnhe = find_exception(nh, fl4->daddr);
- if (fnhe)
+ if (fnhe) {
prth = &fnhe->fnhe_rth_output;
- else {
- if (unlikely(fl4->flowi4_flags &
- FLOWI_FLAG_KNOWN_NH &&
- !(nh->nh_gw &&
- nh->nh_scope == RT_SCOPE_LINK))) {
- do_cache = false;
- goto add;
+ rth = rcu_dereference(*prth);
+ if (rth && rth->dst.expires &&
+ time_after(jiffies, rth->dst.expires)) {
+ ip_del_fnhe(nh, fl4->daddr);
+ fnhe = NULL;
+ } else {
+ goto rt_cache;
}
- prth = raw_cpu_ptr(nh->nh_pcpu_rth_output);
}
+
+ if (unlikely(fl4->flowi4_flags &
+ FLOWI_FLAG_KNOWN_NH &&
+ !(nh->nh_gw &&
+ nh->nh_scope == RT_SCOPE_LINK))) {
+ do_cache = false;
+ goto add;
+ }
+ prth = raw_cpu_ptr(nh->nh_pcpu_rth_output);
rth = rcu_dereference(*prth);
+
+rt_cache:
if (rt_cache_valid(rth)) {
dst_hold(&rth->dst);
return rth;
@@ -2569,7 +2619,6 @@
}
#ifdef CONFIG_SYSCTL
-static int ip_rt_gc_timeout __read_mostly = RT_GC_TIMEOUT;
static int ip_rt_gc_interval __read_mostly = 60 * HZ;
static int ip_rt_gc_min_interval __read_mostly = HZ / 2;
static int ip_rt_gc_elasticity __read_mostly = 8;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 19746b3..483ffdf 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -940,7 +940,7 @@
i = skb_shinfo(skb)->nr_frags;
can_coalesce = skb_can_coalesce(skb, i, page, offset);
- if (!can_coalesce && i >= MAX_SKB_FRAGS) {
+ if (!can_coalesce && i >= sysctl_max_skb_frags) {
tcp_mark_push(tp, skb);
goto new_segment;
}
@@ -1213,7 +1213,7 @@
if (!skb_can_coalesce(skb, i, pfrag->page,
pfrag->offset)) {
- if (i == MAX_SKB_FRAGS || !sg) {
+ if (i == sysctl_max_skb_frags || !sg) {
tcp_mark_push(tp, skb);
goto new_segment;
}
@@ -2950,7 +2950,7 @@
struct crypto_hash *hash;
hash = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC);
- if (IS_ERR_OR_NULL(hash))
+ if (IS_ERR(hash))
return;
per_cpu(tcp_md5sig_pool, cpu).md5_desc.tfm = hash;
}
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 1c2a734..3b2c8e9 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2896,7 +2896,10 @@
{
const u32 now = tcp_time_stamp, wlen = sysctl_tcp_min_rtt_wlen * HZ;
struct rtt_meas *m = tcp_sk(sk)->rtt_min;
- struct rtt_meas rttm = { .rtt = (rtt_us ? : 1), .ts = now };
+ struct rtt_meas rttm = {
+ .rtt = likely(rtt_us) ? rtt_us : jiffies_to_usecs(1),
+ .ts = now,
+ };
u32 elapsed;
/* Check if the new measurement updates the 1st, 2nd, or 3rd choices */
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index a4d5237..487ac67 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -311,7 +311,7 @@
/* handle ICMP messages on TCP_NEW_SYN_RECV request sockets */
-void tcp_req_err(struct sock *sk, u32 seq)
+void tcp_req_err(struct sock *sk, u32 seq, bool abort)
{
struct request_sock *req = inet_reqsk(sk);
struct net *net = sock_net(sk);
@@ -323,7 +323,7 @@
if (seq != tcp_rsk(req)->snt_isn) {
NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
- } else {
+ } else if (abort) {
/*
* Still in SYN_RECV, just remove it silently.
* There is no good way to pass the error to the newly
@@ -383,7 +383,12 @@
}
seq = ntohl(th->seq);
if (sk->sk_state == TCP_NEW_SYN_RECV)
- return tcp_req_err(sk, seq);
+ return tcp_req_err(sk, seq,
+ type == ICMP_PARAMETERPROB ||
+ type == ICMP_TIME_EXCEEDED ||
+ (type == ICMP_DEST_UNREACH &&
+ (code == ICMP_NET_UNREACH ||
+ code == ICMP_HOST_UNREACH)));
bh_lock_sock(sk);
/* If too many ICMPs get dropped on busy
@@ -1592,28 +1597,30 @@
if (sk->sk_state == TCP_NEW_SYN_RECV) {
struct request_sock *req = inet_reqsk(sk);
- struct sock *nsk = NULL;
+ struct sock *nsk;
sk = req->rsk_listener;
- if (tcp_v4_inbound_md5_hash(sk, skb))
- goto discard_and_relse;
- if (likely(sk->sk_state == TCP_LISTEN)) {
- nsk = tcp_check_req(sk, skb, req, false);
- } else {
+ if (unlikely(tcp_v4_inbound_md5_hash(sk, skb))) {
+ reqsk_put(req);
+ goto discard_it;
+ }
+ if (unlikely(sk->sk_state != TCP_LISTEN)) {
inet_csk_reqsk_queue_drop_and_put(sk, req);
goto lookup;
}
+ sock_hold(sk);
+ nsk = tcp_check_req(sk, skb, req, false);
if (!nsk) {
reqsk_put(req);
- goto discard_it;
+ goto discard_and_relse;
}
if (nsk == sk) {
- sock_hold(sk);
reqsk_put(req);
} else if (tcp_child_process(sk, nsk, skb)) {
tcp_v4_send_reset(nsk, skb);
- goto discard_it;
+ goto discard_and_relse;
} else {
+ sock_put(sk);
return 0;
}
}
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index be0b218..95d2f19 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1048,8 +1048,10 @@
if (msg->msg_controllen) {
err = ip_cmsg_send(sock_net(sk), msg, &ipc,
sk->sk_family == AF_INET6);
- if (err)
+ if (unlikely(err)) {
+ kfree(ipc.opt);
return err;
+ }
if (ipc.opt)
free = 1;
connected = 0;
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 38eedde..bdd7eac 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -583,7 +583,7 @@
if (err < 0)
goto errout;
- err = EINVAL;
+ err = -EINVAL;
if (!tb[NETCONFA_IFINDEX])
goto errout;
@@ -3538,6 +3538,7 @@
{
struct inet6_dev *idev = ifp->idev;
struct net_device *dev = idev->dev;
+ bool notify = false;
addrconf_join_solict(dev, &ifp->addr);
@@ -3583,7 +3584,7 @@
/* Because optimistic nodes can use this address,
* notify listeners. If DAD fails, RTM_DELADDR is sent.
*/
- ipv6_ifa_notify(RTM_NEWADDR, ifp);
+ notify = true;
}
}
@@ -3591,6 +3592,8 @@
out:
spin_unlock(&ifp->lock);
read_unlock_bh(&idev->lock);
+ if (notify)
+ ipv6_ifa_notify(RTM_NEWADDR, ifp);
}
static void addrconf_dad_start(struct inet6_ifaddr *ifp)
diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
index 1f9ebe3..dc2db4f 100644
--- a/net/ipv6/ip6_flowlabel.c
+++ b/net/ipv6/ip6_flowlabel.c
@@ -540,12 +540,13 @@
}
spin_lock_bh(&ip6_sk_fl_lock);
for (sflp = &np->ipv6_fl_list;
- (sfl = rcu_dereference(*sflp)) != NULL;
+ (sfl = rcu_dereference_protected(*sflp,
+ lockdep_is_held(&ip6_sk_fl_lock))) != NULL;
sflp = &sfl->next) {
if (sfl->fl->label == freq.flr_label) {
if (freq.flr_label == (np->flow_label&IPV6_FLOWLABEL_MASK))
np->flow_label &= ~IPV6_FLOWLABEL_MASK;
- *sflp = rcu_dereference(sfl->next);
+ *sflp = sfl->next;
spin_unlock_bh(&ip6_sk_fl_lock);
fl_release(sfl->fl);
kfree_rcu(sfl, rcu);
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index f37f18b..a69aad1 100644
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -1512,6 +1512,7 @@
dev->destructor = ip6gre_dev_free;
dev->features |= NETIF_F_NETNS_LOCAL;
+ dev->priv_flags &= ~IFF_TX_SKB_SHARING;
}
static int ip6gre_newlink(struct net *src_net, struct net_device *dev,
diff --git a/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c b/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
index 31ba7ca..051b6a6 100644
--- a/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
+++ b/net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
@@ -21,6 +21,10 @@
#include <net/ipv6.h>
#include <net/netfilter/ipv6/nf_nat_masquerade.h>
+#define MAX_WORK_COUNT 16
+
+static atomic_t v6_worker_count;
+
unsigned int
nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range *range,
const struct net_device *out)
@@ -78,14 +82,78 @@
.notifier_call = masq_device_event,
};
+struct masq_dev_work {
+ struct work_struct work;
+ struct net *net;
+ int ifindex;
+};
+
+static void iterate_cleanup_work(struct work_struct *work)
+{
+ struct masq_dev_work *w;
+ long index;
+
+ w = container_of(work, struct masq_dev_work, work);
+
+ index = w->ifindex;
+ nf_ct_iterate_cleanup(w->net, device_cmp, (void *)index, 0, 0);
+
+ put_net(w->net);
+ kfree(w);
+ atomic_dec(&v6_worker_count);
+ module_put(THIS_MODULE);
+}
+
+/* ipv6 inet notifier is an atomic notifier, i.e. we cannot
+ * schedule.
+ *
+ * Unfortunately, nf_ct_iterate_cleanup can run for a long
+ * time if there are lots of conntracks and the system
+ * handles high softirq load, so it frequently calls cond_resched
+ * while iterating the conntrack table.
+ *
+ * So we defer nf_ct_iterate_cleanup walk to the system workqueue.
+ *
+ * As we can have 'a lot' of inet_events (depending on the number
+ * of ipv6 addresses being deleted), we also need to add an upper
+ * limit to the number of queued work items.
+ */
static int masq_inet_event(struct notifier_block *this,
unsigned long event, void *ptr)
{
struct inet6_ifaddr *ifa = ptr;
- struct netdev_notifier_info info;
+ const struct net_device *dev;
+ struct masq_dev_work *w;
+ struct net *net;
- netdev_notifier_info_init(&info, ifa->idev->dev);
- return masq_device_event(this, event, &info);
+ if (event != NETDEV_DOWN ||
+ atomic_read(&v6_worker_count) >= MAX_WORK_COUNT)
+ return NOTIFY_DONE;
+
+ dev = ifa->idev->dev;
+ net = maybe_get_net(dev_net(dev));
+ if (!net)
+ return NOTIFY_DONE;
+
+ if (!try_module_get(THIS_MODULE))
+ goto err_module;
+
+ w = kmalloc(sizeof(*w), GFP_ATOMIC);
+ if (w) {
+ atomic_inc(&v6_worker_count);
+
+ INIT_WORK(&w->work, iterate_cleanup_work);
+ w->ifindex = dev->ifindex;
+ w->net = net;
+ schedule_work(&w->work);
+
+ return NOTIFY_DONE;
+ }
+
+ module_put(THIS_MODULE);
+ err_module:
+ put_net(net);
+ return NOTIFY_DONE;
}
static struct notifier_block masq_inet_notifier = {
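
The masquerade change is a textbook case of deferring heavy work out of an atomic notifier: allocate a small work item with GFP_ATOMIC, capture the parameters plus the references (net namespace, module) that must stay alive, and let the system workqueue run the slow part where sleeping is allowed. A stripped-down, kernel-style sketch of the pattern (the refcounting and the worker-count cap from the real patch are omitted):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct deferred_work {
	struct work_struct work;
	int ifindex;	/* parameters captured for the worker */
};

static void do_slow_cleanup(struct work_struct *work)
{
	struct deferred_work *w = container_of(work, struct deferred_work, work);

	/* ... long-running walk; free to cond_resched() here ... */
	kfree(w);
}

/* Called from atomic context: must not sleep, so only queue the work. */
static int defer_cleanup(int ifindex)
{
	struct deferred_work *w = kmalloc(sizeof(*w), GFP_ATOMIC);

	if (!w)
		return -ENOMEM;

	INIT_WORK(&w->work, do_slow_cleanup);
	w->ifindex = ifindex;
	schedule_work(&w->work);
	return 0;
}
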
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 006396e..5c8c842 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -327,6 +327,7 @@
struct tcp_sock *tp;
__u32 seq, snd_una;
struct sock *sk;
+ bool fatal;
int err;
sk = __inet6_lookup_established(net, &tcp_hashinfo,
@@ -345,8 +346,9 @@
return;
}
seq = ntohl(th->seq);
+ fatal = icmpv6_err_convert(type, code, &err);
if (sk->sk_state == TCP_NEW_SYN_RECV)
- return tcp_req_err(sk, seq);
+ return tcp_req_err(sk, seq, fatal);
bh_lock_sock(sk);
if (sock_owned_by_user(sk) && type != ICMPV6_PKT_TOOBIG)
@@ -400,7 +402,6 @@
goto out;
}
- icmpv6_err_convert(type, code, &err);
/* Might be for an request_sock */
switch (sk->sk_state) {
@@ -1386,7 +1387,7 @@
if (sk->sk_state == TCP_NEW_SYN_RECV) {
struct request_sock *req = inet_reqsk(sk);
- struct sock *nsk = NULL;
+ struct sock *nsk;
sk = req->rsk_listener;
tcp_v6_fill_cb(skb, hdr, th);
@@ -1394,24 +1395,24 @@
reqsk_put(req);
goto discard_it;
}
- if (likely(sk->sk_state == TCP_LISTEN)) {
- nsk = tcp_check_req(sk, skb, req, false);
- } else {
+ if (unlikely(sk->sk_state != TCP_LISTEN)) {
inet_csk_reqsk_queue_drop_and_put(sk, req);
goto lookup;
}
+ sock_hold(sk);
+ nsk = tcp_check_req(sk, skb, req, false);
if (!nsk) {
reqsk_put(req);
- goto discard_it;
+ goto discard_and_relse;
}
if (nsk == sk) {
- sock_hold(sk);
reqsk_put(req);
tcp_v6_restore_cb(skb);
} else if (tcp_child_process(sk, nsk, skb)) {
tcp_v6_send_reset(nsk, skb);
- goto discard_it;
+ goto discard_and_relse;
} else {
+ sock_put(sk);
return 0;
}
}
diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
index f93c5be..2caaa84 100644
--- a/net/l2tp/l2tp_netlink.c
+++ b/net/l2tp/l2tp_netlink.c
@@ -124,8 +124,13 @@
ret = l2tp_nl_tunnel_send(msg, info->snd_portid, info->snd_seq,
NLM_F_ACK, tunnel, cmd);
- if (ret >= 0)
- return genlmsg_multicast_allns(family, msg, 0, 0, GFP_ATOMIC);
+ if (ret >= 0) {
+ ret = genlmsg_multicast_allns(family, msg, 0, 0, GFP_ATOMIC);
+ /* We don't care if no one is listening */
+ if (ret == -ESRCH)
+ ret = 0;
+ return ret;
+ }
nlmsg_free(msg);
@@ -147,8 +152,13 @@
ret = l2tp_nl_session_send(msg, info->snd_portid, info->snd_seq,
NLM_F_ACK, session, cmd);
- if (ret >= 0)
- return genlmsg_multicast_allns(family, msg, 0, 0, GFP_ATOMIC);
+ if (ret >= 0) {
+ ret = genlmsg_multicast_allns(family, msg, 0, 0, GFP_ATOMIC);
+ /* We don't care if no one is listening */
+ if (ret == -ESRCH)
+ ret = 0;
+ return ret;
+ }
nlmsg_free(msg);
diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
index 8c067e6..95e757c 100644
--- a/net/netfilter/Kconfig
+++ b/net/netfilter/Kconfig
@@ -891,7 +891,7 @@
depends on IPV6 || IPV6=n
depends on !NF_CONNTRACK || NF_CONNTRACK
select NF_DUP_IPV4
- select NF_DUP_IPV6 if IP6_NF_IPTABLES != n
+ select NF_DUP_IPV6 if IPV6
---help---
This option adds a "TEE" target with which a packet can be cloned and
this clone be rerouted to another nexthop.
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 58882de..f60b4fd 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -1412,6 +1412,7 @@
}
spin_unlock(lockp);
local_bh_enable();
+ cond_resched();
}
for_each_possible_cpu(cpu) {
@@ -1424,6 +1425,7 @@
set_bit(IPS_DYING_BIT, &ct->status);
}
spin_unlock_bh(&pcpu->lock);
+ cond_resched();
}
return NULL;
found:
@@ -1440,6 +1442,8 @@
struct nf_conn *ct;
unsigned int bucket = 0;
+ might_sleep();
+
while ((ct = get_next_corpse(net, iter, data, &bucket)) != NULL) {
/* Time to push up daises... */
if (del_timer(&ct->timeout))
@@ -1448,6 +1452,7 @@
/* ... else the timer will get him soon. */
nf_ct_put(ct);
+ cond_resched();
}
}
EXPORT_SYMBOL_GPL(nf_ct_iterate_cleanup);
diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
index a7ba233..857ae89 100644
--- a/net/netfilter/nfnetlink.c
+++ b/net/netfilter/nfnetlink.c
@@ -311,14 +311,14 @@
#endif
{
nfnl_unlock(subsys_id);
- netlink_ack(skb, nlh, -EOPNOTSUPP);
+ netlink_ack(oskb, nlh, -EOPNOTSUPP);
return kfree_skb(skb);
}
}
if (!ss->commit || !ss->abort) {
nfnl_unlock(subsys_id);
- netlink_ack(skb, nlh, -EOPNOTSUPP);
+ netlink_ack(oskb, nlh, -EOPNOTSUPP);
return kfree_skb(skb);
}
@@ -328,10 +328,12 @@
nlh = nlmsg_hdr(skb);
err = 0;
- if (nlmsg_len(nlh) < sizeof(struct nfgenmsg) ||
- skb->len < nlh->nlmsg_len) {
- err = -EINVAL;
- goto ack;
+ if (nlh->nlmsg_len < NLMSG_HDRLEN ||
+ skb->len < nlh->nlmsg_len ||
+ nlmsg_len(nlh) < sizeof(struct nfgenmsg)) {
+ nfnl_err_reset(&err_list);
+ status |= NFNL_BATCH_FAILURE;
+ goto done;
}
/* Only requests are handled by the kernel */
@@ -406,7 +408,7 @@
* pointing to the batch header.
*/
nfnl_err_reset(&err_list);
- netlink_ack(skb, nlmsg_hdr(oskb), -ENOMEM);
+ netlink_ack(oskb, nlmsg_hdr(oskb), -ENOMEM);
status |= NFNL_BATCH_FAILURE;
goto done;
}
diff --git a/net/netfilter/nfnetlink_cttimeout.c b/net/netfilter/nfnetlink_cttimeout.c
index 94837d2..2671b9d 100644
--- a/net/netfilter/nfnetlink_cttimeout.c
+++ b/net/netfilter/nfnetlink_cttimeout.c
@@ -312,7 +312,7 @@
hlist_nulls_for_each_entry(h, nn, &net->ct.hash[i], hnnode)
untimeout(h, timeout);
}
- nf_conntrack_lock(&nf_conntrack_locks[i % CONNTRACK_LOCKS]);
+ spin_unlock(&nf_conntrack_locks[i % CONNTRACK_LOCKS]);
}
local_bh_enable();
}
diff --git a/net/netfilter/nft_counter.c b/net/netfilter/nft_counter.c
index c7808fc..c9743f7 100644
--- a/net/netfilter/nft_counter.c
+++ b/net/netfilter/nft_counter.c
@@ -100,7 +100,7 @@
cpu_stats = netdev_alloc_pcpu_stats(struct nft_counter_percpu);
if (cpu_stats == NULL)
- return ENOMEM;
+ return -ENOMEM;
preempt_disable();
this_cpu = this_cpu_ptr(cpu_stats);
@@ -138,7 +138,7 @@
cpu_stats = __netdev_alloc_pcpu_stats(struct nft_counter_percpu,
GFP_ATOMIC);
if (cpu_stats == NULL)
- return ENOMEM;
+ return -ENOMEM;
preempt_disable();
this_cpu = this_cpu_ptr(cpu_stats);
diff --git a/net/netfilter/xt_TEE.c b/net/netfilter/xt_TEE.c
index 3eff7b6..6e57a39 100644
--- a/net/netfilter/xt_TEE.c
+++ b/net/netfilter/xt_TEE.c
@@ -38,7 +38,7 @@
return XT_CONTINUE;
}
-#if IS_ENABLED(CONFIG_NF_DUP_IPV6)
+#if IS_ENABLED(CONFIG_IPV6)
static unsigned int
tee_tg6(struct sk_buff *skb, const struct xt_action_param *par)
{
@@ -131,7 +131,7 @@
.destroy = tee_tg_destroy,
.me = THIS_MODULE,
},
-#if IS_ENABLED(CONFIG_NF_DUP_IPV6)
+#if IS_ENABLED(CONFIG_IPV6)
{
.name = "TEE",
.revision = 1,
diff --git a/net/openvswitch/vport-vxlan.c b/net/openvswitch/vport-vxlan.c
index 1605691..5eb7694 100644
--- a/net/openvswitch/vport-vxlan.c
+++ b/net/openvswitch/vport-vxlan.c
@@ -90,7 +90,9 @@
int err;
struct vxlan_config conf = {
.no_share = true,
- .flags = VXLAN_F_COLLECT_METADATA,
+ .flags = VXLAN_F_COLLECT_METADATA | VXLAN_F_UDP_ZERO_CSUM6_RX,
+ /* Don't restrict the packets that can be sent by MTU */
+ .mtu = IP_MAX_MTU,
};
if (!options) {
diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
index b5c2cf2..af1acf0 100644
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -1852,6 +1852,7 @@
}
tp = old_tp;
+ protocol = tc_skb_protocol(skb);
goto reclassify;
#endif
}
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index ab0d538..1099e99 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -60,6 +60,8 @@
#include <net/inet_common.h>
#include <net/inet_ecn.h>
+#define MAX_SCTP_PORT_HASH_ENTRIES (64 * 1024)
+
/* Global data structures. */
struct sctp_globals sctp_globals __read_mostly;
@@ -1355,6 +1357,8 @@
unsigned long limit;
int max_share;
int order;
+ int num_entries;
+ int max_entry_order;
sock_skb_cb_check_size(sizeof(struct sctp_ulpevent));
@@ -1407,14 +1411,24 @@
/* Size and allocate the association hash table.
* The methodology is similar to that of the tcp hash tables.
+ * though not identical. Start by getting a goal size.
*/
if (totalram_pages >= (128 * 1024))
goal = totalram_pages >> (22 - PAGE_SHIFT);
else
goal = totalram_pages >> (24 - PAGE_SHIFT);
- for (order = 0; (1UL << order) < goal; order++)
- ;
+ /* Then compute the page order for said goal */
+ order = get_order(goal);
+
+ /* Now compute the required page order for the maximum sized table we
+ * want to create
+ */
+ max_entry_order = get_order(MAX_SCTP_PORT_HASH_ENTRIES *
+ sizeof(struct sctp_bind_hashbucket));
+
+ /* Limit the page order by that maximum hash table size */
+ order = min(order, max_entry_order);
/* Allocate and initialize the endpoint hash table. */
sctp_ep_hashsize = 64;
@@ -1430,20 +1444,35 @@
INIT_HLIST_HEAD(&sctp_ep_hashtable[i].chain);
}
- /* Allocate and initialize the SCTP port hash table. */
+ /* Allocate and initialize the SCTP port hash table.
+ * Note that order is initialized to start at the max sized
+ * table we want to support. If we can't get that many pages
+ * reduce the order and try again
+ */
do {
- sctp_port_hashsize = (1UL << order) * PAGE_SIZE /
- sizeof(struct sctp_bind_hashbucket);
- if ((sctp_port_hashsize > (64 * 1024)) && order > 0)
- continue;
sctp_port_hashtable = (struct sctp_bind_hashbucket *)
__get_free_pages(GFP_KERNEL | __GFP_NOWARN, order);
} while (!sctp_port_hashtable && --order > 0);
+
if (!sctp_port_hashtable) {
pr_err("Failed bind hash alloc\n");
status = -ENOMEM;
goto err_bhash_alloc;
}
+
+ /* Now compute the number of entries that will fit in the
+ * port hash space we allocated
+ */
+ num_entries = (1UL << order) * PAGE_SIZE /
+ sizeof(struct sctp_bind_hashbucket);
+
+ /* And finish by rounding it down to the nearest power of two.
+ * This wastes some memory, of course, but it's needed because
+ * the hash function operates on the assumption that the number
+ * of entries is a power of two.
+ */
+ sctp_port_hashsize = rounddown_pow_of_two(num_entries);
+
for (i = 0; i < sctp_port_hashsize; i++) {
spin_lock_init(&sctp_port_hashtable[i].lock);
INIT_HLIST_HEAD(&sctp_port_hashtable[i].chain);
@@ -1452,7 +1481,8 @@
if (sctp_transport_hashtable_init())
goto err_thash_alloc;
- pr_info("Hash tables configured (bind %d)\n", sctp_port_hashsize);
+ pr_info("Hash tables configured (bind %d/%d)\n", sctp_port_hashsize,
+ num_entries);
sctp_sysctl_register();
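
The sizing logic above does three things: turn a byte goal into a page order, cap that order at the largest table worth building, and round the entry count that actually fits down to a power of two so the hash function's masking assumption holds. The arithmetic is easy to check in isolation; a hedged userspace rendering with plain loops standing in for the kernel's get_order() and rounddown_pow_of_two() helpers (the bucket size is an assumed value, not the real sizeof):

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Smallest order such that (PAGE_SIZE << order) >= size. */
static unsigned int order_for(unsigned long size)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

/* Largest power of two <= n (n must be nonzero). */
static unsigned long rounddown_pow2(unsigned long n)
{
	unsigned long p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

int main(void)
{
	unsigned long bucket_sz = 24;	/* assumption for sizeof(sctp_bind_hashbucket) */
	unsigned long max_entries = 64 * 1024;
	unsigned int order = order_for(max_entries * bucket_sz);
	unsigned long num_entries = (PAGE_SIZE << order) / bucket_sz;

	printf("order %u fits %lu entries, hash size %lu\n",
	       order, num_entries, rounddown_pow2(num_entries));
	return 0;
}
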
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 5ca2ebf..e878da0 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -5538,6 +5538,7 @@
struct sctp_hmac_algo_param *hmacs;
__u16 data_len = 0;
u32 num_idents;
+ int i;
if (!ep->auth_enable)
return -EACCES;
@@ -5555,8 +5556,12 @@
return -EFAULT;
if (put_user(num_idents, &p->shmac_num_idents))
return -EFAULT;
- if (copy_to_user(p->shmac_idents, hmacs->hmac_ids, data_len))
- return -EFAULT;
+ for (i = 0; i < num_idents; i++) {
+ __u16 hmacid = ntohs(hmacs->hmac_ids[i]);
+
+ if (copy_to_user(&p->shmac_idents[i], &hmacid, sizeof(__u16)))
+ return -EFAULT;
+ }
return 0;
}
diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
index 799e65b..cabf586 100644
--- a/net/sunrpc/auth_gss/auth_gss.c
+++ b/net/sunrpc/auth_gss/auth_gss.c
@@ -740,7 +740,7 @@
default:
printk(KERN_CRIT "%s: bad return from "
"gss_fill_context: %zd\n", __func__, err);
- BUG();
+ gss_msg->msg.errno = -EIO;
}
goto err_release_msg;
}
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index 2b32fd6..273bc3a 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -1225,7 +1225,7 @@
if (bp[0] == '\\' && bp[1] == 'x') {
/* HEX STRING */
bp += 2;
- while (len < bufsize) {
+ while (len < bufsize - 1) {
int h, l;
h = hex_to_bin(bp[0]);
diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
index cc1251d..2dcd764 100644
--- a/net/sunrpc/xprtrdma/backchannel.c
+++ b/net/sunrpc/xprtrdma/backchannel.c
@@ -341,6 +341,8 @@
rqst->rq_reply_bytes_recvd = 0;
rqst->rq_bytes_sent = 0;
rqst->rq_xid = headerp->rm_xid;
+
+ rqst->rq_private_buf.len = size;
set_bit(RPC_BC_PA_IN_USE, &rqst->rq_bc_pa_state);
buf = &rqst->rq_rcv_buf;
diff --git a/net/tipc/link.c b/net/tipc/link.c
index 0c2944f..347cdc9 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -1973,8 +1973,10 @@
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_family,
NLM_F_MULTI, TIPC_NL_LINK_GET);
- if (!hdr)
+ if (!hdr) {
+ tipc_bcast_unlock(net);
return -EMSGSIZE;
+ }
attrs = nla_nest_start(msg->skb, TIPC_NLA_LINK);
if (!attrs)
diff --git a/net/tipc/node.c b/net/tipc/node.c
index fa97d96..9d7a16f 100644
--- a/net/tipc/node.c
+++ b/net/tipc/node.c
@@ -346,12 +346,6 @@
skb_queue_head_init(&n->bc_entry.inputq2);
for (i = 0; i < MAX_BEARERS; i++)
spin_lock_init(&n->links[i].lock);
- hlist_add_head_rcu(&n->hash, &tn->node_htable[tipc_hashfn(addr)]);
- list_for_each_entry_rcu(temp_node, &tn->node_list, list) {
- if (n->addr < temp_node->addr)
- break;
- }
- list_add_tail_rcu(&n->list, &temp_node->list);
n->state = SELF_DOWN_PEER_LEAVING;
n->signature = INVALID_NODE_SIG;
n->active_links[0] = INVALID_BEARER_ID;
@@ -372,6 +366,12 @@
tipc_node_get(n);
setup_timer(&n->timer, tipc_node_timeout, (unsigned long)n);
n->keepalive_intv = U32_MAX;
+ hlist_add_head_rcu(&n->hash, &tn->node_htable[tipc_hashfn(addr)]);
+ list_for_each_entry_rcu(temp_node, &tn->node_list, list) {
+ if (n->addr < temp_node->addr)
+ break;
+ }
+ list_add_tail_rcu(&n->list, &temp_node->list);
exit:
spin_unlock_bh(&tn->node_list_lock);
return n;
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 49d5093..f75f847 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1496,7 +1496,7 @@
UNIXCB(skb).fp = NULL;
for (i = scm->fp->count-1; i >= 0; i--)
- unix_notinflight(scm->fp->fp[i]);
+ unix_notinflight(scm->fp->user, scm->fp->fp[i]);
}
static void unix_destruct_scm(struct sk_buff *skb)
@@ -1561,7 +1561,7 @@
return -ENOMEM;
for (i = scm->fp->count - 1; i >= 0; i--)
- unix_inflight(scm->fp->fp[i]);
+ unix_inflight(scm->fp->user, scm->fp->fp[i]);
return max_level;
}
@@ -1781,7 +1781,12 @@
goto out_unlock;
}
- if (unlikely(unix_peer(other) != sk && unix_recvq_full(other))) {
+ /* other == sk && unix_peer(other) != sk if
+ * - unix_peer(sk) == NULL, destination address bound to sk
+ * - unix_peer(sk) == sk by time of get but disconnected before lock
+ */
+ if (other != sk &&
+ unlikely(unix_peer(other) != sk && unix_recvq_full(other))) {
if (timeo) {
timeo = unix_wait_for_peer(other, timeo);
@@ -2277,13 +2282,15 @@
size_t size = state->size;
unsigned int last_len;
- err = -EINVAL;
- if (sk->sk_state != TCP_ESTABLISHED)
+ if (unlikely(sk->sk_state != TCP_ESTABLISHED)) {
+ err = -EINVAL;
goto out;
+ }
- err = -EOPNOTSUPP;
- if (flags & MSG_OOB)
+ if (unlikely(flags & MSG_OOB)) {
+ err = -EOPNOTSUPP;
goto out;
+ }
target = sock_rcvlowat(sk, flags & MSG_WAITALL, size);
timeo = sock_rcvtimeo(sk, noblock);
@@ -2305,6 +2312,7 @@
bool drop_skb;
struct sk_buff *skb, *last;
+redo:
unix_state_lock(sk);
if (sock_flag(sk, SOCK_DEAD)) {
err = -ECONNRESET;
@@ -2329,9 +2337,11 @@
goto unlock;
unix_state_unlock(sk);
- err = -EAGAIN;
- if (!timeo)
+ if (!timeo) {
+ err = -EAGAIN;
break;
+ }
+
mutex_unlock(&u->readlock);
timeo = unix_stream_data_wait(sk, timeo, last,
@@ -2344,7 +2354,7 @@
}
mutex_lock(&u->readlock);
- continue;
+ goto redo;
unlock:
unix_state_unlock(sk);
break;
diff --git a/net/unix/diag.c b/net/unix/diag.c
index c512f64..4d96797 100644
--- a/net/unix/diag.c
+++ b/net/unix/diag.c
@@ -220,7 +220,7 @@
return skb->len;
}
-static struct sock *unix_lookup_by_ino(int ino)
+static struct sock *unix_lookup_by_ino(unsigned int ino)
{
int i;
struct sock *sk;
diff --git a/net/unix/garbage.c b/net/unix/garbage.c
index 8fcdc22..6a0d485 100644
--- a/net/unix/garbage.c
+++ b/net/unix/garbage.c
@@ -116,7 +116,7 @@
* descriptor if it is for an AF_UNIX socket.
*/
-void unix_inflight(struct file *fp)
+void unix_inflight(struct user_struct *user, struct file *fp)
{
struct sock *s = unix_get_socket(fp);
@@ -133,11 +133,11 @@
}
unix_tot_inflight++;
}
- fp->f_cred->user->unix_inflight++;
+ user->unix_inflight++;
spin_unlock(&unix_gc_lock);
}
-void unix_notinflight(struct file *fp)
+void unix_notinflight(struct user_struct *user, struct file *fp)
{
struct sock *s = unix_get_socket(fp);
@@ -152,7 +152,7 @@
list_del_init(&u->link);
unix_tot_inflight--;
}
- fp->f_cred->user->unix_inflight--;
+ user->unix_inflight--;
spin_unlock(&unix_gc_lock);
}
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 7fd1220..bbe65dc 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -1557,8 +1557,6 @@
if (err < 0)
goto out;
- prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
-
while (total_written < len) {
ssize_t written;
@@ -1578,7 +1576,9 @@
goto out_wait;
release_sock(sk);
+ prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
timeout = schedule_timeout(timeout);
+ finish_wait(sk_sleep(sk), &wait);
lock_sock(sk);
if (signal_pending(current)) {
err = sock_intr_errno(timeout);
@@ -1588,8 +1588,6 @@
goto out_wait;
}
- prepare_to_wait(sk_sleep(sk), &wait,
- TASK_INTERRUPTIBLE);
}
/* These checks occur both as part of and after the loop
@@ -1635,7 +1633,6 @@
out_wait:
if (total_written > 0)
err = total_written;
- finish_wait(sk_sleep(sk), &wait);
out:
release_sock(sk);
return err;
@@ -1716,7 +1713,6 @@
if (err < 0)
goto out;
- prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
while (1) {
s64 ready = vsock_stream_has_data(vsk);
@@ -1727,7 +1723,7 @@
*/
err = -ENOMEM;
- goto out_wait;
+ goto out;
} else if (ready > 0) {
ssize_t read;
@@ -1750,7 +1746,7 @@
vsk, target, read,
!(flags & MSG_PEEK), &recv_data);
if (err < 0)
- goto out_wait;
+ goto out;
if (read >= target || flags & MSG_PEEK)
break;
@@ -1773,7 +1769,9 @@
break;
release_sock(sk);
+ prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
timeout = schedule_timeout(timeout);
+ finish_wait(sk_sleep(sk), &wait);
lock_sock(sk);
if (signal_pending(current)) {
@@ -1783,9 +1781,6 @@
err = -EAGAIN;
break;
}
-
- prepare_to_wait(sk_sleep(sk), &wait,
- TASK_INTERRUPTIBLE);
}
}
@@ -1816,8 +1811,6 @@
err = copied;
}
-out_wait:
- finish_wait(sk_sleep(sk), &wait);
out:
release_sock(sk);
return err;
diff --git a/scripts/prune-kernel b/scripts/prune-kernel
new file mode 100755
index 0000000..ab5034e
--- /dev/null
+++ b/scripts/prune-kernel
@@ -0,0 +1,20 @@
+#!/bin/bash
+
+# Because I use CONFIG_LOCALVERSION_AUTO, no two builds share the same
+# version, so /boot and /lib/modules/ eventually fill up.
+# Dumb script to purge that stuff:
+
+for f in "$@"
+do
+ if rpm -qf "/lib/modules/$f" >/dev/null; then
+ echo "keeping $f (installed from rpm)"
+ elif [ $(uname -r) = "$f" ]; then
+ echo "keeping $f (running kernel) "
+ else
+ echo "removing $f"
+ rm -f "/boot/initramfs-$f.img" "/boot/System.map-$f"
+ rm -f "/boot/vmlinuz-$f" "/boot/config-$f"
+ rm -rf "/lib/modules/$f"
+ new-kernel-pkg --remove $f
+ fi
+done
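
The script takes the kernel release strings to consider as its arguments, so a typical invocation (with hypothetical version names) would be "scripts/prune-kernel 4.4.0-rc5+ 4.4.0-rc6+": anything installed from an rpm or currently running is kept, and the rest is purged. Note that new-kernel-pkg is a Fedora/RHEL tool, so the script assumes such a distribution.
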
diff --git a/scripts/ver_linux b/scripts/ver_linux
index 024a11a..0d8bd29 100755
--- a/scripts/ver_linux
+++ b/scripts/ver_linux
@@ -1,6 +1,6 @@
#!/bin/sh
# Before running this script please ensure that your PATH is
-# typical as you use for compilation/istallation. I use
+# typical as you use for compilation/installation. I use
# /bin /sbin /usr/bin /usr/sbin /usr/local/bin, but it may
# differ on your system.
#
diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
index f716025..e6ea9d4 100644
--- a/security/integrity/evm/evm_main.c
+++ b/security/integrity/evm/evm_main.c
@@ -23,6 +23,7 @@
#include <linux/integrity.h>
#include <linux/evm.h>
#include <crypto/hash.h>
+#include <crypto/algapi.h>
#include "evm.h"
int evm_initialized;
@@ -148,7 +149,7 @@
xattr_value_len, calc.digest);
if (rc)
break;
- rc = memcmp(xattr_data->digest, calc.digest,
+ rc = crypto_memneq(xattr_data->digest, calc.digest,
sizeof(calc.digest));
if (rc)
rc = -EINVAL;
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index f8110cf..f1ab715 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -3249,7 +3249,7 @@
static void selinux_inode_getsecid(struct inode *inode, u32 *secid)
{
- struct inode_security_struct *isec = inode_security(inode);
+ struct inode_security_struct *isec = inode_security_novalidate(inode);
*secid = isec->sid;
}
diff --git a/security/selinux/nlmsgtab.c b/security/selinux/nlmsgtab.c
index 2bbb418..8495b93 100644
--- a/security/selinux/nlmsgtab.c
+++ b/security/selinux/nlmsgtab.c
@@ -83,6 +83,7 @@
{ TCPDIAG_GETSOCK, NETLINK_TCPDIAG_SOCKET__NLMSG_READ },
{ DCCPDIAG_GETSOCK, NETLINK_TCPDIAG_SOCKET__NLMSG_READ },
{ SOCK_DIAG_BY_FAMILY, NETLINK_TCPDIAG_SOCKET__NLMSG_READ },
+ { SOCK_DESTROY, NETLINK_TCPDIAG_SOCKET__NLMSG_WRITE },
};
static struct nlmsg_perm nlmsg_xfrm_perms[] =
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index fadd3eb..9106d8e 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -74,6 +74,18 @@
static DEFINE_RWLOCK(snd_pcm_link_rwlock);
static DECLARE_RWSEM(snd_pcm_link_rwsem);
+/* A writer on an rwsem may block readers even while it is still waiting
+ * in the queue, and this may lead to a deadlock when a code path takes
+ * the read sem twice (e.g. once in snd_pcm_action_nonatomic() and again
+ * in snd_pcm_stream_lock()). As a (suboptimal) workaround, let the
+ * writer spin until it gets the lock.
+ */
+static inline void down_write_nonblock(struct rw_semaphore *lock)
+{
+ while (!down_write_trylock(lock))
+ cond_resched();
+}
+
/**
* snd_pcm_stream_lock - Lock the PCM stream
* @substream: PCM substream
@@ -1813,7 +1825,7 @@
res = -ENOMEM;
goto _nolock;
}
- down_write(&snd_pcm_link_rwsem);
+ down_write_nonblock(&snd_pcm_link_rwsem);
write_lock_irq(&snd_pcm_link_rwlock);
if (substream->runtime->status->state == SNDRV_PCM_STATE_OPEN ||
substream->runtime->status->state != substream1->runtime->status->state ||
@@ -1860,7 +1872,7 @@
struct snd_pcm_substream *s;
int res = 0;
- down_write(&snd_pcm_link_rwsem);
+ down_write_nonblock(&snd_pcm_link_rwsem);
write_lock_irq(&snd_pcm_link_rwlock);
if (!snd_pcm_stream_linked(substream)) {
res = -EALREADY;
diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
index 8010766..c850345 100644
--- a/sound/core/seq/seq_memory.c
+++ b/sound/core/seq/seq_memory.c
@@ -383,15 +383,20 @@
if (snd_BUG_ON(!pool))
return -EINVAL;
- if (pool->ptr) /* should be atomic? */
- return 0;
- pool->ptr = vmalloc(sizeof(struct snd_seq_event_cell) * pool->size);
- if (!pool->ptr)
+ cellptr = vmalloc(sizeof(struct snd_seq_event_cell) * pool->size);
+ if (!cellptr)
return -ENOMEM;
/* add new cells to the free cell list */
spin_lock_irqsave(&pool->lock, flags);
+ if (pool->ptr) {
+ spin_unlock_irqrestore(&pool->lock, flags);
+ vfree(cellptr);
+ return 0;
+ }
+
+ pool->ptr = cellptr;
pool->free = NULL;
for (cell = 0; cell < pool->size; cell++) {
diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c
index 921fb2b..fe686ee 100644
--- a/sound/core/seq/seq_ports.c
+++ b/sound/core/seq/seq_ports.c
@@ -535,19 +535,22 @@
bool is_src, bool ack)
{
struct snd_seq_port_subs_info *grp;
+ struct list_head *list;
+ bool empty;
grp = is_src ? &port->c_src : &port->c_dest;
+ list = is_src ? &subs->src_list : &subs->dest_list;
down_write(&grp->list_mutex);
write_lock_irq(&grp->list_lock);
- if (is_src)
- list_del(&subs->src_list);
- else
- list_del(&subs->dest_list);
+ empty = list_empty(list);
+ if (!empty)
+ list_del_init(list);
grp->exclusive = 0;
write_unlock_irq(&grp->list_lock);
up_write(&grp->list_mutex);
- unsubscribe_port(client, port, grp, &subs->info, ack);
+ if (!empty)
+ unsubscribe_port(client, port, grp, &subs->info, ack);
}
/* connect two ports */
diff --git a/sound/core/timer.c b/sound/core/timer.c
index 9b513a0..dca817f 100644
--- a/sound/core/timer.c
+++ b/sound/core/timer.c
@@ -422,7 +422,7 @@
spin_lock_irqsave(&timer->lock, flags);
list_for_each_entry(ts, &ti->slave_active_head, active_list)
if (ts->ccallback)
- ts->ccallback(ti, event + 100, &tstamp, resolution);
+ ts->ccallback(ts, event + 100, &tstamp, resolution);
spin_unlock_irqrestore(&timer->lock, flags);
}
@@ -518,9 +518,13 @@
spin_unlock_irqrestore(&slave_active_lock, flags);
return -EBUSY;
}
+ if (timeri->timer)
+ spin_lock(&timeri->timer->lock);
timeri->flags &= ~SNDRV_TIMER_IFLG_RUNNING;
list_del_init(&timeri->ack_list);
list_del_init(&timeri->active_list);
+ if (timeri->timer)
+ spin_unlock(&timeri->timer->lock);
spin_unlock_irqrestore(&slave_active_lock, flags);
goto __end;
}
@@ -1929,6 +1933,7 @@
{
struct snd_timer_user *tu;
long result = 0, unit;
+ int qhead;
int err = 0;
tu = file->private_data;
@@ -1940,7 +1945,7 @@
if ((file->f_flags & O_NONBLOCK) != 0 || result > 0) {
err = -EAGAIN;
- break;
+ goto _error;
}
set_current_state(TASK_INTERRUPTIBLE);
@@ -1955,42 +1960,37 @@
if (tu->disconnected) {
err = -ENODEV;
- break;
+ goto _error;
}
if (signal_pending(current)) {
err = -ERESTARTSYS;
- break;
+ goto _error;
}
}
+ qhead = tu->qhead++;
+ tu->qhead %= tu->queue_size;
spin_unlock_irq(&tu->qlock);
- if (err < 0)
- goto _error;
if (tu->tread) {
- if (copy_to_user(buffer, &tu->tqueue[tu->qhead++],
- sizeof(struct snd_timer_tread))) {
+ if (copy_to_user(buffer, &tu->tqueue[qhead],
+ sizeof(struct snd_timer_tread)))
err = -EFAULT;
- goto _error;
- }
} else {
- if (copy_to_user(buffer, &tu->queue[tu->qhead++],
- sizeof(struct snd_timer_read))) {
+ if (copy_to_user(buffer, &tu->queue[qhead],
+ sizeof(struct snd_timer_read)))
err = -EFAULT;
- goto _error;
- }
}
- tu->qhead %= tu->queue_size;
-
- result += unit;
- buffer += unit;
-
spin_lock_irq(&tu->qlock);
tu->qused--;
+ if (err < 0)
+ goto _error;
+ result += unit;
+ buffer += unit;
}
- spin_unlock_irq(&tu->qlock);
_error:
+ spin_unlock_irq(&tu->qlock);
return result > 0 ? result : err;
}
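The reshuffling above follows a pattern worth spelling out: copy_to_user() may fault and sleep, so it must not run under tu->qlock; the fix therefore claims the queue slot index while the lock is held, drops the lock for the copy, and retakes it for the accounting. Condensed sketch, reusing the names from the hunk:

	spin_lock_irq(&tu->qlock);
	qhead = tu->qhead++;		/* claim a slot while locked */
	tu->qhead %= tu->queue_size;
	spin_unlock_irq(&tu->qlock);

	/* may fault and sleep, so it runs outside the spinlock */
	if (copy_to_user(buffer, &tu->tqueue[qhead],
			 sizeof(struct snd_timer_tread)))
		err = -EFAULT;

	spin_lock_irq(&tu->qlock);
	tu->qused--;			/* release the slot */
	spin_unlock_irq(&tu->qlock);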
diff --git a/sound/drivers/dummy.c b/sound/drivers/dummy.c
index bde3330..c0f8f61 100644
--- a/sound/drivers/dummy.c
+++ b/sound/drivers/dummy.c
@@ -87,7 +87,7 @@
module_param(fake_buffer, bool, 0444);
MODULE_PARM_DESC(fake_buffer, "Fake buffer allocations.");
#ifdef CONFIG_HIGH_RES_TIMERS
-module_param(hrtimer, bool, 0444);
+module_param(hrtimer, bool, 0644);
MODULE_PARM_DESC(hrtimer, "Use hrtimer as the timer source.");
#endif
@@ -109,6 +109,9 @@
snd_pcm_uframes_t (*pointer)(struct snd_pcm_substream *);
};
+#define get_dummy_ops(substream) \
+ (*(const struct dummy_timer_ops **)(substream)->runtime->private_data)
+
struct dummy_model {
const char *name;
int (*playback_constraints)(struct snd_pcm_runtime *runtime);
@@ -137,7 +140,6 @@
int iobox;
struct snd_kcontrol *cd_volume_ctl;
struct snd_kcontrol *cd_switch_ctl;
- const struct dummy_timer_ops *timer_ops;
};
/*
@@ -231,6 +233,8 @@
*/
struct dummy_systimer_pcm {
+ /* ops must be the first item */
+ const struct dummy_timer_ops *timer_ops;
spinlock_t lock;
struct timer_list timer;
unsigned long base_time;
@@ -366,6 +370,8 @@
*/
struct dummy_hrtimer_pcm {
+ /* ops must be the first item */
+ const struct dummy_timer_ops *timer_ops;
ktime_t base_time;
ktime_t period_time;
atomic_t running;
@@ -492,31 +498,25 @@
static int dummy_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
{
- struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
-
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
case SNDRV_PCM_TRIGGER_RESUME:
- return dummy->timer_ops->start(substream);
+ return get_dummy_ops(substream)->start(substream);
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_SUSPEND:
- return dummy->timer_ops->stop(substream);
+ return get_dummy_ops(substream)->stop(substream);
}
return -EINVAL;
}
static int dummy_pcm_prepare(struct snd_pcm_substream *substream)
{
- struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
-
- return dummy->timer_ops->prepare(substream);
+ return get_dummy_ops(substream)->prepare(substream);
}
static snd_pcm_uframes_t dummy_pcm_pointer(struct snd_pcm_substream *substream)
{
- struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
-
- return dummy->timer_ops->pointer(substream);
+ return get_dummy_ops(substream)->pointer(substream);
}
static struct snd_pcm_hardware dummy_pcm_hardware = {
@@ -562,17 +562,19 @@
struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
struct dummy_model *model = dummy->model;
struct snd_pcm_runtime *runtime = substream->runtime;
+ const struct dummy_timer_ops *ops;
int err;
- dummy->timer_ops = &dummy_systimer_ops;
+ ops = &dummy_systimer_ops;
#ifdef CONFIG_HIGH_RES_TIMERS
if (hrtimer)
- dummy->timer_ops = &dummy_hrtimer_ops;
+ ops = &dummy_hrtimer_ops;
#endif
- err = dummy->timer_ops->create(substream);
+ err = ops->create(substream);
if (err < 0)
return err;
+ get_dummy_ops(substream) = ops;
runtime->hw = dummy->pcm_hw;
if (substream->pcm->device & 1) {
@@ -594,7 +596,7 @@
err = model->capture_constraints(substream->runtime);
}
if (err < 0) {
- dummy->timer_ops->free(substream);
+ get_dummy_ops(substream)->free(substream);
return err;
}
return 0;
@@ -602,8 +604,7 @@
static int dummy_pcm_close(struct snd_pcm_substream *substream)
{
- struct snd_dummy *dummy = snd_pcm_substream_chip(substream);
- dummy->timer_ops->free(substream);
+ get_dummy_ops(substream)->free(substream);
return 0;
}
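The get_dummy_ops() macro above leans on a C guarantee that is easy to miss: a pointer to a struct may be converted to a pointer to its first member, so as long as every private-data struct stores the ops pointer first (hence the "ops must be the first item" comments), a single cast works for all of them. Minimal standalone sketch with hypothetical names:

	struct timer_ops;			/* hypothetical ops type */

	struct systimer_pcm {
		const struct timer_ops *timer_ops;	/* must be first */
		int sys_state;
	};

	struct hrtimer_pcm {
		const struct timer_ops *timer_ops;	/* must be first */
		long hr_state;
	};

	/* valid for either struct: the struct's address is also the
	 * address of its first member */
	#define pcm_ops(priv) (*(const struct timer_ops **)(priv))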
diff --git a/sound/firewire/digi00x/amdtp-dot.c b/sound/firewire/digi00x/amdtp-dot.c
index b02a5e8c..0ac92ab 100644
--- a/sound/firewire/digi00x/amdtp-dot.c
+++ b/sound/firewire/digi00x/amdtp-dot.c
@@ -63,7 +63,7 @@
#define BYTE_PER_SAMPLE (4)
#define MAGIC_DOT_BYTE (2)
#define MAGIC_BYTE_OFF(x) (((x) * BYTE_PER_SAMPLE) + MAGIC_DOT_BYTE)
-static const u8 dot_scrt(const u8 idx, const unsigned int off)
+static u8 dot_scrt(const u8 idx, const unsigned int off)
{
/*
* the length of the added pattern only depends on the lower nibble
diff --git a/sound/firewire/tascam/tascam-transaction.c b/sound/firewire/tascam/tascam-transaction.c
index 904ce03..040a96d 100644
--- a/sound/firewire/tascam/tascam-transaction.c
+++ b/sound/firewire/tascam/tascam-transaction.c
@@ -230,6 +230,7 @@
return err;
error:
fw_core_remove_address_handler(&tscm->async_handler);
+ tscm->async_handler.callback_data = NULL;
return err;
}
@@ -276,6 +277,9 @@
__be32 reg;
unsigned int i;
+ if (tscm->async_handler.callback_data == NULL)
+ return;
+
/* Turn off FireWire LED. */
reg = cpu_to_be32(0x0000008e);
snd_fw_transaction(tscm->unit, TCODE_WRITE_QUADLET_REQUEST,
@@ -297,6 +301,8 @@
			 &reg, sizeof(reg), 0);
fw_core_remove_address_handler(&tscm->async_handler);
+ tscm->async_handler.callback_data = NULL;
+
for (i = 0; i < TSCM_MIDI_OUT_PORT_MAX; i++)
snd_fw_async_midi_port_destroy(&tscm->out_ports[i]);
}
diff --git a/sound/firewire/tascam/tascam.c b/sound/firewire/tascam/tascam.c
index ee0bc18..e281c33 100644
--- a/sound/firewire/tascam/tascam.c
+++ b/sound/firewire/tascam/tascam.c
@@ -21,7 +21,6 @@
.pcm_playback_analog_channels = 8,
.midi_capture_ports = 4,
.midi_playback_ports = 4,
- .is_controller = true,
},
{
.name = "FW-1082",
@@ -31,9 +30,16 @@
.pcm_playback_analog_channels = 2,
.midi_capture_ports = 2,
.midi_playback_ports = 2,
- .is_controller = true,
},
- /* FW-1804 may be supported. */
+ {
+ .name = "FW-1804",
+ .has_adat = true,
+ .has_spdif = true,
+ .pcm_capture_analog_channels = 8,
+ .pcm_playback_analog_channels = 2,
+ .midi_capture_ports = 2,
+ .midi_playback_ports = 4,
+ },
};
static int identify_model(struct snd_tscm *tscm)
diff --git a/sound/firewire/tascam/tascam.h b/sound/firewire/tascam/tascam.h
index 2d028d2..30ab77e 100644
--- a/sound/firewire/tascam/tascam.h
+++ b/sound/firewire/tascam/tascam.h
@@ -39,7 +39,6 @@
unsigned int pcm_playback_analog_channels;
unsigned int midi_capture_ports;
unsigned int midi_playback_ports;
- bool is_controller;
};
#define TSCM_MIDI_IN_PORT_MAX 4
@@ -72,9 +71,6 @@
struct snd_fw_async_midi_port out_ports[TSCM_MIDI_OUT_PORT_MAX];
u8 running_status[TSCM_MIDI_OUT_PORT_MAX];
bool on_sysex[TSCM_MIDI_OUT_PORT_MAX];
-
- /* For control messages. */
- struct snd_firewire_tascam_status *status;
};
#define TSCM_ADDR_BASE 0xffff00000000ull
diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
index b5a17cb..8c48623 100644
--- a/sound/hda/hdac_controller.c
+++ b/sound/hda/hdac_controller.c
@@ -426,18 +426,22 @@
* @bus: HD-audio core bus
* @status: INTSTS register value
 * @ack: callback to be called for woken streams
+ *
+ * Returns the bits of handled streams, or zero if no stream is handled.
*/
-void snd_hdac_bus_handle_stream_irq(struct hdac_bus *bus, unsigned int status,
+int snd_hdac_bus_handle_stream_irq(struct hdac_bus *bus, unsigned int status,
void (*ack)(struct hdac_bus *,
struct hdac_stream *))
{
struct hdac_stream *azx_dev;
u8 sd_status;
+ int handled = 0;
list_for_each_entry(azx_dev, &bus->stream_list, list) {
if (status & azx_dev->sd_int_sta_mask) {
sd_status = snd_hdac_stream_readb(azx_dev, SD_STS);
snd_hdac_stream_writeb(azx_dev, SD_STS, SD_INT_MASK);
+ handled |= 1 << azx_dev->index;
if (!azx_dev->substream || !azx_dev->running ||
!(sd_status & SD_INT_COMPLETE))
continue;
@@ -445,6 +449,7 @@
ack(bus, azx_dev);
}
}
+ return handled;
}
EXPORT_SYMBOL_GPL(snd_hdac_bus_handle_stream_irq);
diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c
index 37cf9ce..27de801 100644
--- a/sound/pci/hda/hda_controller.c
+++ b/sound/pci/hda/hda_controller.c
@@ -930,6 +930,8 @@
struct azx *chip = dev_id;
struct hdac_bus *bus = azx_bus(chip);
u32 status;
+ bool active, handled = false;
+ int repeat = 0; /* count for avoiding endless loop */
#ifdef CONFIG_PM
if (azx_has_pm_runtime(chip))
@@ -939,33 +941,36 @@
spin_lock(&bus->reg_lock);
- if (chip->disabled) {
- spin_unlock(&bus->reg_lock);
- return IRQ_NONE;
- }
+ if (chip->disabled)
+ goto unlock;
- status = azx_readl(chip, INTSTS);
- if (status == 0 || status == 0xffffffff) {
- spin_unlock(&bus->reg_lock);
- return IRQ_NONE;
- }
+ do {
+ status = azx_readl(chip, INTSTS);
+ if (status == 0 || status == 0xffffffff)
+ break;
- snd_hdac_bus_handle_stream_irq(bus, status, stream_update);
+ handled = true;
+ active = false;
+ if (snd_hdac_bus_handle_stream_irq(bus, status, stream_update))
+ active = true;
- /* clear rirb int */
- status = azx_readb(chip, RIRBSTS);
- if (status & RIRB_INT_MASK) {
- if (status & RIRB_INT_RESPONSE) {
- if (chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND)
- udelay(80);
- snd_hdac_bus_update_rirb(bus);
+ /* clear rirb int */
+ status = azx_readb(chip, RIRBSTS);
+ if (status & RIRB_INT_MASK) {
+ active = true;
+ if (status & RIRB_INT_RESPONSE) {
+ if (chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND)
+ udelay(80);
+ snd_hdac_bus_update_rirb(bus);
+ }
+ azx_writeb(chip, RIRBSTS, RIRB_INT_MASK);
}
- azx_writeb(chip, RIRBSTS, RIRB_INT_MASK);
- }
+ } while (active && ++repeat < 10);
+ unlock:
spin_unlock(&bus->reg_lock);
- return IRQ_HANDLED;
+ return IRQ_RETVAL(handled);
}
EXPORT_SYMBOL_GPL(azx_interrupt);
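Two things happen in the interrupt handler rework above: the status registers are re-read in a bounded loop (at most 10 passes) so a burst of events cannot wedge the CPU in the handler, and the return value now tells the genirq core whether anything was actually serviced. IRQ_RETVAL(x) evaluates to IRQ_HANDLED when x is nonzero and IRQ_NONE otherwise, which keeps the spurious-interrupt detection working. A minimal sketch (service_events() is a hypothetical helper returning true while the device reports pending work):

	static irqreturn_t my_isr(int irq, void *dev_id)
	{
		bool handled = false;
		int repeat = 0;		/* bound to avoid an endless loop */

		while (service_events(dev_id) && ++repeat < 10)
			handled = true;

		return IRQ_RETVAL(handled);	/* IRQ_HANDLED iff nonzero */
	}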
diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
index 30c8efe..7ca5b89 100644
--- a/sound/pci/hda/hda_generic.c
+++ b/sound/pci/hda/hda_generic.c
@@ -4028,9 +4028,9 @@
struct hda_jack_callback *jack,
bool on)
{
- if (jack && jack->tbl->nid)
+ if (jack && jack->nid)
sync_power_state_change(codec,
- set_pin_power_jack(codec, jack->tbl->nid, on));
+ set_pin_power_jack(codec, jack->nid, on));
}
/* callback only doing power up -- called at first */
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index 4045dca..e5240cb 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -363,7 +363,10 @@
((pci)->device == 0x0d0c) || \
((pci)->device == 0x160c))
-#define IS_BROXTON(pci) ((pci)->device == 0x5a98)
+#define IS_SKL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0xa170)
+#define IS_SKL_LP(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x9d70)
+#define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
+#define IS_SKL_PLUS(pci) (IS_SKL(pci) || IS_SKL_LP(pci) || IS_BXT(pci))
static char *driver_short_names[] = {
[AZX_DRIVER_ICH] = "HDA Intel",
@@ -540,13 +543,13 @@
if (chip->driver_caps & AZX_DCAPS_I915_POWERWELL)
snd_hdac_set_codec_wakeup(bus, true);
- if (IS_BROXTON(pci)) {
+ if (IS_SKL_PLUS(pci)) {
pci_read_config_dword(pci, INTEL_HDA_CGCTL, &val);
val = val & ~INTEL_HDA_CGCTL_MISCBDCGE;
pci_write_config_dword(pci, INTEL_HDA_CGCTL, val);
}
azx_init_chip(chip, full_reset);
- if (IS_BROXTON(pci)) {
+ if (IS_SKL_PLUS(pci)) {
pci_read_config_dword(pci, INTEL_HDA_CGCTL, &val);
val = val | INTEL_HDA_CGCTL_MISCBDCGE;
pci_write_config_dword(pci, INTEL_HDA_CGCTL, val);
@@ -555,7 +558,7 @@
snd_hdac_set_codec_wakeup(bus, false);
/* reduce dma latency to avoid noise */
- if (IS_BROXTON(pci))
+ if (IS_BXT(pci))
bxt_reduce_dma_latency(chip);
}
@@ -977,11 +980,6 @@
/* put codec down to D3 at hibernation for Intel SKL+;
* otherwise BIOS may still access the codec and screw up the driver
*/
-#define IS_SKL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0xa170)
-#define IS_SKL_LP(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x9d70)
-#define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
-#define IS_SKL_PLUS(pci) (IS_SKL(pci) || IS_SKL_LP(pci) || IS_BXT(pci))
-
static int azx_freeze_noirq(struct device *dev)
{
struct pci_dev *pci = to_pci_dev(dev);
@@ -2168,10 +2166,10 @@
struct hda_intel *hda;
if (card) {
- /* flush the pending probing work */
+ /* cancel the pending probing work */
chip = card->private_data;
hda = container_of(chip, struct hda_intel, chip);
- flush_work(&hda->probe_work);
+ cancel_work_sync(&hda->probe_work);
snd_card_free(card);
}
diff --git a/sound/pci/hda/hda_jack.c b/sound/pci/hda/hda_jack.c
index c945e25..a33234e 100644
--- a/sound/pci/hda/hda_jack.c
+++ b/sound/pci/hda/hda_jack.c
@@ -259,7 +259,7 @@
if (!callback)
return ERR_PTR(-ENOMEM);
callback->func = func;
- callback->tbl = jack;
+ callback->nid = jack->nid;
callback->next = jack->callback;
jack->callback = callback;
}
diff --git a/sound/pci/hda/hda_jack.h b/sound/pci/hda/hda_jack.h
index 858708a..e9814c0 100644
--- a/sound/pci/hda/hda_jack.h
+++ b/sound/pci/hda/hda_jack.h
@@ -21,7 +21,7 @@
typedef void (*hda_jack_callback_fn) (struct hda_codec *, struct hda_jack_callback *);
struct hda_jack_callback {
- struct hda_jack_tbl *tbl;
+ hda_nid_t nid;
hda_jack_callback_fn func;
unsigned int private_data; /* arbitrary data */
struct hda_jack_callback *next;
diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
index 4ef2259..9ceb2bc 100644
--- a/sound/pci/hda/patch_ca0132.c
+++ b/sound/pci/hda/patch_ca0132.c
@@ -4427,13 +4427,16 @@
static void hp_callback(struct hda_codec *codec, struct hda_jack_callback *cb)
{
struct ca0132_spec *spec = codec->spec;
+ struct hda_jack_tbl *tbl;
/* Delay enabling the HP amp, to let the mic-detection
* state machine run.
*/
cancel_delayed_work_sync(&spec->unsol_hp_work);
schedule_delayed_work(&spec->unsol_hp_work, msecs_to_jiffies(500));
- cb->tbl->block_report = 1;
+ tbl = snd_hda_jack_tbl_get(codec, cb->nid);
+ if (tbl)
+ tbl->block_report = 1;
}
static void amic_callback(struct hda_codec *codec, struct hda_jack_callback *cb)
diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
index 1f52b55..8ee78db 100644
--- a/sound/pci/hda/patch_hdmi.c
+++ b/sound/pci/hda/patch_hdmi.c
@@ -448,7 +448,8 @@
eld = &per_pin->sink_eld;
mutex_lock(&per_pin->lock);
- if (eld->eld_size > ARRAY_SIZE(ucontrol->value.bytes.data)) {
+ if (eld->eld_size > ARRAY_SIZE(ucontrol->value.bytes.data) ||
+ eld->eld_size > ELD_MAX_SIZE) {
mutex_unlock(&per_pin->lock);
snd_BUG();
return -EINVAL;
@@ -1193,7 +1194,7 @@
static void jack_callback(struct hda_codec *codec,
struct hda_jack_callback *jack)
{
- check_presence_and_report(codec, jack->tbl->nid);
+ check_presence_and_report(codec, jack->nid);
}
static void hdmi_intrinsic_event(struct hda_codec *codec, unsigned int res)
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 21992fb..1f357cd 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -282,7 +282,7 @@
uctl = kzalloc(sizeof(*uctl), GFP_KERNEL);
if (!uctl)
return;
- val = snd_hda_codec_read(codec, jack->tbl->nid, 0,
+ val = snd_hda_codec_read(codec, jack->nid, 0,
AC_VERB_GET_VOLUME_KNOB_CONTROL, 0);
val &= HDA_AMP_VOLMASK;
uctl->value.integer.value[0] = val;
@@ -1787,7 +1787,6 @@
ALC882_FIXUP_NO_PRIMARY_HP,
ALC887_FIXUP_ASUS_BASS,
ALC887_FIXUP_BASS_CHMAP,
- ALC882_FIXUP_DISABLE_AAMIX,
};
static void alc889_fixup_coef(struct hda_codec *codec,
@@ -1949,8 +1948,6 @@
static void alc_fixup_bass_chmap(struct hda_codec *codec,
const struct hda_fixup *fix, int action);
-static void alc_fixup_disable_aamix(struct hda_codec *codec,
- const struct hda_fixup *fix, int action);
static const struct hda_fixup alc882_fixups[] = {
[ALC882_FIXUP_ABIT_AW9D_MAX] = {
@@ -2188,10 +2185,6 @@
.type = HDA_FIXUP_FUNC,
.v.func = alc_fixup_bass_chmap,
},
- [ALC882_FIXUP_DISABLE_AAMIX] = {
- .type = HDA_FIXUP_FUNC,
- .v.func = alc_fixup_disable_aamix,
- },
};
static const struct snd_pci_quirk alc882_fixup_tbl[] = {
@@ -2230,6 +2223,7 @@
SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
SND_PCI_QUIRK(0x104d, 0x905a, "Sony Vaio Z", ALC882_FIXUP_NO_PRIMARY_HP),
SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
+ SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
/* All Apple entries are in codec SSIDs */
SND_PCI_QUIRK(0x106b, 0x00a0, "MacBookPro 3,1", ALC889_FIXUP_MBP_VREF),
@@ -2259,7 +2253,6 @@
SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
- SND_PCI_QUIRK(0x1458, 0xa182, "Gigabyte Z170X-UD3", ALC882_FIXUP_DISABLE_AAMIX),
SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
@@ -3808,6 +3801,10 @@
static void alc_headset_mode_default(struct hda_codec *codec)
{
+ static struct coef_fw coef0225[] = {
+ UPDATE_COEF(0x45, 0x3f<<10, 0x34<<10),
+ {}
+ };
static struct coef_fw coef0255[] = {
WRITE_COEF(0x45, 0xc089),
WRITE_COEF(0x45, 0xc489),
@@ -3849,6 +3846,9 @@
};
switch (codec->core.vendor_id) {
+ case 0x10ec0225:
+ alc_process_coef_fw(codec, coef0225);
+ break;
case 0x10ec0255:
case 0x10ec0256:
alc_process_coef_fw(codec, coef0255);
@@ -4756,6 +4756,9 @@
ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE,
ALC293_FIXUP_LENOVO_SPK_NOISE,
ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY,
+ ALC255_FIXUP_DELL_SPK_NOISE,
+ ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
+ ALC280_FIXUP_HP_HEADSET_MIC,
};
static const struct hda_fixup alc269_fixups[] = {
@@ -5375,6 +5378,29 @@
.type = HDA_FIXUP_FUNC,
.v.func = alc233_fixup_lenovo_line2_mic_hotkey,
},
+ [ALC255_FIXUP_DELL_SPK_NOISE] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_disable_aamix,
+ .chained = true,
+ .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
+ },
+ [ALC225_FIXUP_DELL1_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ /* Disable pass-through path for FRONT 14h */
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x36 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x57d7 },
+ {}
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE
+ },
+ [ALC280_FIXUP_HP_HEADSET_MIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_disable_aamix,
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MIC,
+ },
};
static const struct snd_pci_quirk alc269_fixup_tbl[] = {
@@ -5417,6 +5443,7 @@
SND_PCI_QUIRK(0x1028, 0x06df, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
SND_PCI_QUIRK(0x1028, 0x06e0, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
@@ -5477,6 +5504,7 @@
SND_PCI_QUIRK(0x103c, 0x2335, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
@@ -5645,10 +5673,10 @@
{0x21, 0x03211020}
static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
- SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+ SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
ALC225_STANDARD_PINS,
{0x14, 0x901701a0}),
- SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+ SND_HDA_PIN_QUIRK(0x10ec0225, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
ALC225_STANDARD_PINS,
{0x14, 0x901701b0}),
SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL2_MIC_NO_PRESENCE,
diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
index 2c7c5eb..37b70f8 100644
--- a/sound/pci/hda/patch_sigmatel.c
+++ b/sound/pci/hda/patch_sigmatel.c
@@ -493,9 +493,9 @@
if (!spec->num_pwrs)
return;
- if (jack && jack->tbl->nid) {
- stac_toggle_power_map(codec, jack->tbl->nid,
- snd_hda_jack_detect(codec, jack->tbl->nid),
+ if (jack && jack->nid) {
+ stac_toggle_power_map(codec, jack->nid,
+ snd_hda_jack_detect(codec, jack->nid),
true);
return;
}
diff --git a/sound/soc/amd/acp-pcm-dma.c b/sound/soc/amd/acp-pcm-dma.c
index 3191e0a..d1fb035 100644
--- a/sound/soc/amd/acp-pcm-dma.c
+++ b/sound/soc/amd/acp-pcm-dma.c
@@ -635,6 +635,7 @@
SNDRV_PCM_HW_PARAM_PERIODS);
if (ret < 0) {
dev_err(prtd->platform->dev, "set integer constraint failed\n");
+ kfree(adata);
return ret;
}
diff --git a/sound/soc/codecs/arizona.c b/sound/soc/codecs/arizona.c
index 33143fe..9178531 100644
--- a/sound/soc/codecs/arizona.c
+++ b/sound/soc/codecs/arizona.c
@@ -1929,6 +1929,25 @@
{ 1000000, 13500000, 0, 1 },
};
+static const unsigned int pseudo_fref_max[ARIZONA_FLL_MAX_FRATIO] = {
+ 13500000,
+ 6144000,
+ 6144000,
+ 3072000,
+ 3072000,
+ 2822400,
+ 2822400,
+ 1536000,
+ 1536000,
+ 1536000,
+ 1536000,
+ 1536000,
+ 1536000,
+ 1536000,
+ 1536000,
+ 768000,
+};
+
static struct {
unsigned int min;
unsigned int max;
@@ -2042,16 +2061,32 @@
/* Adjust FRATIO/refdiv to avoid integer mode if possible */
refdiv = cfg->refdiv;
+ arizona_fll_dbg(fll, "pseudo: initial ratio=%u fref=%u refdiv=%u\n",
+ init_ratio, Fref, refdiv);
+
while (div <= ARIZONA_FLL_MAX_REFDIV) {
for (ratio = init_ratio; ratio <= ARIZONA_FLL_MAX_FRATIO;
ratio++) {
if ((ARIZONA_FLL_VCO_CORNER / 2) /
- (fll->vco_mult * ratio) < Fref)
+ (fll->vco_mult * ratio) < Fref) {
+ arizona_fll_dbg(fll, "pseudo: hit VCO corner\n");
break;
+ }
+
+ if (Fref > pseudo_fref_max[ratio - 1]) {
+ arizona_fll_dbg(fll,
+ "pseudo: exceeded max fref(%u) for ratio=%u\n",
+ pseudo_fref_max[ratio - 1],
+ ratio);
+ break;
+ }
if (target % (ratio * Fref)) {
cfg->refdiv = refdiv;
cfg->fratio = ratio - 1;
+ arizona_fll_dbg(fll,
+ "pseudo: found fref=%u refdiv=%d(%d) ratio=%d\n",
+ Fref, refdiv, div, ratio);
return ratio;
}
}
@@ -2060,6 +2095,9 @@
if (target % (ratio * Fref)) {
cfg->refdiv = refdiv;
cfg->fratio = ratio - 1;
+ arizona_fll_dbg(fll,
+ "pseudo: found fref=%u refdiv=%d(%d) ratio=%d\n",
+ Fref, refdiv, div, ratio);
return ratio;
}
}
@@ -2068,6 +2106,9 @@
Fref /= 2;
refdiv++;
init_ratio = arizona_find_fratio(Fref, NULL);
+ arizona_fll_dbg(fll,
+ "pseudo: change fref=%u refdiv=%d(%d) ratio=%u\n",
+ Fref, refdiv, div, init_ratio);
}
arizona_fll_warn(fll, "Falling back to integer mode operation\n");
diff --git a/sound/soc/codecs/rt286.c b/sound/soc/codecs/rt286.c
index bc08f0c..1bd3164 100644
--- a/sound/soc/codecs/rt286.c
+++ b/sound/soc/codecs/rt286.c
@@ -266,6 +266,8 @@
} else {
*mic = false;
regmap_write(rt286->regmap, RT286_SET_MIC1, 0x20);
+ regmap_update_bits(rt286->regmap,
+ RT286_CBJ_CTRL1, 0x0400, 0x0000);
}
} else {
regmap_read(rt286->regmap, RT286_GET_HP_SENSE, &buf);
@@ -470,24 +472,6 @@
return 0;
}
-static int rt286_vref_event(struct snd_soc_dapm_widget *w,
- struct snd_kcontrol *kcontrol, int event)
-{
- struct snd_soc_codec *codec = snd_soc_dapm_to_codec(w->dapm);
-
- switch (event) {
- case SND_SOC_DAPM_PRE_PMU:
- snd_soc_update_bits(codec,
- RT286_CBJ_CTRL1, 0x0400, 0x0000);
- mdelay(50);
- break;
- default:
- return 0;
- }
-
- return 0;
-}
-
static int rt286_ldo2_event(struct snd_soc_dapm_widget *w,
struct snd_kcontrol *kcontrol, int event)
{
@@ -536,7 +520,7 @@
SND_SOC_DAPM_SUPPLY_S("HV", 1, RT286_POWER_CTRL1,
12, 1, NULL, 0),
SND_SOC_DAPM_SUPPLY("VREF", RT286_POWER_CTRL1,
- 0, 1, rt286_vref_event, SND_SOC_DAPM_PRE_PMU),
+ 0, 1, NULL, 0),
SND_SOC_DAPM_SUPPLY_S("LDO1", 1, RT286_POWER_CTRL2,
2, 0, NULL, 0),
SND_SOC_DAPM_SUPPLY_S("LDO2", 2, RT286_POWER_CTRL1,
@@ -911,8 +895,6 @@
case SND_SOC_BIAS_ON:
mdelay(10);
snd_soc_update_bits(codec,
- RT286_CBJ_CTRL1, 0x0400, 0x0400);
- snd_soc_update_bits(codec,
RT286_DC_GAIN, 0x200, 0x0);
break;
@@ -920,8 +902,6 @@
case SND_SOC_BIAS_STANDBY:
snd_soc_write(codec,
RT286_SET_AUDIO_POWER, AC_PWRST_D3);
- snd_soc_update_bits(codec,
- RT286_CBJ_CTRL1, 0x0400, 0x0000);
break;
default:
diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
index c61d38b..93e8c90 100644
--- a/sound/soc/codecs/rt5645.c
+++ b/sound/soc/codecs/rt5645.c
@@ -776,7 +776,7 @@
/* IN1/IN2 Control */
SOC_SINGLE_TLV("IN1 Boost", RT5645_IN1_CTRL1,
- RT5645_BST_SFT1, 8, 0, bst_tlv),
+ RT5645_BST_SFT1, 12, 0, bst_tlv),
SOC_SINGLE_TLV("IN2 Boost", RT5645_IN2_CTRL,
RT5645_BST_SFT2, 8, 0, bst_tlv),
diff --git a/sound/soc/codecs/rt5659.c b/sound/soc/codecs/rt5659.c
index 820d8fa..fb8ea05 100644
--- a/sound/soc/codecs/rt5659.c
+++ b/sound/soc/codecs/rt5659.c
@@ -3985,7 +3985,6 @@
if (rt5659 == NULL)
return -ENOMEM;
- rt5659->i2c = i2c;
i2c_set_clientdata(i2c, rt5659);
if (pdata)
@@ -4157,24 +4156,17 @@
INIT_DELAYED_WORK(&rt5659->jack_detect_work, rt5659_jack_detect_work);
- if (rt5659->i2c->irq) {
- ret = request_threaded_irq(rt5659->i2c->irq, NULL, rt5659_irq,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING
+ if (i2c->irq) {
+ ret = devm_request_threaded_irq(&i2c->dev, i2c->irq, NULL,
+ rt5659_irq, IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING
| IRQF_ONESHOT, "rt5659", rt5659);
if (ret)
			dev_err(&i2c->dev, "Failed to request IRQ: %d\n", ret);
}
- ret = snd_soc_register_codec(&i2c->dev, &soc_codec_dev_rt5659,
+ return snd_soc_register_codec(&i2c->dev, &soc_codec_dev_rt5659,
rt5659_dai, ARRAY_SIZE(rt5659_dai));
-
- if (ret) {
- if (rt5659->i2c->irq)
- free_irq(rt5659->i2c->irq, rt5659);
- }
-
- return 0;
}
static int rt5659_i2c_remove(struct i2c_client *i2c)
@@ -4191,24 +4183,29 @@
regmap_write(rt5659->regmap, RT5659_RESET, 0);
}
+#ifdef CONFIG_OF
static const struct of_device_id rt5659_of_match[] = {
{ .compatible = "realtek,rt5658", },
{ .compatible = "realtek,rt5659", },
- {},
+ { },
};
+MODULE_DEVICE_TABLE(of, rt5659_of_match);
+#endif
+#ifdef CONFIG_ACPI
static struct acpi_device_id rt5659_acpi_match[] = {
- { "10EC5658", 0},
- { "10EC5659", 0},
- { },
+ { "10EC5658", 0, },
+ { "10EC5659", 0, },
+ { },
};
MODULE_DEVICE_TABLE(acpi, rt5659_acpi_match);
+#endif
struct i2c_driver rt5659_i2c_driver = {
.driver = {
.name = "rt5659",
.owner = THIS_MODULE,
- .of_match_table = rt5659_of_match,
+ .of_match_table = of_match_ptr(rt5659_of_match),
.acpi_match_table = ACPI_PTR(rt5659_acpi_match),
},
.probe = rt5659_i2c_probe,
diff --git a/sound/soc/codecs/rt5659.h b/sound/soc/codecs/rt5659.h
index 8f07ee9..d31c9e5 100644
--- a/sound/soc/codecs/rt5659.h
+++ b/sound/soc/codecs/rt5659.h
@@ -1792,7 +1792,6 @@
struct snd_soc_codec *codec;
struct rt5659_platform_data pdata;
struct regmap *regmap;
- struct i2c_client *i2c;
struct gpio_desc *gpiod_ldo1_en;
struct gpio_desc *gpiod_reset;
struct snd_soc_jack *hs_jack;
diff --git a/sound/soc/codecs/sigmadsp-i2c.c b/sound/soc/codecs/sigmadsp-i2c.c
index 21ca3a5..d374c18 100644
--- a/sound/soc/codecs/sigmadsp-i2c.c
+++ b/sound/soc/codecs/sigmadsp-i2c.c
@@ -31,7 +31,10 @@
kfree(buf);
- return ret;
+ if (ret < 0)
+ return ret;
+
+ return 0;
}
static int sigmadsp_read_i2c(void *control_data,
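The sigmadsp change above exists because of an easy-to-forget contract: i2c_master_send() returns the number of bytes transmitted on success (a positive value), not 0, while the callers here treat any nonzero return as an error. The fix folds the positive byte count into 0:

	ret = i2c_master_send(client, buf, len); /* >0 on success, <0 on error */
	if (ret < 0)
		return ret;	/* propagate the errno */

	return 0;		/* success: hide the byte count */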
diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
index 6088d30..97c0f1e 100644
--- a/sound/soc/codecs/wm5110.c
+++ b/sound/soc/codecs/wm5110.c
@@ -2382,6 +2382,7 @@
static int wm5110_remove(struct platform_device *pdev)
{
+ snd_soc_unregister_platform(&pdev->dev);
snd_soc_unregister_codec(&pdev->dev);
pm_runtime_disable(&pdev->dev);
diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
index ff23772..d7f444f 100644
--- a/sound/soc/codecs/wm8960.c
+++ b/sound/soc/codecs/wm8960.c
@@ -240,13 +240,13 @@
SOC_DOUBLE_R("Capture Switch", WM8960_LINVOL, WM8960_RINVOL,
7, 1, 1),
-SOC_SINGLE_TLV("Right Input Boost Mixer RINPUT3 Volume",
- WM8960_INBMIX1, 4, 7, 0, lineinboost_tlv),
-SOC_SINGLE_TLV("Right Input Boost Mixer RINPUT2 Volume",
- WM8960_INBMIX1, 1, 7, 0, lineinboost_tlv),
SOC_SINGLE_TLV("Left Input Boost Mixer LINPUT3 Volume",
- WM8960_INBMIX2, 4, 7, 0, lineinboost_tlv),
+ WM8960_INBMIX1, 4, 7, 0, lineinboost_tlv),
SOC_SINGLE_TLV("Left Input Boost Mixer LINPUT2 Volume",
+ WM8960_INBMIX1, 1, 7, 0, lineinboost_tlv),
+SOC_SINGLE_TLV("Right Input Boost Mixer RINPUT3 Volume",
+ WM8960_INBMIX2, 4, 7, 0, lineinboost_tlv),
+SOC_SINGLE_TLV("Right Input Boost Mixer RINPUT2 Volume",
WM8960_INBMIX2, 1, 7, 0, lineinboost_tlv),
SOC_SINGLE_TLV("Right Input Boost Mixer RINPUT1 Volume",
WM8960_RINPATH, 4, 3, 0, micboost_tlv),
@@ -643,29 +643,31 @@
return -EINVAL;
}
- /* check if the sysclk frequency is available. */
- for (i = 0; i < ARRAY_SIZE(sysclk_divs); ++i) {
- if (sysclk_divs[i] == -1)
- continue;
- sysclk = freq_out / sysclk_divs[i];
- for (j = 0; j < ARRAY_SIZE(dac_divs); ++j) {
- if (sysclk == dac_divs[j] * lrclk) {
+ if (wm8960->clk_id != WM8960_SYSCLK_PLL) {
+ /* check if the sysclk frequency is available. */
+ for (i = 0; i < ARRAY_SIZE(sysclk_divs); ++i) {
+ if (sysclk_divs[i] == -1)
+ continue;
+ sysclk = freq_out / sysclk_divs[i];
+ for (j = 0; j < ARRAY_SIZE(dac_divs); ++j) {
+ if (sysclk != dac_divs[j] * lrclk)
+ continue;
for (k = 0; k < ARRAY_SIZE(bclk_divs); ++k)
if (sysclk == bclk * bclk_divs[k] / 10)
break;
if (k != ARRAY_SIZE(bclk_divs))
break;
}
+ if (j != ARRAY_SIZE(dac_divs))
+ break;
}
- if (j != ARRAY_SIZE(dac_divs))
- break;
- }
- if (i != ARRAY_SIZE(sysclk_divs)) {
- goto configure_clock;
- } else if (wm8960->clk_id != WM8960_SYSCLK_AUTO) {
- dev_err(codec->dev, "failed to configure clock\n");
- return -EINVAL;
+ if (i != ARRAY_SIZE(sysclk_divs)) {
+ goto configure_clock;
+ } else if (wm8960->clk_id != WM8960_SYSCLK_AUTO) {
+ dev_err(codec->dev, "failed to configure clock\n");
+ return -EINVAL;
+ }
}
	/* get an available pll out frequency and set pll */
for (i = 0; i < ARRAY_SIZE(sysclk_divs); ++i) {
diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c
index ce664c2..bff258d 100644
--- a/sound/soc/dwc/designware_i2s.c
+++ b/sound/soc/dwc/designware_i2s.c
@@ -645,6 +645,8 @@
dev->dev = &pdev->dev;
+ dev->i2s_reg_comp1 = I2S_COMP_PARAM_1;
+ dev->i2s_reg_comp2 = I2S_COMP_PARAM_2;
if (pdata) {
dev->capability = pdata->cap;
clk_id = NULL;
@@ -652,9 +654,6 @@
if (dev->quirks & DW_I2S_QUIRK_COMP_REG_OFFSET) {
dev->i2s_reg_comp1 = pdata->i2s_reg_comp1;
dev->i2s_reg_comp2 = pdata->i2s_reg_comp2;
- } else {
- dev->i2s_reg_comp1 = I2S_COMP_PARAM_1;
- dev->i2s_reg_comp2 = I2S_COMP_PARAM_2;
}
ret = dw_configure_dai_by_pd(dev, dw_i2s_dai, res, pdata);
} else {
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index 40dfd8a..ed8de10 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -112,20 +112,6 @@
struct fsl_ssi_reg_val tx;
};
-static const struct reg_default fsl_ssi_reg_defaults[] = {
- {CCSR_SSI_SCR, 0x00000000},
- {CCSR_SSI_SIER, 0x00003003},
- {CCSR_SSI_STCR, 0x00000200},
- {CCSR_SSI_SRCR, 0x00000200},
- {CCSR_SSI_STCCR, 0x00040000},
- {CCSR_SSI_SRCCR, 0x00040000},
- {CCSR_SSI_SACNT, 0x00000000},
- {CCSR_SSI_STMSK, 0x00000000},
- {CCSR_SSI_SRMSK, 0x00000000},
- {CCSR_SSI_SACCEN, 0x00000000},
- {CCSR_SSI_SACCDIS, 0x00000000},
-};
-
static bool fsl_ssi_readable_reg(struct device *dev, unsigned int reg)
{
switch (reg) {
@@ -190,8 +176,7 @@
.val_bits = 32,
.reg_stride = 4,
.val_format_endian = REGMAP_ENDIAN_NATIVE,
- .reg_defaults = fsl_ssi_reg_defaults,
- .num_reg_defaults = ARRAY_SIZE(fsl_ssi_reg_defaults),
+ .num_reg_defaults_raw = CCSR_SSI_SACCDIS / sizeof(uint32_t) + 1,
.readable_reg = fsl_ssi_readable_reg,
.volatile_reg = fsl_ssi_volatile_reg,
.precious_reg = fsl_ssi_precious_reg,
@@ -201,6 +186,7 @@
struct fsl_ssi_soc_data {
bool imx;
+ bool imx21regs; /* imx21-class SSI - no SACC{ST,EN,DIS} regs */
bool offline_config;
u32 sisr_write_mask;
};
@@ -303,6 +289,7 @@
static struct fsl_ssi_soc_data fsl_ssi_imx21 = {
.imx = true,
+ .imx21regs = true,
.offline_config = true,
.sisr_write_mask = 0,
};
@@ -586,8 +573,12 @@
*/
regmap_write(regs, CCSR_SSI_SACNT,
CCSR_SSI_SACNT_AC97EN | CCSR_SSI_SACNT_FV);
- regmap_write(regs, CCSR_SSI_SACCDIS, 0xff);
- regmap_write(regs, CCSR_SSI_SACCEN, 0x300);
+
+ /* no SACC{ST,EN,DIS} regs on imx21-class SSI */
+ if (!ssi_private->soc->imx21regs) {
+ regmap_write(regs, CCSR_SSI_SACCDIS, 0xff);
+ regmap_write(regs, CCSR_SSI_SACCEN, 0x300);
+ }
/*
* Enable SSI, Transmit and Receive. AC97 has to communicate with the
@@ -1397,6 +1388,7 @@
struct resource *res;
void __iomem *iomem;
char name[64];
+ struct regmap_config regconfig = fsl_ssi_regconfig;
of_id = of_match_device(fsl_ssi_ids, &pdev->dev);
if (!of_id || !of_id->data)
@@ -1444,15 +1436,25 @@
return PTR_ERR(iomem);
ssi_private->ssi_phys = res->start;
+ if (ssi_private->soc->imx21regs) {
+ /*
+		 * According to the datasheet, imx21-class SSIs
+		 * don't have the SACC{ST,EN,DIS} regs.
+ */
+ regconfig.max_register = CCSR_SSI_SRMSK;
+ regconfig.num_reg_defaults_raw =
+ CCSR_SSI_SRMSK / sizeof(uint32_t) + 1;
+ }
+
ret = of_property_match_string(np, "clock-names", "ipg");
if (ret < 0) {
ssi_private->has_ipg_clk_name = false;
ssi_private->regs = devm_regmap_init_mmio(&pdev->dev, iomem,
- &fsl_ssi_regconfig);
+						  &regconfig);
} else {
ssi_private->has_ipg_clk_name = true;
ssi_private->regs = devm_regmap_init_mmio_clk(&pdev->dev,
- "ipg", iomem, &fsl_ssi_regconfig);
+			"ipg", iomem, &regconfig);
}
if (IS_ERR(ssi_private->regs)) {
dev_err(&pdev->dev, "Failed to init register map\n");
diff --git a/sound/soc/fsl/imx-spdif.c b/sound/soc/fsl/imx-spdif.c
index a407e83..fb896b2 100644
--- a/sound/soc/fsl/imx-spdif.c
+++ b/sound/soc/fsl/imx-spdif.c
@@ -72,8 +72,6 @@
goto end;
}
- platform_set_drvdata(pdev, data);
-
end:
of_node_put(spdif_np);
diff --git a/sound/soc/generic/simple-card.c b/sound/soc/generic/simple-card.c
index 1ded881..2389ab4 100644
--- a/sound/soc/generic/simple-card.c
+++ b/sound/soc/generic/simple-card.c
@@ -99,7 +99,7 @@
if (ret && ret != -ENOTSUPP)
goto err;
}
-
+ return 0;
err:
return ret;
}
diff --git a/sound/soc/intel/Kconfig b/sound/soc/intel/Kconfig
index 803f95e..7d7c872 100644
--- a/sound/soc/intel/Kconfig
+++ b/sound/soc/intel/Kconfig
@@ -30,11 +30,15 @@
config SND_SOC_INTEL_SST
tristate
select SND_SOC_INTEL_SST_ACPI if ACPI
+ select SND_SOC_INTEL_SST_MATCH if ACPI
depends on (X86 || COMPILE_TEST)
config SND_SOC_INTEL_SST_ACPI
tristate
+config SND_SOC_INTEL_SST_MATCH
+ tristate
+
config SND_SOC_INTEL_HASWELL
tristate
@@ -57,7 +61,7 @@
config SND_SOC_INTEL_BYT_RT5640_MACH
tristate "ASoC Audio driver for Intel Baytrail with RT5640 codec"
depends on X86_INTEL_LPSS && I2C
- depends on DW_DMAC_CORE=y && (SND_SOC_INTEL_BYTCR_RT5640_MACH = n)
+ depends on DW_DMAC_CORE=y && (SND_SST_IPC_ACPI = n)
select SND_SOC_INTEL_SST
select SND_SOC_INTEL_BAYTRAIL
select SND_SOC_RT5640
@@ -69,7 +73,7 @@
config SND_SOC_INTEL_BYT_MAX98090_MACH
tristate "ASoC Audio driver for Intel Baytrail with MAX98090 codec"
depends on X86_INTEL_LPSS && I2C
- depends on DW_DMAC_CORE=y
+ depends on DW_DMAC_CORE=y && (SND_SST_IPC_ACPI = n)
select SND_SOC_INTEL_SST
select SND_SOC_INTEL_BAYTRAIL
select SND_SOC_MAX98090
@@ -97,6 +101,7 @@
select SND_SOC_RT5640
select SND_SST_MFLD_PLATFORM
select SND_SST_IPC_ACPI
+ select SND_SOC_INTEL_SST_MATCH if ACPI
help
This adds support for ASoC machine driver for Intel(R) Baytrail and Baytrail-CR
platforms with RT5640 audio codec.
@@ -109,6 +114,7 @@
select SND_SOC_RT5651
select SND_SST_MFLD_PLATFORM
select SND_SST_IPC_ACPI
+ select SND_SOC_INTEL_SST_MATCH if ACPI
help
This adds support for ASoC machine driver for Intel(R) Baytrail and Baytrail-CR
platforms with RT5651 audio codec.
@@ -121,6 +127,7 @@
select SND_SOC_RT5670
select SND_SST_MFLD_PLATFORM
select SND_SST_IPC_ACPI
+ select SND_SOC_INTEL_SST_MATCH if ACPI
help
This adds support for ASoC machine driver for Intel(R) Cherrytrail & Braswell
platforms with RT5672 audio codec.
@@ -133,6 +140,7 @@
select SND_SOC_RT5645
select SND_SST_MFLD_PLATFORM
select SND_SST_IPC_ACPI
+ select SND_SOC_INTEL_SST_MATCH if ACPI
help
This adds support for ASoC machine driver for Intel(R) Cherrytrail & Braswell
platforms with RT5645/5650 audio codec.
@@ -145,6 +153,7 @@
select SND_SOC_TS3A227E
select SND_SST_MFLD_PLATFORM
select SND_SST_IPC_ACPI
+ select SND_SOC_INTEL_SST_MATCH if ACPI
help
This adds support for ASoC machine driver for Intel(R) Cherrytrail & Braswell
platforms with MAX98090 audio codec it also can support TI jack chip as aux device.
diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
index 55c33dc7..52ed434 100644
--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
@@ -528,6 +528,7 @@
.ops = &sst_compr_dai_ops,
.playback = {
.stream_name = "Compress Playback",
+ .channels_min = 1,
},
},
/* BE CPU Dais */
diff --git a/sound/soc/intel/boards/skl_rt286.c b/sound/soc/intel/boards/skl_rt286.c
index 7396ddb..2cbcbe4 100644
--- a/sound/soc/intel/boards/skl_rt286.c
+++ b/sound/soc/intel/boards/skl_rt286.c
@@ -212,7 +212,10 @@
{
struct snd_interval *channels = hw_param_interval(params,
SNDRV_PCM_HW_PARAM_CHANNELS);
- channels->min = channels->max = 4;
+ if (params_channels(params) == 2)
+ channels->min = channels->max = 2;
+ else
+ channels->min = channels->max = 4;
return 0;
}
diff --git a/sound/soc/intel/common/Makefile b/sound/soc/intel/common/Makefile
index 668fdee..fbbb25c 100644
--- a/sound/soc/intel/common/Makefile
+++ b/sound/soc/intel/common/Makefile
@@ -1,13 +1,10 @@
snd-soc-sst-dsp-objs := sst-dsp.o
-ifneq ($(CONFIG_SND_SST_IPC_ACPI),)
-snd-soc-sst-acpi-objs := sst-match-acpi.o
-else
-snd-soc-sst-acpi-objs := sst-acpi.o sst-match-acpi.o
-endif
-
+snd-soc-sst-acpi-objs := sst-acpi.o
+snd-soc-sst-match-objs := sst-match-acpi.o
snd-soc-sst-ipc-objs := sst-ipc.o
snd-soc-sst-dsp-$(CONFIG_DW_DMAC_CORE) += sst-firmware.o
obj-$(CONFIG_SND_SOC_INTEL_SST) += snd-soc-sst-dsp.o snd-soc-sst-ipc.o
obj-$(CONFIG_SND_SOC_INTEL_SST_ACPI) += snd-soc-sst-acpi.o
+obj-$(CONFIG_SND_SOC_INTEL_SST_MATCH) += snd-soc-sst-match.o
diff --git a/sound/soc/intel/common/sst-acpi.c b/sound/soc/intel/common/sst-acpi.c
index 7a85c57..2c5eda1 100644
--- a/sound/soc/intel/common/sst-acpi.c
+++ b/sound/soc/intel/common/sst-acpi.c
@@ -215,6 +215,7 @@
.dma_size = SST_LPT_DSP_DMA_SIZE,
};
+#if !IS_ENABLED(CONFIG_SND_SST_IPC_ACPI)
static struct sst_acpi_mach baytrail_machines[] = {
{ "10EC5640", "byt-rt5640", "intel/fw_sst_0f28.bin-48kHz_i2s_master", NULL, NULL, NULL },
{ "193C9890", "byt-max98090", "intel/fw_sst_0f28.bin-48kHz_i2s_master", NULL, NULL, NULL },
@@ -231,11 +232,14 @@
.sst_id = SST_DEV_ID_BYT,
.resindex_dma_base = -1,
};
+#endif
static const struct acpi_device_id sst_acpi_match[] = {
{ "INT33C8", (unsigned long)&sst_acpi_haswell_desc },
{ "INT3438", (unsigned long)&sst_acpi_broadwell_desc },
+#if !IS_ENABLED(CONFIG_SND_SST_IPC_ACPI)
{ "80860F28", (unsigned long)&sst_acpi_baytrail_desc },
+#endif
{ }
};
MODULE_DEVICE_TABLE(acpi, sst_acpi_match);
diff --git a/sound/soc/intel/common/sst-match-acpi.c b/sound/soc/intel/common/sst-match-acpi.c
index dd077e1..3b4539d 100644
--- a/sound/soc/intel/common/sst-match-acpi.c
+++ b/sound/soc/intel/common/sst-match-acpi.c
@@ -41,3 +41,6 @@
return NULL;
}
EXPORT_SYMBOL_GPL(sst_acpi_find_machine);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Intel Common ACPI Match module");
diff --git a/sound/soc/intel/skylake/skl-messages.c b/sound/soc/intel/skylake/skl-messages.c
index de6dac4..4629372 100644
--- a/sound/soc/intel/skylake/skl-messages.c
+++ b/sound/soc/intel/skylake/skl-messages.c
@@ -688,14 +688,14 @@
/* get src queue index */
src_index = skl_get_queue_index(src_mcfg->m_out_pin, dst_id, out_max);
if (src_index < 0)
- return -EINVAL;
+ return 0;
msg.src_queue = src_index;
/* get dst queue index */
dst_index = skl_get_queue_index(dst_mcfg->m_in_pin, src_id, in_max);
if (dst_index < 0)
- return -EINVAL;
+ return 0;
msg.dst_queue = dst_index;
@@ -747,7 +747,7 @@
skl_dump_bind_info(ctx, src_mcfg, dst_mcfg);
- if (src_mcfg->m_state < SKL_MODULE_INIT_DONE &&
+ if (src_mcfg->m_state < SKL_MODULE_INIT_DONE ||
dst_mcfg->m_state < SKL_MODULE_INIT_DONE)
return 0;
diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
index f355325..b6e6b61 100644
--- a/sound/soc/intel/skylake/skl-pcm.c
+++ b/sound/soc/intel/skylake/skl-pcm.c
@@ -863,6 +863,7 @@
else
delay += hstream->bufsize;
}
+ delay = (hstream->bufsize == delay) ? 0 : delay;
if (delay >= hstream->period_bytes) {
dev_info(bus->dev,
diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
index 4624556..a294fee 100644
--- a/sound/soc/intel/skylake/skl-topology.c
+++ b/sound/soc/intel/skylake/skl-topology.c
@@ -54,12 +54,9 @@
/*
 * Each pipeline needs memory to be allocated. Check if we have free memory
- * from available pool. Then only add this to pool
- * This is freed when pipe is deleted
- * Note: DSP does actual memory management we only keep track for complete
- * pool
+ * from available pool.
*/
-static bool skl_tplg_alloc_pipe_mem(struct skl *skl,
+static bool skl_is_pipe_mem_avail(struct skl *skl,
struct skl_module_cfg *mconfig)
{
struct skl_sst *ctx = skl->skl_sst;
@@ -74,10 +71,20 @@
"exceeds ppl memory available %d mem %d\n",
skl->resource.max_mem, skl->resource.mem);
return false;
+ } else {
+ return true;
}
+}
+/*
+ * Add the mem to the mem pool. This is freed when the pipe is deleted.
+ * Note: the DSP does the actual memory management; we only keep track of
+ * the complete pool.
+ */
+static void skl_tplg_alloc_pipe_mem(struct skl *skl,
+ struct skl_module_cfg *mconfig)
+{
skl->resource.mem += mconfig->pipe->memory_pages;
- return true;
}
/*
@@ -85,10 +92,10 @@
* quantified in MCPS (Million Clocks Per Second) required for module/pipe
*
 * Each pipeline needs mcps to be allocated. Check if we have mcps for this
- * pipe. This adds the mcps to driver counter
- * This is removed on pipeline delete
+ * pipe.
*/
-static bool skl_tplg_alloc_pipe_mcps(struct skl *skl,
+
+static bool skl_is_pipe_mcps_avail(struct skl *skl,
struct skl_module_cfg *mconfig)
{
struct skl_sst *ctx = skl->skl_sst;
@@ -98,13 +105,18 @@
"%s: module_id %d instance %d\n", __func__,
mconfig->id.module_id, mconfig->id.instance_id);
dev_err(ctx->dev,
- "exceeds ppl memory available %d > mem %d\n",
+ "exceeds ppl mcps available %d > mem %d\n",
skl->resource.max_mcps, skl->resource.mcps);
return false;
+ } else {
+ return true;
}
+}
+static void skl_tplg_alloc_pipe_mcps(struct skl *skl,
+ struct skl_module_cfg *mconfig)
+{
skl->resource.mcps += mconfig->mcps;
- return true;
}
/*
@@ -411,7 +423,7 @@
mconfig = w->priv;
/* check resource available */
- if (!skl_tplg_alloc_pipe_mcps(skl, mconfig))
+ if (!skl_is_pipe_mcps_avail(skl, mconfig))
return -ENOMEM;
if (mconfig->is_loadable && ctx->dsp->fw_ops.load_mod) {
@@ -435,6 +447,7 @@
ret = skl_tplg_set_module_params(w, ctx);
if (ret < 0)
return ret;
+ skl_tplg_alloc_pipe_mcps(skl, mconfig);
}
return 0;
@@ -477,10 +490,10 @@
struct skl_sst *ctx = skl->skl_sst;
/* check resource available */
- if (!skl_tplg_alloc_pipe_mcps(skl, mconfig))
+ if (!skl_is_pipe_mcps_avail(skl, mconfig))
return -EBUSY;
- if (!skl_tplg_alloc_pipe_mem(skl, mconfig))
+ if (!skl_is_pipe_mem_avail(skl, mconfig))
return -ENOMEM;
/*
@@ -526,11 +539,15 @@
src_module = dst_module;
}
+ skl_tplg_alloc_pipe_mem(skl, mconfig);
+ skl_tplg_alloc_pipe_mcps(skl, mconfig);
+
return 0;
}
static int skl_tplg_bind_sinks(struct snd_soc_dapm_widget *w,
struct skl *skl,
+ struct snd_soc_dapm_widget *src_w,
struct skl_module_cfg *src_mconfig)
{
struct snd_soc_dapm_path *p;
@@ -547,6 +564,10 @@
dev_dbg(ctx->dev, "%s: sink widget=%s\n", __func__, p->sink->name);
next_sink = p->sink;
+
+ if (!is_skl_dsp_widget_type(p->sink))
+ return skl_tplg_bind_sinks(p->sink, skl, src_w, src_mconfig);
+
/*
* here we will check widgets in sink pipelines, so that
* can be any widgets type and we are only interested if
@@ -576,7 +597,7 @@
}
if (!sink)
- return skl_tplg_bind_sinks(next_sink, skl, src_mconfig);
+ return skl_tplg_bind_sinks(next_sink, skl, src_w, src_mconfig);
return 0;
}
@@ -605,7 +626,7 @@
* if sink is not started, start sink pipe first, then start
* this pipe
*/
- ret = skl_tplg_bind_sinks(w, skl, src_mconfig);
+ ret = skl_tplg_bind_sinks(w, skl, w, src_mconfig);
if (ret)
return ret;
@@ -773,10 +794,7 @@
continue;
}
- ret = skl_unbind_modules(ctx, src_module, dst_module);
- if (ret < 0)
- return ret;
-
+ skl_unbind_modules(ctx, src_module, dst_module);
src_module = dst_module;
}
@@ -814,9 +832,6 @@
* This is a connecter and if path is found that means
* unbind between source and sink has not happened yet
*/
- ret = skl_stop_pipe(ctx, sink_mconfig->pipe);
- if (ret < 0)
- return ret;
ret = skl_unbind_modules(ctx, src_mconfig,
sink_mconfig);
}
@@ -842,6 +857,12 @@
case SND_SOC_DAPM_PRE_PMU:
return skl_tplg_mixer_dapm_pre_pmu_event(w, skl);
+ case SND_SOC_DAPM_POST_PMU:
+ return skl_tplg_mixer_dapm_post_pmu_event(w, skl);
+
+ case SND_SOC_DAPM_PRE_PMD:
+ return skl_tplg_mixer_dapm_pre_pmd_event(w, skl);
+
case SND_SOC_DAPM_POST_PMD:
return skl_tplg_mixer_dapm_post_pmd_event(w, skl);
}
@@ -916,6 +937,13 @@
skl_get_module_params(skl->skl_sst, (u32 *)bc->params,
bc->max, bc->param_id, mconfig);
+ /* decrement size for TLV header */
+ size -= 2 * sizeof(u32);
+
+	/* clamp the size so we don't copy out kernel data past the buffer */
+ if (size > bc->max)
+ size = bc->max;
+
if (bc->params) {
if (copy_to_user(data, &bc->param_id, sizeof(u32)))
return -EFAULT;
@@ -1510,6 +1538,7 @@
&skl_tplg_ops, fw, 0);
if (ret < 0) {
dev_err(bus->dev, "tplg component load failed%d\n", ret);
+ release_firmware(fw);
return -EINVAL;
}
diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
index 443a15d..092705e 100644
--- a/sound/soc/intel/skylake/skl.c
+++ b/sound/soc/intel/skylake/skl.c
@@ -614,8 +614,6 @@
goto out_unregister;
/*configure PM */
- pm_runtime_set_autosuspend_delay(bus->dev, SKL_SUSPEND_DELAY);
- pm_runtime_use_autosuspend(bus->dev);
pm_runtime_put_noidle(bus->dev);
pm_runtime_allow(bus->dev);
diff --git a/sound/soc/mediatek/Kconfig b/sound/soc/mediatek/Kconfig
index 15c04e2..9769676 100644
--- a/sound/soc/mediatek/Kconfig
+++ b/sound/soc/mediatek/Kconfig
@@ -9,7 +9,7 @@
config SND_SOC_MT8173_MAX98090
tristate "ASoC Audio driver for MT8173 with MAX98090 codec"
- depends on SND_SOC_MEDIATEK
+ depends on SND_SOC_MEDIATEK && I2C
select SND_SOC_MAX98090
help
This adds ASoC driver for Mediatek MT8173 boards
@@ -19,7 +19,7 @@
config SND_SOC_MT8173_RT5650_RT5676
tristate "ASoC Audio driver for MT8173 with RT5650 RT5676 codecs"
- depends on SND_SOC_MEDIATEK
+ depends on SND_SOC_MEDIATEK && I2C
select SND_SOC_RT5645
select SND_SOC_RT5677
help
diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c
index c866ade..a6c7b8d 100644
--- a/sound/soc/mxs/mxs-saif.c
+++ b/sound/soc/mxs/mxs-saif.c
@@ -381,9 +381,19 @@
__raw_writel(BM_SAIF_CTRL_CLKGATE,
saif->base + SAIF_CTRL + MXS_CLR_ADDR);
+ clk_prepare(saif->clk);
+
return 0;
}
+static void mxs_saif_shutdown(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *cpu_dai)
+{
+ struct mxs_saif *saif = snd_soc_dai_get_drvdata(cpu_dai);
+
+ clk_unprepare(saif->clk);
+}
+
/*
 * Should only be called when the port is inactive,
 * although it can be called multiple times by upper layers.
@@ -424,8 +434,6 @@
return ret;
}
- /* prepare clk in hw_param, enable in trigger */
- clk_prepare(saif->clk);
if (saif != master_saif) {
/*
* Set an initial clock rate for the saif internal logic to work
@@ -611,6 +619,7 @@
static const struct snd_soc_dai_ops mxs_saif_dai_ops = {
.startup = mxs_saif_startup,
+ .shutdown = mxs_saif_shutdown,
.trigger = mxs_saif_trigger,
.prepare = mxs_saif_prepare,
.hw_params = mxs_saif_hw_params,
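The mxs-saif change above is a textbook use of the clk API's two-level model: clk_prepare()/clk_unprepare() may sleep and therefore belong in non-atomic DAI callbacks such as .startup/.shutdown, while clk_enable()/clk_disable() are the atomic-safe half and can stay in .trigger. Hedged sketch with a hypothetical driver struct:

	static int my_dai_startup(struct snd_pcm_substream *substream,
				  struct snd_soc_dai *dai)
	{
		struct my_priv *priv = snd_soc_dai_get_drvdata(dai);

		return clk_prepare(priv->clk);	/* may sleep: fine here */
	}

	static void my_dai_shutdown(struct snd_pcm_substream *substream,
				    struct snd_soc_dai *dai)
	{
		struct my_priv *priv = snd_soc_dai_get_drvdata(dai);

		clk_unprepare(priv->clk);	/* balances the prepare */
	}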
diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c
index 79688aa..4aeb8e1 100644
--- a/sound/soc/qcom/lpass-platform.c
+++ b/sound/soc/qcom/lpass-platform.c
@@ -440,18 +440,18 @@
}
static int lpass_platform_alloc_buffer(struct snd_pcm_substream *substream,
- struct snd_soc_pcm_runtime *soc_runtime)
+ struct snd_soc_pcm_runtime *rt)
{
struct snd_dma_buffer *buf = &substream->dma_buffer;
size_t size = lpass_platform_pcm_hardware.buffer_bytes_max;
buf->dev.type = SNDRV_DMA_TYPE_DEV;
- buf->dev.dev = soc_runtime->dev;
+ buf->dev.dev = rt->platform->dev;
buf->private_data = NULL;
- buf->area = dma_alloc_coherent(soc_runtime->dev, size, &buf->addr,
+ buf->area = dma_alloc_coherent(rt->platform->dev, size, &buf->addr,
GFP_KERNEL);
if (!buf->area) {
- dev_err(soc_runtime->dev, "%s: Could not allocate DMA buffer\n",
+ dev_err(rt->platform->dev, "%s: Could not allocate DMA buffer\n",
__func__);
return -ENOMEM;
}
@@ -461,12 +461,12 @@
}
static void lpass_platform_free_buffer(struct snd_pcm_substream *substream,
- struct snd_soc_pcm_runtime *soc_runtime)
+ struct snd_soc_pcm_runtime *rt)
{
struct snd_dma_buffer *buf = &substream->dma_buffer;
if (buf->area) {
- dma_free_coherent(soc_runtime->dev, buf->bytes, buf->area,
+ dma_free_coherent(rt->dev, buf->bytes, buf->area,
buf->addr);
}
buf->area = NULL;
@@ -499,9 +499,6 @@
snd_soc_pcm_set_drvdata(soc_runtime, data);
- soc_runtime->dev->coherent_dma_mask = DMA_BIT_MASK(32);
- soc_runtime->dev->dma_mask = &soc_runtime->dev->coherent_dma_mask;
-
ret = lpass_platform_alloc_buffer(substream, soc_runtime);
if (ret)
return ret;
diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
index 5a2812f..0d37079 100644
--- a/sound/soc/soc-dapm.c
+++ b/sound/soc/soc-dapm.c
@@ -310,7 +310,7 @@
};
static int dapm_kcontrol_data_alloc(struct snd_soc_dapm_widget *widget,
- struct snd_kcontrol *kcontrol)
+ struct snd_kcontrol *kcontrol, const char *ctrl_name)
{
struct dapm_kcontrol_data *data;
struct soc_mixer_control *mc;
@@ -333,7 +333,7 @@
if (mc->autodisable) {
struct snd_soc_dapm_widget template;
- name = kasprintf(GFP_KERNEL, "%s %s", kcontrol->id.name,
+ name = kasprintf(GFP_KERNEL, "%s %s", ctrl_name,
"Autodisable");
if (!name) {
ret = -ENOMEM;
@@ -371,7 +371,7 @@
if (e->autodisable) {
struct snd_soc_dapm_widget template;
- name = kasprintf(GFP_KERNEL, "%s %s", kcontrol->id.name,
+ name = kasprintf(GFP_KERNEL, "%s %s", ctrl_name,
"Autodisable");
if (!name) {
ret = -ENOMEM;
@@ -871,7 +871,7 @@
kcontrol->private_free = dapm_kcontrol_free;
- ret = dapm_kcontrol_data_alloc(w, kcontrol);
+ ret = dapm_kcontrol_data_alloc(w, kcontrol, name);
if (ret) {
snd_ctl_free_one(kcontrol);
goto exit_free;
diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
index e898b42..1af4f23 100644
--- a/sound/soc/soc-pcm.c
+++ b/sound/soc/soc-pcm.c
@@ -1810,7 +1810,8 @@
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_FREE) &&
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED) &&
- (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP))
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_SUSPEND))
continue;
dev_dbg(be->dev, "ASoC: hw_free BE %s\n",
diff --git a/sound/usb/midi.c b/sound/usb/midi.c
index cc39f63..007cf58 100644
--- a/sound/usb/midi.c
+++ b/sound/usb/midi.c
@@ -2455,7 +2455,6 @@
else
err = snd_usbmidi_create_endpoints(umidi, endpoints);
if (err < 0) {
- snd_usbmidi_free(umidi);
return err;
}
diff --git a/tools/hv/Makefile b/tools/hv/Makefile
index a8ab795..a8c4644 100644
--- a/tools/hv/Makefile
+++ b/tools/hv/Makefile
@@ -5,6 +5,8 @@
WARNINGS = -Wall -Wextra
CFLAGS = $(WARNINGS) -g $(PTHREAD_LIBS) $(shell getconf LFS_CFLAGS)
+CFLAGS += -D__EXPORTED_HEADERS__ -I../../include/uapi -I../../include
+
all: hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon
%: %.c
$(CC) $(CFLAGS) -o $@ $^
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index 81a2eb7..05d8158 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -2068,6 +2068,15 @@
err = -ENOMEM;
goto err_free_queues;
}
+
+ /*
+	 * Since this thread will not be kept in any rbtree nor in a
+	 * list, initialize its list node so that at thread__put() the
+	 * current thread lifetime assumption is kept and we don't segfault
+ * at list_del_init().
+ */
+ INIT_LIST_HEAD(&pt->unknown_thread->node);
+
err = thread__set_comm(pt->unknown_thread, "unknown", 0);
if (err)
goto err_delete_thread;
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 4f7b0ef..813d9b2 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -399,6 +399,9 @@
{
char help[BUFSIZ];
+ if (!e)
+ return;
+
/*
* We get error directly from syscall errno ( > 0),
* or from encoded pointer's error ( < 0).
diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
index 2be10fb..4ce5c5e 100644
--- a/tools/perf/util/probe-finder.c
+++ b/tools/perf/util/probe-finder.c
@@ -686,8 +686,9 @@
pf->fb_ops = NULL;
#if _ELFUTILS_PREREQ(0, 142)
} else if (nops == 1 && pf->fb_ops[0].atom == DW_OP_call_frame_cfa &&
- pf->cfi != NULL) {
- if (dwarf_cfi_addrframe(pf->cfi, pf->addr, &frame) != 0 ||
+ (pf->cfi_eh != NULL || pf->cfi_dbg != NULL)) {
+ if ((dwarf_cfi_addrframe(pf->cfi_eh, pf->addr, &frame) != 0 &&
+ (dwarf_cfi_addrframe(pf->cfi_dbg, pf->addr, &frame) != 0)) ||
dwarf_frame_cfa(frame, &pf->fb_ops, &nops) != 0) {
pr_warning("Failed to get call frame on 0x%jx\n",
(uintmax_t)pf->addr);
@@ -1015,8 +1016,7 @@
return DWARF_CB_OK;
}
-/* Find probe points from debuginfo */
-static int debuginfo__find_probes(struct debuginfo *dbg,
+static int debuginfo__find_probe_location(struct debuginfo *dbg,
struct probe_finder *pf)
{
struct perf_probe_point *pp = &pf->pev->point;
@@ -1025,27 +1025,6 @@
Dwarf_Die *diep;
int ret = 0;
-#if _ELFUTILS_PREREQ(0, 142)
- Elf *elf;
- GElf_Ehdr ehdr;
- GElf_Shdr shdr;
-
- /* Get the call frame information from this dwarf */
- elf = dwarf_getelf(dbg->dbg);
- if (elf == NULL)
- return -EINVAL;
-
- if (gelf_getehdr(elf, &ehdr) == NULL)
- return -EINVAL;
-
- if (elf_section_by_name(elf, &ehdr, &shdr, ".eh_frame", NULL) &&
- shdr.sh_type == SHT_PROGBITS) {
- pf->cfi = dwarf_getcfi_elf(elf);
- } else {
- pf->cfi = dwarf_getcfi(dbg->dbg);
- }
-#endif
-
off = 0;
pf->lcache = intlist__new(NULL);
if (!pf->lcache)
@@ -1108,6 +1087,39 @@
return ret;
}
+/* Find probe points from debuginfo */
+static int debuginfo__find_probes(struct debuginfo *dbg,
+ struct probe_finder *pf)
+{
+ int ret = 0;
+
+#if _ELFUTILS_PREREQ(0, 142)
+ Elf *elf;
+ GElf_Ehdr ehdr;
+ GElf_Shdr shdr;
+
+ if (pf->cfi_eh || pf->cfi_dbg)
+ return debuginfo__find_probe_location(dbg, pf);
+
+ /* Get the call frame information from this dwarf */
+ elf = dwarf_getelf(dbg->dbg);
+ if (elf == NULL)
+ return -EINVAL;
+
+ if (gelf_getehdr(elf, &ehdr) == NULL)
+ return -EINVAL;
+
+ if (elf_section_by_name(elf, &ehdr, &shdr, ".eh_frame", NULL) &&
+ shdr.sh_type == SHT_PROGBITS)
+ pf->cfi_eh = dwarf_getcfi_elf(elf);
+
+ pf->cfi_dbg = dwarf_getcfi(dbg->dbg);
+#endif
+
+ ret = debuginfo__find_probe_location(dbg, pf);
+ return ret;
+}
+
struct local_vars_finder {
struct probe_finder *pf;
struct perf_probe_arg *args;
diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h
index bed8271..0aec770 100644
--- a/tools/perf/util/probe-finder.h
+++ b/tools/perf/util/probe-finder.h
@@ -76,7 +76,10 @@
/* For variable searching */
#if _ELFUTILS_PREREQ(0, 142)
- Dwarf_CFI *cfi; /* Call Frame Information */
+ /* Call Frame Information from .eh_frame */
+ Dwarf_CFI *cfi_eh;
+ /* Call Frame Information from .debug_frame */
+ Dwarf_CFI *cfi_dbg;
#endif
Dwarf_Op *fb_ops; /* Frame base attribute */
struct perf_probe_arg *pvar; /* Current target variable */
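
The net effect of the probe-finder changes is a two-level lookup: prefer call frame information from .eh_frame, and fall back to the .debug_frame copy. A condensed sketch of that order, assuming elfutils' libdw; it mirrors the hunks above rather than adding behaviour, and the NULL guards are mine:

#include <elfutils/libdw.h>

static int get_frame_cfa(Dwarf_CFI *cfi_eh, Dwarf_CFI *cfi_dbg,
			 Dwarf_Addr addr, Dwarf_Frame **frame)
{
	/* dwarf_cfi_addrframe() returns 0 on success */
	if (cfi_eh && dwarf_cfi_addrframe(cfi_eh, addr, frame) == 0)
		return 0;
	if (cfi_dbg && dwarf_cfi_addrframe(cfi_dbg, addr, frame) == 0)
		return 0;
	return -1;	/* no CFI covers this address */
}
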
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 2b58edc..afb0c45 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -311,6 +311,16 @@
aggr->val = aggr->ena = aggr->run = 0;
+ /*
+ * We calculate the counter's data every interval,
+ * and the display code shows the ps->res_stats
+ * avg value. We need to zero the stats for
+ * interval mode, otherwise the overall running
+ * average would be shown for each interval.
+ */
+ if (config->interval)
+ init_stats(ps->res_stats);
+
if (counter->per_pkg)
zero_per_pkg(counter);
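
The comment describes the symptom; a toy model makes it concrete. If a running mean is never reset, every interval reports the cumulative average rather than its own. The stats helpers below are hypothetical stand-ins, not perf's:

#include <stdio.h>

struct stats {
	double mean;
	unsigned long n;
};

static void init_stats(struct stats *s)
{
	s->mean = 0.0;
	s->n = 0;
}

static void update_stats(struct stats *s, double val)
{
	s->n++;
	s->mean += (val - s->mean) / s->n;	/* incremental running mean */
}

int main(void)
{
	double samples[2][3] = { { 10, 10, 10 }, { 40, 40, 40 } };
	struct stats s;
	int i, j;

	for (i = 0; i < 2; i++) {
		init_stats(&s);	/* the per-interval reset the patch adds */
		for (j = 0; j < 3; j++)
			update_stats(&s, samples[i][j]);
		printf("interval %d avg = %.1f\n", i, s.mean);
	}
	return 0;	/* prints 10.0 then 40.0; without the reset, 10.0 then 25.0 */
}
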
diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
index 90bd2ea..b3281dc 100644
--- a/tools/testing/nvdimm/test/nfit.c
+++ b/tools/testing/nvdimm/test/nfit.c
@@ -217,13 +217,16 @@
return rc;
}
+#define NFIT_TEST_ARS_RECORDS 4
+
static int nfit_test_cmd_ars_cap(struct nd_cmd_ars_cap *nd_cmd,
unsigned int buf_len)
{
if (buf_len < sizeof(*nd_cmd))
return -EINVAL;
- nd_cmd->max_ars_out = 256;
+ nd_cmd->max_ars_out = sizeof(struct nd_cmd_ars_status)
+ + NFIT_TEST_ARS_RECORDS * sizeof(struct nd_ars_record);
nd_cmd->status = (ND_ARS_PERSISTENT | ND_ARS_VOLATILE) << 16;
return 0;
@@ -246,7 +249,8 @@
if (buf_len < sizeof(*nd_cmd))
return -EINVAL;
- nd_cmd->out_length = 256;
+ nd_cmd->out_length = sizeof(struct nd_cmd_ars_status);
+ /* TODO: emit error records */
nd_cmd->num_records = 0;
nd_cmd->address = 0;
nd_cmd->length = -1ULL;
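
The sizing rule the test now follows generalizes: when a reply is a fixed header followed by a variable number of records, derive the maximum output size from the types instead of hard-coding 256. A generic sketch with hypothetical struct names:

#include <stdio.h>

struct ars_record {
	unsigned long long addr;
	unsigned long long len;
};

struct ars_status {
	unsigned int status;
	unsigned int out_length;
	struct ars_record records[];	/* flexible array member */
};

#define MAX_RECORDS 4

int main(void)
{
	size_t max_out = sizeof(struct ars_status)
			 + MAX_RECORDS * sizeof(struct ars_record);

	printf("max_ars_out = %zu bytes\n", max_out);
	return 0;
}
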
diff --git a/tools/testing/selftests/efivarfs/efivarfs.sh b/tools/testing/selftests/efivarfs/efivarfs.sh
index 77edcdc..0572784 100755
--- a/tools/testing/selftests/efivarfs/efivarfs.sh
+++ b/tools/testing/selftests/efivarfs/efivarfs.sh
@@ -88,7 +88,11 @@
exit 1
fi
- rm $file
+ rm $file 2>/dev/null
+ if [ $? -ne 0 ]; then
+ chattr -i $file
+ rm $file
+ fi
if [ -e $file ]; then
echo "$file couldn't be deleted" >&2
@@ -111,6 +115,7 @@
exit 1
fi
+ chattr -i $file
printf "$attrs" > $file
if [ -e $file ]; then
@@ -141,7 +146,11 @@
echo "$file could not be created" >&2
ret=1
else
- rm $file
+ rm $file 2>/dev/null
+ if [ $? -ne 0 ]; then
+ chattr -i $file
+ rm $file
+ fi
fi
done
@@ -174,7 +183,11 @@
if [ -e $file ]; then
echo "Creating $file should have failed" >&2
- rm $file
+ rm $file 2>/dev/null
+ if [ $? -ne 0 ]; then
+ chattr -i $file
+ rm $file
+ fi
ret=1
fi
done
diff --git a/tools/testing/selftests/efivarfs/open-unlink.c b/tools/testing/selftests/efivarfs/open-unlink.c
index 8c07644..4af74f7 100644
--- a/tools/testing/selftests/efivarfs/open-unlink.c
+++ b/tools/testing/selftests/efivarfs/open-unlink.c
@@ -1,10 +1,68 @@
+#include <errno.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
+#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
+#include <linux/fs.h>
+
+static int set_immutable(const char *path, int immutable)
+{
+ unsigned int flags;
+ int fd;
+ int rc;
+ int error;
+
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return fd;
+
+ rc = ioctl(fd, FS_IOC_GETFLAGS, &flags);
+ if (rc < 0) {
+ error = errno;
+ close(fd);
+ errno = error;
+ return rc;
+ }
+
+ if (immutable)
+ flags |= FS_IMMUTABLE_FL;
+ else
+ flags &= ~FS_IMMUTABLE_FL;
+
+ rc = ioctl(fd, FS_IOC_SETFLAGS, &flags);
+ error = errno;
+ close(fd);
+ errno = error;
+ return rc;
+}
+
+static int get_immutable(const char *path)
+{
+ unsigned int flags;
+ int fd;
+ int rc;
+ int error;
+
+ fd = open(path, O_RDONLY);
+ if (fd < 0)
+ return fd;
+
+ rc = ioctl(fd, FS_IOC_GETFLAGS, &flags);
+ if (rc < 0) {
+ error = errno;
+ close(fd);
+ errno = error;
+ return rc;
+ }
+ close(fd);
+ if (flags & FS_IMMUTABLE_FL)
+ return 1;
+ return 0;
+}
int main(int argc, char **argv)
{
@@ -27,7 +85,7 @@
buf[4] = 0;
/* create a test variable */
- fd = open(path, O_WRONLY | O_CREAT);
+ fd = open(path, O_WRONLY | O_CREAT, 0600);
if (fd < 0) {
perror("open(O_WRONLY)");
return EXIT_FAILURE;
@@ -41,6 +99,18 @@
close(fd);
+ rc = get_immutable(path);
+ if (rc < 0) {
+ perror("ioctl(FS_IOC_GETFLAGS)");
+ return EXIT_FAILURE;
+ } else if (rc) {
+ rc = set_immutable(path, 0);
+ if (rc < 0) {
+ perror("ioctl(FS_IOC_SETFLAGS)");
+ return EXIT_FAILURE;
+ }
+ }
+
fd = open(path, O_RDONLY);
if (fd < 0) {
perror("open");
diff --git a/tools/testing/selftests/ftrace/test.d/instances/instance.tc b/tools/testing/selftests/ftrace/test.d/instances/instance.tc
index 773e276..1e1abe0 100644
--- a/tools/testing/selftests/ftrace/test.d/instances/instance.tc
+++ b/tools/testing/selftests/ftrace/test.d/instances/instance.tc
@@ -39,28 +39,23 @@
}
instance_slam &
-x=`jobs -l`
-p1=`echo $x | cut -d' ' -f2`
+p1=$!
echo $p1
instance_slam &
-x=`jobs -l | tail -1`
-p2=`echo $x | cut -d' ' -f2`
+p2=$!
echo $p2
instance_slam &
-x=`jobs -l | tail -1`
-p3=`echo $x | cut -d' ' -f2`
+p3=$!
echo $p3
instance_slam &
-x=`jobs -l | tail -1`
-p4=`echo $x | cut -d' ' -f2`
+p4=$!
echo $p4
instance_slam &
-x=`jobs -l | tail -1`
-p5=`echo $x | cut -d' ' -f2`
+p5=$!
echo $p5
ls -lR >/dev/null
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 69bca18..ea60646 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -143,7 +143,7 @@
* Check if there was a change in the timer state (should we raise or lower
* the line level to the GIC).
*/
-static void kvm_timer_update_state(struct kvm_vcpu *vcpu)
+static int kvm_timer_update_state(struct kvm_vcpu *vcpu)
{
struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
@@ -154,10 +154,12 @@
* until we call this function from kvm_timer_flush_hwstate.
*/
if (!vgic_initialized(vcpu->kvm))
- return;
+ return -ENODEV;
if (kvm_timer_should_fire(vcpu) != timer->irq.level)
kvm_timer_update_irq(vcpu, !timer->irq.level);
+
+ return 0;
}
/*
@@ -218,7 +220,8 @@
bool phys_active;
int ret;
- kvm_timer_update_state(vcpu);
+ if (kvm_timer_update_state(vcpu))
+ return;
/*
* If we enter the guest with the virtual input level to the VGIC
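
The change above is a common shape: a void helper gains an error return for the "dependency not initialized yet" case so its caller can bail out instead of acting on state that was never refreshed. A minimal sketch with hypothetical names, not the KVM code:

#include <errno.h>
#include <stdbool.h>

struct timer_state {
	bool dep_ready;
	bool level;
};

static int update_state(struct timer_state *t)
{
	if (!t->dep_ready)
		return -ENODEV;	/* nothing to update against yet */
	t->level = !t->level;
	return 0;
}

static void flush_state(struct timer_state *t)
{
	if (update_state(t))
		return;		/* don't touch hardware on a stale view */
	/* ... program the interrupt controller from t->level ... */
}

int main(void)
{
	struct timer_state t = { .dep_ready = false };

	flush_state(&t);	/* returns early: dependency not ready */
	t.dep_ready = true;
	flush_state(&t);	/* now updates and acts on the level */
	return 0;
}
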
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 043032c..00429b3 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1875,8 +1875,8 @@
static int vgic_vcpu_init_maps(struct kvm_vcpu *vcpu, int nr_irqs)
{
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
-
- int sz = (nr_irqs - VGIC_NR_PRIVATE_IRQS) / 8;
+ int nr_longs = BITS_TO_LONGS(nr_irqs - VGIC_NR_PRIVATE_IRQS);
+ int sz = nr_longs * sizeof(unsigned long);
vgic_cpu->pending_shared = kzalloc(sz, GFP_KERNEL);
vgic_cpu->active_shared = kzalloc(sz, GFP_KERNEL);
vgic_cpu->pend_act_shared = kzalloc(sz, GFP_KERNEL);
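
Why the switch from nr/8 to BITS_TO_LONGS() matters: dividing by 8 rounds the byte count down and is not long-aligned, so a shared-IRQ count smaller than BITS_PER_LONG allocates too little (even zero bytes), while the bitmap helpers always access whole unsigned longs. A standalone demonstration, with the macros copied from the kernel's definitions:

#include <stdio.h>

#define BITS_PER_LONG		(8 * sizeof(unsigned long))
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_LONG)

int main(void)
{
	unsigned int nr_bits = 4;	/* fewer bits than one long */

	printf("old size: %u bytes\n", nr_bits / 8);	/* 0: under-allocated */
	printf("new size: %zu bytes\n",
	       BITS_TO_LONGS(nr_bits) * sizeof(unsigned long));	/* 8 on LP64 */
	return 0;
}
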
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 3531599..db2dd33 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -172,7 +172,7 @@
* do alloc nowait since if we are going to sleep anyway we
* may as well sleep faulting in page
*/
- work = kmem_cache_zalloc(async_pf_cache, GFP_NOWAIT);
+ work = kmem_cache_zalloc(async_pf_cache, GFP_NOWAIT | __GFP_NOWARN);
if (!work)
return 0;
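
The added __GFP_NOWARN is the usual companion to an opportunistic GFP_NOWAIT allocation: the NULL return is expected and handled (here by the slower, sleeping fault-in path the comment mentions), so the allocator's failure warning would only be noise. A kernel-idiom sketch, not a standalone program; try_alloc_work() and struct my_work are hypothetical names:

static struct my_work *try_alloc_work(struct kmem_cache *cache)
{
	/*
	 * GFP_NOWAIT: never sleep in the allocator, the caller has a
	 * synchronous fallback it prefers to use instead.
	 * __GFP_NOWARN: a NULL return is expected and handled, so don't
	 * log an allocation-failure backtrace for it.
	 */
	return kmem_cache_zalloc(cache, GFP_NOWAIT | __GFP_NOWARN);
}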