Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
Pull userns fixes from Eric W Biederman:
"The bulk of the changes are fixing the worst consequences of the user
namespace design oversight in not considering what happens when one
namespace starts off as a clone of another namespace, as happens with
the mount namespace.
The rest of the changes are just plain bug fixes.
Many thanks to Andy Lutomirski for pointing out many of these issues."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
userns: Restrict when proc and sysfs can be mounted
ipc: Restrict mounting the mqueue filesystem
vfs: Carefully propagate mounts across user namespaces
vfs: Add a mount flag to lock read only bind mounts
userns: Don't allow creation if the user is chrooted
yama: Better permission check for ptraceme
pid: Handle the exit of a multi-threaded init.
scm: Require CAP_SYS_ADMIN over the current pidns to spoof pids.
diff --git a/CREDITS b/CREDITS
index 78163cb..afaa7ce 100644
--- a/CREDITS
+++ b/CREDITS
@@ -1510,6 +1510,14 @@
D: Cobalt Networks (x86) support
D: This-and-That
+N: Mark M. Hoffman
+E: mhoffman@lightlink.com
+D: asb100, lm93 and smsc47b397 hardware monitoring drivers
+D: hwmon subsystem core
+D: hwmon subsystem maintainer
+D: i2c-sis96x and i2c-stub SMBus drivers
+S: USA
+
N: Dirk Hohndel
E: hohndel@suse.de
D: The XFree86[tm] Project
diff --git a/Documentation/hwmon/lm75 b/Documentation/hwmon/lm75
index c91a1d1..69af1c7 100644
--- a/Documentation/hwmon/lm75
+++ b/Documentation/hwmon/lm75
@@ -23,7 +23,7 @@
Datasheet: Publicly available at the Maxim website
http://www.maxim-ic.com/
* Microchip (TelCom) TCN75
- Prefix: 'lm75'
+ Prefix: 'tcn75'
Addresses scanned: none
Datasheet: Publicly available at the Microchip website
http://www.microchip.com/
diff --git a/Documentation/i2c/busses/i2c-diolan-u2c b/Documentation/i2c/busses/i2c-diolan-u2c
index 30fe4bb..0d6018c 100644
--- a/Documentation/i2c/busses/i2c-diolan-u2c
+++ b/Documentation/i2c/busses/i2c-diolan-u2c
@@ -5,7 +5,7 @@
Documentation:
http://www.diolan.com/i2c/u2c12.html
-Author: Guenter Roeck <guenter.roeck@ericsson.com>
+Author: Guenter Roeck <linux@roeck-us.net>
Description
-----------
diff --git a/Documentation/networking/ipvs-sysctl.txt b/Documentation/networking/ipvs-sysctl.txt
index f2a2488..9573d0c 100644
--- a/Documentation/networking/ipvs-sysctl.txt
+++ b/Documentation/networking/ipvs-sysctl.txt
@@ -15,6 +15,13 @@
enabled and the variable is automatically set to 2, otherwise
the strategy is disabled and the variable is set to 1.
+backup_only - BOOLEAN
+ 0 - disabled (default)
+ not 0 - enabled
+
+ If set, disable the director function while the server is
+ in backup mode to avoid packet loops for DR/TUN methods.
+
conntrack - BOOLEAN
0 - disabled (default)
not 0 - enabled
diff --git a/Documentation/sound/alsa/ALSA-Configuration.txt b/Documentation/sound/alsa/ALSA-Configuration.txt
index ce6581c..4499bd9 100644
--- a/Documentation/sound/alsa/ALSA-Configuration.txt
+++ b/Documentation/sound/alsa/ALSA-Configuration.txt
@@ -912,7 +912,7 @@
models depending on the codec chip. The list of available models
is found in HD-Audio-Models.txt
- The model name "genric" is treated as a special case. When this
+ The model name "generic" is treated as a special case. When this
model is given, the driver uses the generic codec parser without
"codec-patch". It's sometimes good for testing and debugging.
diff --git a/Documentation/sound/alsa/seq_oss.html b/Documentation/sound/alsa/seq_oss.html
index d9776cf..9663b45 100644
--- a/Documentation/sound/alsa/seq_oss.html
+++ b/Documentation/sound/alsa/seq_oss.html
@@ -285,7 +285,7 @@
<H4>
7.2.4 Close Callback</H4>
The <TT>close</TT> callback is called when this device is closed by the
-applicaion. If any private data was allocated in open callback, it must
+application. If any private data was allocated in open callback, it must
be released in the close callback. The deletion of ALSA port should be
done here, too. This callback must not be NULL.
<H4>
diff --git a/MAINTAINERS b/MAINTAINERS
index 50b4d73..72b0843 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1338,12 +1338,6 @@
F: drivers/platform/x86/asus*.c
F: drivers/platform/x86/eeepc*.c
-ASUS ASB100 HARDWARE MONITOR DRIVER
-M: "Mark M. Hoffman" <mhoffman@lightlink.com>
-L: lm-sensors@lm-sensors.org
-S: Maintained
-F: drivers/hwmon/asb100.c
-
ASYNCHRONOUS TRANSFERS/TRANSFORMS (IOAT) API
M: Dan Williams <djbw@fb.com>
W: http://sourceforge.net/projects/xscaleiop
@@ -1467,6 +1461,12 @@
F: drivers/dma/at_hdmac_regs.h
F: include/linux/platform_data/dma-atmel.h
+ATMEL I2C DRIVER
+M: Ludovic Desroches <ludovic.desroches@atmel.com>
+L: linux-i2c@vger.kernel.org
+S: Supported
+F: drivers/i2c/busses/i2c-at91.c
+
ATMEL ISI DRIVER
M: Josh Wu <josh.wu@atmel.com>
L: linux-media@vger.kernel.org
@@ -2629,7 +2629,7 @@
INTEL DRM DRIVERS (excluding Poulsbo, Moorestown and derivative chipsets)
M: Daniel Vetter <daniel.vetter@ffwll.ch>
-L: intel-gfx@lists.freedesktop.org (subscribers-only)
+L: intel-gfx@lists.freedesktop.org
L: dri-devel@lists.freedesktop.org
T: git git://people.freedesktop.org/~danvet/drm-intel
S: Supported
@@ -3851,7 +3851,7 @@
F: Documentation/i2c/busses/i2c-ismt
I2C/SMBUS STUB DRIVER
-M: "Mark M. Hoffman" <mhoffman@lightlink.com>
+M: Jean Delvare <khali@linux-fr.org>
L: linux-i2c@vger.kernel.org
S: Maintained
F: drivers/i2c/i2c-stub.c
@@ -5647,6 +5647,14 @@
F: drivers/video/riva/
F: drivers/video/nvidia/
+NVM EXPRESS DRIVER
+M: Matthew Wilcox <willy@linux.intel.com>
+L: linux-nvme@lists.infradead.org
+T: git git://git.infradead.org/users/willy/linux-nvme.git
+S: Supported
+F: drivers/block/nvme.c
+F: include/linux/nvme.h
+
OMAP SUPPORT
M: Tony Lindgren <tony@atomide.com>
L: linux-omap@vger.kernel.org
@@ -5675,7 +5683,7 @@
F: arch/arm/*omap*/*clock*
OMAP POWER MANAGEMENT SUPPORT
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: linux-omap@vger.kernel.org
S: Maintained
F: arch/arm/*omap*/*pm*
@@ -5769,7 +5777,7 @@
OMAP GPIO DRIVER
M: Santosh Shilimkar <santosh.shilimkar@ti.com>
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: linux-omap@vger.kernel.org
S: Maintained
F: drivers/gpio/gpio-omap.c
@@ -7165,7 +7173,7 @@
TI DAVINCI MACHINE SUPPORT
M: Sekhar Nori <nsekhar@ti.com>
-M: Kevin Hilman <khilman@ti.com>
+M: Kevin Hilman <khilman@deeprootsystems.com>
L: davinci-linux-open-source@linux.davincidsp.com (moderated for non-subscribers)
T: git git://gitorious.org/linux-davinci/linux-davinci.git
Q: http://patchwork.kernel.org/project/linux-davinci/list/
@@ -7198,13 +7206,6 @@
S: Maintained
F: drivers/net/ethernet/sis/sis900.*
-SIS 96X I2C/SMBUS DRIVER
-M: "Mark M. Hoffman" <mhoffman@lightlink.com>
-L: linux-i2c@vger.kernel.org
-S: Maintained
-F: Documentation/i2c/busses/i2c-sis96x
-F: drivers/i2c/busses/i2c-sis96x.c
-
SIS FRAMEBUFFER DRIVER
M: Thomas Winischhofer <thomas@winischhofer.net>
W: http://www.winischhofer.net/linuxsisvga.shtml
@@ -7282,7 +7283,7 @@
F: drivers/hwmon/sch5627.c
SMSC47B397 HARDWARE MONITOR DRIVER
-M: "Mark M. Hoffman" <mhoffman@lightlink.com>
+M: Jean Delvare <khali@linux-fr.org>
L: lm-sensors@lm-sensors.org
S: Maintained
F: Documentation/hwmon/smsc47b397
diff --git a/Makefile b/Makefile
index 22113a7..54d2b2a 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 9
SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
NAME = Unicycling Gorilla
# *DOCUMENTATION*
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 2c3bdce..13b7394 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -49,7 +49,6 @@
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_UID16
- select VIRT_TO_BUS
select KTIME_SCALAR
select PERF_USE_VMALLOC
select RTC_LIB
@@ -743,6 +742,7 @@
select NEED_MACH_IO_H
select NEED_MACH_MEMORY_H
select NO_IOPORT
+ select VIRT_TO_BUS
help
On the Acorn Risc-PC, Linux can support the internal IDE disk and
CD-ROM interface, serial and parallel port, and the floppy drive.
@@ -878,6 +878,7 @@
select ISA_DMA
select NEED_MACH_MEMORY_H
select PCI
+ select VIRT_TO_BUS
select ZONE_DMA
help
Support for the StrongARM based Digital DNARD machine, also known
@@ -1005,12 +1006,12 @@
bool
config ARCH_MULTI_V6
- bool "ARMv6 based platforms (ARM11, Scorpion, ...)"
+ bool "ARMv6 based platforms (ARM11)"
select ARCH_MULTI_V6_V7
select CPU_V6
config ARCH_MULTI_V7
- bool "ARMv7 based platforms (Cortex-A, PJ4, Krait)"
+ bool "ARMv7 based platforms (Cortex-A, PJ4, Scorpion, Krait)"
default y
select ARCH_MULTI_V6_V7
select ARCH_VEXPRESS
@@ -1461,10 +1462,6 @@
bool
select ISA_DMA_API
-config ARCH_NO_VIRT_TO_BUS
- def_bool y
- depends on !ARCH_RPC && !ARCH_NETWINDER && !ARCH_SHARK
-
# Select ISA DMA interface
config ISA_DMA_API
bool
diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
index ecfcdba..9b31f43 100644
--- a/arch/arm/Kconfig.debug
+++ b/arch/arm/Kconfig.debug
@@ -495,6 +495,7 @@
DEBUG_IMX53_UART || \
DEBUG_IMX6Q_UART
default 1
+ depends on ARCH_MXC
help
Choose UART port on which kernel low-level debug messages
should be output.
diff --git a/arch/arm/boot/dts/at91sam9x5.dtsi b/arch/arm/boot/dts/at91sam9x5.dtsi
index aa98e64..a98c0d5 100644
--- a/arch/arm/boot/dts/at91sam9x5.dtsi
+++ b/arch/arm/boot/dts/at91sam9x5.dtsi
@@ -238,8 +238,32 @@
nand {
pinctrl_nand: nand-0 {
atmel,pins =
- <3 4 0x0 0x1 /* PD5 gpio RDY pin pull_up */
- 3 5 0x0 0x1>; /* PD4 gpio enable pin pull_up */
+ <3 0 0x1 0x0 /* PD0 periph A Read Enable */
+ 3 1 0x1 0x0 /* PD1 periph A Write Enable */
+ 3 2 0x1 0x0 /* PD2 periph A Address Latch Enable */
+ 3 3 0x1 0x0 /* PD3 periph A Command Latch Enable */
+ 3 4 0x0 0x1 /* PD4 gpio Chip Enable pin pull_up */
+ 3 5 0x0 0x1 /* PD5 gpio RDY/BUSY pin pull_up */
+ 3 6 0x1 0x0 /* PD6 periph A Data bit 0 */
+ 3 7 0x1 0x0 /* PD7 periph A Data bit 1 */
+ 3 8 0x1 0x0 /* PD8 periph A Data bit 2 */
+ 3 9 0x1 0x0 /* PD9 periph A Data bit 3 */
+ 3 10 0x1 0x0 /* PD10 periph A Data bit 4 */
+ 3 11 0x1 0x0 /* PD11 periph A Data bit 5 */
+ 3 12 0x1 0x0 /* PD12 periph A Data bit 6 */
+ 3 13 0x1 0x0>; /* PD13 periph A Data bit 7 */
+ };
+
+ pinctrl_nand_16bits: nand_16bits-0 {
+ atmel,pins =
+ <3 14 0x1 0x0 /* PD14 periph A Data bit 8 */
+ 3 15 0x1 0x0 /* PD15 periph A Data bit 9 */
+ 3 16 0x1 0x0 /* PD16 periph A Data bit 10 */
+ 3 17 0x1 0x0 /* PD17 periph A Data bit 11 */
+ 3 18 0x1 0x0 /* PD18 periph A Data bit 12 */
+ 3 19 0x1 0x0 /* PD19 periph A Data bit 13 */
+ 3 20 0x1 0x0 /* PD20 periph A Data bit 14 */
+ 3 21 0x1 0x0>; /* PD21 periph A Data bit 15 */
};
};
diff --git a/arch/arm/boot/dts/exynos4.dtsi b/arch/arm/boot/dts/exynos4.dtsi
index e1347fc..1a62bcf 100644
--- a/arch/arm/boot/dts/exynos4.dtsi
+++ b/arch/arm/boot/dts/exynos4.dtsi
@@ -275,18 +275,27 @@
compatible = "arm,pl330", "arm,primecell";
reg = <0x12680000 0x1000>;
interrupts = <0 35 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
};
pdma1: pdma@12690000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x12690000 0x1000>;
interrupts = <0 36 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
};
mdma1: mdma@12850000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x12850000 0x1000>;
interrupts = <0 34 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <1>;
};
};
};
diff --git a/arch/arm/boot/dts/exynos5440.dtsi b/arch/arm/boot/dts/exynos5440.dtsi
index 5f3562a..9a99755 100644
--- a/arch/arm/boot/dts/exynos5440.dtsi
+++ b/arch/arm/boot/dts/exynos5440.dtsi
@@ -142,12 +142,18 @@
compatible = "arm,pl330", "arm,primecell";
reg = <0x120000 0x1000>;
interrupts = <0 34 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
};
pdma1: pdma@121B0000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x121000 0x1000>;
interrupts = <0 35 0>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
};
};
diff --git a/arch/arm/boot/dts/tegra20.dtsi b/arch/arm/boot/dts/tegra20.dtsi
index 48d00a0..3d3f64d 100644
--- a/arch/arm/boot/dts/tegra20.dtsi
+++ b/arch/arm/boot/dts/tegra20.dtsi
@@ -385,7 +385,7 @@
spi@7000d800 {
compatible = "nvidia,tegra20-slink";
- reg = <0x7000d480 0x200>;
+ reg = <0x7000d800 0x200>;
interrupts = <0 83 0x04>;
nvidia,dma-request-selector = <&apbdma 17>;
#address-cells = <1>;
diff --git a/arch/arm/boot/dts/tegra30.dtsi b/arch/arm/boot/dts/tegra30.dtsi
index 9d87a3f..dbf46c2 100644
--- a/arch/arm/boot/dts/tegra30.dtsi
+++ b/arch/arm/boot/dts/tegra30.dtsi
@@ -372,7 +372,7 @@
spi@7000d800 {
compatible = "nvidia,tegra30-slink", "nvidia,tegra20-slink";
- reg = <0x7000d480 0x200>;
+ reg = <0x7000d800 0x200>;
interrupts = <0 83 0x04>;
nvidia,dma-request-selector = <&apbdma 17>;
#address-cells = <1>;
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 31644f1..79078ed 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -480,7 +480,7 @@
evt->features = CLOCK_EVT_FEAT_ONESHOT |
CLOCK_EVT_FEAT_PERIODIC |
CLOCK_EVT_FEAT_DUMMY;
- evt->rating = 400;
+ evt->rating = 100;
evt->mult = 1;
evt->set_mode = broadcast_timer_set_mode;
diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S
index d912e73..94b0650 100644
--- a/arch/arm/lib/memset.S
+++ b/arch/arm/lib/memset.S
@@ -14,31 +14,15 @@
.text
.align 5
- .word 0
-
-1: subs r2, r2, #4 @ 1 do we have enough
- blt 5f @ 1 bytes to align with?
- cmp r3, #2 @ 1
- strltb r1, [ip], #1 @ 1
- strleb r1, [ip], #1 @ 1
- strb r1, [ip], #1 @ 1
- add r2, r2, r3 @ 1 (r2 = r2 - (4 - r3))
-/*
- * The pointer is now aligned and the length is adjusted. Try doing the
- * memset again.
- */
ENTRY(memset)
-/*
- * Preserve the contents of r0 for the return value.
- */
- mov ip, r0
- ands r3, ip, #3 @ 1 unaligned?
- bne 1b @ 1
+ ands r3, r0, #3 @ 1 unaligned?
+ mov ip, r0 @ preserve r0 as return value
+ bne 6f @ 1
/*
* we know that the pointer in ip is aligned to a word boundary.
*/
- orr r1, r1, r1, lsl #8
+1: orr r1, r1, r1, lsl #8
orr r1, r1, r1, lsl #16
mov r3, r1
cmp r2, #16
@@ -127,4 +111,13 @@
tst r2, #1
strneb r1, [ip], #1
mov pc, lr
+
+6: subs r2, r2, #4 @ 1 do we have enough
+ blt 5b @ 1 bytes to align with?
+ cmp r3, #2 @ 1
+ strltb r1, [ip], #1 @ 1
+ strleb r1, [ip], #1 @ 1
+ strb r1, [ip], #1 @ 1
+ add r2, r2, r3 @ 1 (r2 = r2 - (4 - r3))
+ b 1b
ENDPROC(memset)
diff --git a/arch/arm/mach-at91/include/mach/gpio.h b/arch/arm/mach-at91/include/mach/gpio.h
index eed465a..5fc2377 100644
--- a/arch/arm/mach-at91/include/mach/gpio.h
+++ b/arch/arm/mach-at91/include/mach/gpio.h
@@ -209,6 +209,14 @@
extern void at91_gpio_suspend(void);
extern void at91_gpio_resume(void);
+#ifdef CONFIG_PINCTRL_AT91
+extern void at91_pinctrl_gpio_suspend(void);
+extern void at91_pinctrl_gpio_resume(void);
+#else
+static inline void at91_pinctrl_gpio_suspend(void) {}
+static inline void at91_pinctrl_gpio_resume(void) {}
+#endif
+
#endif /* __ASSEMBLY__ */
#endif
diff --git a/arch/arm/mach-at91/irq.c b/arch/arm/mach-at91/irq.c
index 8e21026..e0ca591 100644
--- a/arch/arm/mach-at91/irq.c
+++ b/arch/arm/mach-at91/irq.c
@@ -92,23 +92,21 @@
void at91_irq_suspend(void)
{
- int i = 0, bit;
+ int bit = -1;
if (has_aic5()) {
/* disable enabled irqs */
- while ((bit = find_next_bit(backups, n_irqs, i)) < n_irqs) {
+ while ((bit = find_next_bit(backups, n_irqs, bit + 1)) < n_irqs) {
at91_aic_write(AT91_AIC5_SSR,
bit & AT91_AIC5_INTSEL_MSK);
at91_aic_write(AT91_AIC5_IDCR, 1);
- i = bit;
}
/* enable wakeup irqs */
- i = 0;
- while ((bit = find_next_bit(wakeups, n_irqs, i)) < n_irqs) {
+ bit = -1;
+ while ((bit = find_next_bit(wakeups, n_irqs, bit + 1)) < n_irqs) {
at91_aic_write(AT91_AIC5_SSR,
bit & AT91_AIC5_INTSEL_MSK);
at91_aic_write(AT91_AIC5_IECR, 1);
- i = bit;
}
} else {
at91_aic_write(AT91_AIC_IDCR, *backups);
@@ -118,23 +116,21 @@
void at91_irq_resume(void)
{
- int i = 0, bit;
+ int bit = -1;
if (has_aic5()) {
/* disable wakeup irqs */
- while ((bit = find_next_bit(wakeups, n_irqs, i)) < n_irqs) {
+ while ((bit = find_next_bit(wakeups, n_irqs, bit + 1)) < n_irqs) {
at91_aic_write(AT91_AIC5_SSR,
bit & AT91_AIC5_INTSEL_MSK);
at91_aic_write(AT91_AIC5_IDCR, 1);
- i = bit;
}
/* enable irqs disabled for suspend */
- i = 0;
- while ((bit = find_next_bit(backups, n_irqs, i)) < n_irqs) {
+ bit = -1;
+ while ((bit = find_next_bit(backups, n_irqs, bit + 1)) < n_irqs) {
at91_aic_write(AT91_AIC5_SSR,
bit & AT91_AIC5_INTSEL_MSK);
at91_aic_write(AT91_AIC5_IECR, 1);
- i = bit;
}
} else {
at91_aic_write(AT91_AIC_IDCR, *wakeups);
diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
index adb6db8..73f1f25 100644
--- a/arch/arm/mach-at91/pm.c
+++ b/arch/arm/mach-at91/pm.c
@@ -201,7 +201,10 @@
static int at91_pm_enter(suspend_state_t state)
{
- at91_gpio_suspend();
+ if (of_have_populated_dt())
+ at91_pinctrl_gpio_suspend();
+ else
+ at91_gpio_suspend();
at91_irq_suspend();
pr_debug("AT91: PM - wake mask %08x, pm state %d\n",
@@ -286,7 +289,10 @@
error:
target_state = PM_SUSPEND_ON;
at91_irq_resume();
- at91_gpio_resume();
+ if (of_have_populated_dt())
+ at91_pinctrl_gpio_resume();
+ else
+ at91_gpio_resume();
return 0;
}
diff --git a/arch/arm/mach-davinci/dma.c b/arch/arm/mach-davinci/dma.c
index a685e97..45b7c71 100644
--- a/arch/arm/mach-davinci/dma.c
+++ b/arch/arm/mach-davinci/dma.c
@@ -743,6 +743,9 @@
*/
int edma_alloc_slot(unsigned ctlr, int slot)
{
+ if (!edma_cc[ctlr])
+ return -EINVAL;
+
if (slot >= 0)
slot = EDMA_CHAN_SLOT(slot);
diff --git a/arch/arm/mach-footbridge/Kconfig b/arch/arm/mach-footbridge/Kconfig
index abda5a1..0f2111a 100644
--- a/arch/arm/mach-footbridge/Kconfig
+++ b/arch/arm/mach-footbridge/Kconfig
@@ -67,6 +67,7 @@
select ISA
select ISA_DMA
select PCI
+ select VIRT_TO_BUS
help
Say Y here if you intend to run this kernel on the Rebel.COM
NetWinder. Information about this machine can be found at:
diff --git a/arch/arm/mach-imx/clk-imx35.c b/arch/arm/mach-imx/clk-imx35.c
index 74e3a34..e13a8fa 100644
--- a/arch/arm/mach-imx/clk-imx35.c
+++ b/arch/arm/mach-imx/clk-imx35.c
@@ -264,6 +264,7 @@
clk_prepare_enable(clk[gpio3_gate]);
clk_prepare_enable(clk[iim_gate]);
clk_prepare_enable(clk[emi_gate]);
+ clk_prepare_enable(clk[max_gate]);
/*
* SCC is needed to boot via mmc after a watchdog reset. The clock code
diff --git a/arch/arm/mach-imx/imx25-dt.c b/arch/arm/mach-imx/imx25-dt.c
index 03b65e5..8234839 100644
--- a/arch/arm/mach-imx/imx25-dt.c
+++ b/arch/arm/mach-imx/imx25-dt.c
@@ -27,6 +27,11 @@
NULL
};
+static void __init imx25_timer_init(void)
+{
+ mx25_clocks_init_dt();
+}
+
DT_MACHINE_START(IMX25_DT, "Freescale i.MX25 (Device Tree Support)")
.map_io = mx25_map_io,
.init_early = imx25_init_early,
diff --git a/arch/arm/mach-mmp/gplugd.c b/arch/arm/mach-mmp/gplugd.c
index d1e2d59..f62b68d 100644
--- a/arch/arm/mach-mmp/gplugd.c
+++ b/arch/arm/mach-mmp/gplugd.c
@@ -9,6 +9,7 @@
*/
#include <linux/init.h>
+#include <linux/platform_device.h>
#include <linux/gpio.h>
#include <asm/mach/arch.h>
diff --git a/arch/arm/mach-mxs/mach-mxs.c b/arch/arm/mach-mxs/mach-mxs.c
index 3218f1f..e7b781d 100644
--- a/arch/arm/mach-mxs/mach-mxs.c
+++ b/arch/arm/mach-mxs/mach-mxs.c
@@ -41,8 +41,6 @@
.lower_margin = 4,
.hsync_len = 1,
.vsync_len = 1,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
},
};
@@ -59,8 +57,6 @@
.lower_margin = 10,
.hsync_len = 10,
.vsync_len = 10,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
},
};
@@ -77,7 +73,6 @@
.lower_margin = 45,
.hsync_len = 1,
.vsync_len = 1,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT,
},
};
@@ -94,9 +89,7 @@
.lower_margin = 13,
.hsync_len = 48,
.vsync_len = 3,
- .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT |
- FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
},
};
@@ -113,9 +106,7 @@
.lower_margin = 0x15,
.hsync_len = 64,
.vsync_len = 4,
- .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT |
- FB_SYNC_DATA_ENABLE_HIGH_ACT |
- FB_SYNC_DOTCLK_FAILING_ACT,
+ .sync = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
},
};
@@ -132,7 +123,6 @@
.lower_margin = 2,
.hsync_len = 15,
.vsync_len = 15,
- .sync = FB_SYNC_DATA_ENABLE_HIGH_ACT
},
};
@@ -259,6 +249,8 @@
mxsfb_pdata.mode_count = ARRAY_SIZE(mx23evk_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
static inline void enable_clk_enet_out(void)
@@ -278,6 +270,8 @@
mxsfb_pdata.mode_count = ARRAY_SIZE(mx28evk_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
mxs_saif_clkmux_select(MXS_DIGCTL_SAIF_CLKMUX_EXTMSTR0);
}
@@ -297,6 +291,7 @@
mxsfb_pdata.mode_count = ARRAY_SIZE(m28evk_video_modes);
mxsfb_pdata.default_bpp = 16;
mxsfb_pdata.ld_intf_width = STMLCDIF_18BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT;
}
static void __init sc_sps1_init(void)
@@ -322,6 +317,8 @@
mxsfb_pdata.mode_count = ARRAY_SIZE(apx4devkit_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_24BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
#define ENET0_MDC__GPIO_4_0 MXS_GPIO_NR(4, 0)
@@ -407,6 +404,7 @@
mxsfb_pdata.mode_count = ARRAY_SIZE(cfa10049_video_modes);
mxsfb_pdata.default_bpp = 32;
mxsfb_pdata.ld_intf_width = STMLCDIF_18BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT;
}
static void __init cfa10037_init(void)
@@ -423,6 +421,8 @@
mxsfb_pdata.mode_count = ARRAY_SIZE(apf28dev_video_modes);
mxsfb_pdata.default_bpp = 16;
mxsfb_pdata.ld_intf_width = STMLCDIF_16BIT;
+ mxsfb_pdata.sync = MXSFB_SYNC_DATA_ENABLE_HIGH_ACT |
+ MXSFB_SYNC_DOTCLK_FAILING_ACT;
}
static void __init mxs_machine_init(void)
diff --git a/arch/arm/mach-s5pv210/clock.c b/arch/arm/mach-s5pv210/clock.c
index fcdf52d..f051f53 100644
--- a/arch/arm/mach-s5pv210/clock.c
+++ b/arch/arm/mach-s5pv210/clock.c
@@ -214,11 +214,6 @@
.name = "pcmcdclk",
};
-static struct clk dummy_apb_pclk = {
- .name = "apb_pclk",
- .id = -1,
-};
-
static struct clk *clkset_vpllsrc_list[] = {
[0] = &clk_fin_vpll,
[1] = &clk_sclk_hdmi27m,
@@ -305,18 +300,6 @@
static struct clk init_clocks_off[] = {
{
- .name = "dma",
- .devname = "dma-pl330.0",
- .parent = &clk_hclk_psys.clk,
- .enable = s5pv210_clk_ip0_ctrl,
- .ctrlbit = (1 << 3),
- }, {
- .name = "dma",
- .devname = "dma-pl330.1",
- .parent = &clk_hclk_psys.clk,
- .enable = s5pv210_clk_ip0_ctrl,
- .ctrlbit = (1 << 4),
- }, {
.name = "rot",
.parent = &clk_hclk_dsys.clk,
.enable = s5pv210_clk_ip0_ctrl,
@@ -573,6 +556,20 @@
.ctrlbit = (1<<19),
};
+static struct clk clk_pdma0 = {
+ .name = "pdma0",
+ .parent = &clk_hclk_psys.clk,
+ .enable = s5pv210_clk_ip0_ctrl,
+ .ctrlbit = (1 << 3),
+};
+
+static struct clk clk_pdma1 = {
+ .name = "pdma1",
+ .parent = &clk_hclk_psys.clk,
+ .enable = s5pv210_clk_ip0_ctrl,
+ .ctrlbit = (1 << 4),
+};
+
static struct clk *clkset_uart_list[] = {
[6] = &clk_mout_mpll.clk,
[7] = &clk_mout_epll.clk,
@@ -1075,6 +1072,8 @@
&clk_hsmmc1,
&clk_hsmmc2,
&clk_hsmmc3,
+ &clk_pdma0,
+ &clk_pdma1,
};
/* Clock initialisation code */
@@ -1333,6 +1332,8 @@
CLKDEV_INIT(NULL, "spi_busclk0", &clk_p),
CLKDEV_INIT("s5pv210-spi.0", "spi_busclk1", &clk_sclk_spi0.clk),
CLKDEV_INIT("s5pv210-spi.1", "spi_busclk1", &clk_sclk_spi1.clk),
+ CLKDEV_INIT("dma-pl330.0", "apb_pclk", &clk_pdma0),
+ CLKDEV_INIT("dma-pl330.1", "apb_pclk", &clk_pdma1),
};
void __init s5pv210_register_clocks(void)
@@ -1361,6 +1362,5 @@
for (ptr = 0; ptr < ARRAY_SIZE(clk_cdev); ptr++)
s3c_disable_clocks(clk_cdev[ptr], 1);
- s3c24xx_register_clock(&dummy_apb_pclk);
s3c_pwmclk_init();
}
diff --git a/arch/arm/mach-s5pv210/mach-goni.c b/arch/arm/mach-s5pv210/mach-goni.c
index 3a38f7b..e373de4 100644
--- a/arch/arm/mach-s5pv210/mach-goni.c
+++ b/arch/arm/mach-s5pv210/mach-goni.c
@@ -845,7 +845,7 @@
.mux_id = 0,
.flags = V4L2_MBUS_PCLK_SAMPLE_FALLING |
V4L2_MBUS_VSYNC_ACTIVE_LOW,
- .bus_type = FIMC_BUS_TYPE_ITU_601,
+ .fimc_bus_type = FIMC_BUS_TYPE_ITU_601,
.board_info = &noon010pc30_board_info,
.i2c_bus_num = 0,
.clk_frequency = 16000000UL,
diff --git a/arch/arm/mach-shmobile/board-marzen.c b/arch/arm/mach-shmobile/board-marzen.c
index cdcb799..fec49ebc 100644
--- a/arch/arm/mach-shmobile/board-marzen.c
+++ b/arch/arm/mach-shmobile/board-marzen.c
@@ -32,6 +32,7 @@
#include <linux/smsc911x.h>
#include <linux/spi/spi.h>
#include <linux/spi/sh_hspi.h>
+#include <linux/mmc/host.h>
#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/mfd/tmio.h>
#include <linux/usb/otg.h>
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6828ef6..a0bd8a7 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -576,7 +576,7 @@
/* x = ((*(frame + k)) & 0xf) << 2; */
ctx->seen |= SEEN_X | SEEN_DATA | SEEN_CALL;
/* the interpreter should deal with the negative K */
- if (k < 0)
+ if ((int)k < 0)
return -1;
/* offset in r1: we might have to take the slow path */
emit_mov_i(r_off, k, ctx);
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fd70a68..9b6d19f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -9,7 +9,6 @@
select CLONE_BACKWARDS
select COMMON_CLK
select GENERIC_CLOCKEVENTS
- select GENERIC_HARDIRQS_NO_DEPRECATED
select GENERIC_IOMAP
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index 5149343..1a6bfe9 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -6,17 +6,6 @@
bool
default y
-config DEBUG_ERRORS
- bool "Verbose kernel error messages"
- depends on DEBUG_KERNEL
- help
- This option controls verbose debugging information which can be
- printed when the kernel detects an internal error. This debugging
- information is useful to kernel hackers when tracking down problems,
- but mostly meaningless to other people. It's safe to say Y unless
- you are concerned with the code size or don't want to see these
- messages.
-
config DEBUG_STACK_USAGE
bool "Enable stack utilization instrumentation"
depends on DEBUG_KERNEL
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 9212c78..09bef29 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -82,4 +82,3 @@
CONFIG_DEBUG_INFO=y
# CONFIG_FTRACE is not set
CONFIG_ATOMIC64_SELFTEST=y
-CONFIG_DEBUG_ERRORS=y
diff --git a/arch/arm64/include/asm/ucontext.h b/arch/arm64/include/asm/ucontext.h
index bde9607..42e04c8 100644
--- a/arch/arm64/include/asm/ucontext.h
+++ b/arch/arm64/include/asm/ucontext.h
@@ -22,7 +22,7 @@
stack_t uc_stack;
sigset_t uc_sigmask;
/* glibc uses a 1024-bit sigset_t */
- __u8 __unused[(1024 - sizeof(sigset_t)) / 8];
+ __u8 __unused[1024 / 8 - sizeof(sigset_t)];
/* last for future expansion */
struct sigcontext uc_mcontext;
};
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index cef3925..aa3e948 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -40,7 +40,9 @@
EXPORT_SYMBOL(__clear_user);
/* bitops */
+#ifdef CONFIG_SMP
EXPORT_SYMBOL(__atomic_hash);
+#endif
/* physical memory */
EXPORT_SYMBOL(memstart_addr);
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 7f4f367..e393174 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -549,7 +549,6 @@
sigset_t *set, struct pt_regs *regs)
{
struct compat_rt_sigframe __user *frame;
- compat_stack_t stack;
int err = 0;
frame = compat_get_sigframe(ka, regs, sizeof(*frame));
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 8082151..ea5bb04 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -90,6 +90,7 @@
config PPC
bool
default y
+ select BINFMT_ELF
select OF
select OF_EARLY_FLATTREE
select HAVE_FTRACE_MCOUNT_RECORD
diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
index 2fdb47a..b59e06f 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -343,17 +343,16 @@
/*
* VSID allocation (256MB segment)
*
- * We first generate a 38-bit "proto-VSID". For kernel addresses this
- * is equal to the ESID | 1 << 37, for user addresses it is:
- * (context << USER_ESID_BITS) | (esid & ((1U << USER_ESID_BITS) - 1)
+ * We first generate a 37-bit "proto-VSID". Proto-VSIDs are generated
+ * from mmu context id and effective segment id of the address.
*
- * This splits the proto-VSID into the below range
- * 0 - (2^(CONTEXT_BITS + USER_ESID_BITS) - 1) : User proto-VSID range
- * 2^(CONTEXT_BITS + USER_ESID_BITS) - 2^(VSID_BITS) : Kernel proto-VSID range
- *
- * We also have CONTEXT_BITS + USER_ESID_BITS = VSID_BITS - 1
- * That is, we assign half of the space to user processes and half
- * to the kernel.
+ * For user processes max context id is limited to ((1ul << 19) - 5)
+ * for kernel space, we use the top 4 context ids to map address as below
+ * NOTE: each context only support 64TB now.
+ * 0x7fffc - [ 0xc000000000000000 - 0xc0003fffffffffff ]
+ * 0x7fffd - [ 0xd000000000000000 - 0xd0003fffffffffff ]
+ * 0x7fffe - [ 0xe000000000000000 - 0xe0003fffffffffff ]
+ * 0x7ffff - [ 0xf000000000000000 - 0xf0003fffffffffff ]
*
* The proto-VSIDs are then scrambled into real VSIDs with the
* multiplicative hash:
@@ -363,41 +362,49 @@
* VSID_MULTIPLIER is prime, so in particular it is
* co-prime to VSID_MODULUS, making this a 1:1 scrambling function.
* Because the modulus is 2^n-1 we can compute it efficiently without
- * a divide or extra multiply (see below).
+ * a divide or extra multiply (see below). The scramble function gives
+ * robust scattering in the hash table (at least based on some initial
+ * results).
*
- * This scheme has several advantages over older methods:
+ * We also consider VSID 0 special. We use VSID 0 for slb entries mapping
+ * bad address. This enables us to consolidate bad address handling in
+ * hash_page.
*
- * - We have VSIDs allocated for every kernel address
- * (i.e. everything above 0xC000000000000000), except the very top
- * segment, which simplifies several things.
- *
- * - We allow for USER_ESID_BITS significant bits of ESID and
- * CONTEXT_BITS bits of context for user addresses.
- * i.e. 64T (46 bits) of address space for up to half a million contexts.
- *
- * - The scramble function gives robust scattering in the hash
- * table (at least based on some initial results). The previous
- * method was more susceptible to pathological cases giving excessive
- * hash collisions.
+ * We also need to avoid the last segment of the last context, because that
+ * would give a protovsid of 0x1fffffffff. That will result in a VSID 0
+ * because of the modulo operation in vsid scramble. But the vmemmap
+ * (which is what uses region 0xf) will never be close to 64TB in size
+ * (it's 56 bytes per page of system memory).
*/
+#define CONTEXT_BITS 19
+#define ESID_BITS 18
+#define ESID_BITS_1T 6
+
+/*
+ * 256MB segment
+ * The proto-VSID space has 2^(CONTEX_BITS + ESID_BITS) - 1 segments
+ * available for user + kernel mapping. The top 4 contexts are used for
+ * kernel mapping. Each segment contains 2^28 bytes. Each
+ * context maps 2^46 bytes (64TB) so we can support 2^19-1 contexts
+ * (19 == 37 + 28 - 46).
+ */
+#define MAX_USER_CONTEXT ((ASM_CONST(1) << CONTEXT_BITS) - 5)
+
/*
* This should be computed such that protovosid * vsid_mulitplier
* doesn't overflow 64 bits. It should also be co-prime to vsid_modulus
*/
#define VSID_MULTIPLIER_256M ASM_CONST(12538073) /* 24-bit prime */
-#define VSID_BITS_256M 38
+#define VSID_BITS_256M (CONTEXT_BITS + ESID_BITS)
#define VSID_MODULUS_256M ((1UL<<VSID_BITS_256M)-1)
#define VSID_MULTIPLIER_1T ASM_CONST(12538073) /* 24-bit prime */
-#define VSID_BITS_1T 26
+#define VSID_BITS_1T (CONTEXT_BITS + ESID_BITS_1T)
#define VSID_MODULUS_1T ((1UL<<VSID_BITS_1T)-1)
-#define CONTEXT_BITS 19
-#define USER_ESID_BITS 18
-#define USER_ESID_BITS_1T 6
-#define USER_VSID_RANGE (1UL << (USER_ESID_BITS + SID_SHIFT))
+#define USER_VSID_RANGE (1UL << (ESID_BITS + SID_SHIFT))
/*
* This macro generates asm code to compute the VSID scramble
@@ -421,7 +428,8 @@
srdi rx,rt,VSID_BITS_##size; \
clrldi rt,rt,(64-VSID_BITS_##size); \
add rt,rt,rx; /* add high and low bits */ \
- /* Now, r3 == VSID (mod 2^36-1), and lies between 0 and \
+ /* NOTE: explanation based on VSID_BITS_##size = 36 \
+ * Now, r3 == VSID (mod 2^36-1), and lies between 0 and \
* 2^36-1+2^28-1. That in particular means that if r3 >= \
* 2^36-1, then r3+1 has the 2^36 bit set. So, if r3+1 has \
* the bit clear, r3 already has the answer we want, if it \
@@ -513,34 +521,6 @@
})
#endif /* 1 */
-/*
- * This is only valid for addresses >= PAGE_OFFSET
- * The proto-VSID space is divided into two class
- * User: 0 to 2^(CONTEXT_BITS + USER_ESID_BITS) -1
- * kernel: 2^(CONTEXT_BITS + USER_ESID_BITS) to 2^(VSID_BITS) - 1
- *
- * With KERNEL_START at 0xc000000000000000, the proto vsid for
- * the kernel ends up with 0xc00000000 (36 bits). With 64TB
- * support we need to have kernel proto-VSID in the
- * [2^37 to 2^38 - 1] range due to the increased USER_ESID_BITS.
- */
-static inline unsigned long get_kernel_vsid(unsigned long ea, int ssize)
-{
- unsigned long proto_vsid;
- /*
- * We need to make sure proto_vsid for the kernel is
- * >= 2^(CONTEXT_BITS + USER_ESID_BITS[_1T])
- */
- if (ssize == MMU_SEGSIZE_256M) {
- proto_vsid = ea >> SID_SHIFT;
- proto_vsid |= (1UL << (CONTEXT_BITS + USER_ESID_BITS));
- return vsid_scramble(proto_vsid, 256M);
- }
- proto_vsid = ea >> SID_SHIFT_1T;
- proto_vsid |= (1UL << (CONTEXT_BITS + USER_ESID_BITS_1T));
- return vsid_scramble(proto_vsid, 1T);
-}
-
/* Returns the segment size indicator for a user address */
static inline int user_segment_size(unsigned long addr)
{
@@ -550,17 +530,41 @@
return MMU_SEGSIZE_256M;
}
-/* This is only valid for user addresses (which are below 2^44) */
static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
int ssize)
{
+ /*
+ * Bad address. We return VSID 0 for that
+ */
+ if ((ea & ~REGION_MASK) >= PGTABLE_RANGE)
+ return 0;
+
if (ssize == MMU_SEGSIZE_256M)
- return vsid_scramble((context << USER_ESID_BITS)
+ return vsid_scramble((context << ESID_BITS)
| (ea >> SID_SHIFT), 256M);
- return vsid_scramble((context << USER_ESID_BITS_1T)
+ return vsid_scramble((context << ESID_BITS_1T)
| (ea >> SID_SHIFT_1T), 1T);
}
+/*
+ * This is only valid for addresses >= PAGE_OFFSET
+ *
+ * For kernel space, we use the top 4 context ids to map address as below
+ * 0x7fffc - [ 0xc000000000000000 - 0xc0003fffffffffff ]
+ * 0x7fffd - [ 0xd000000000000000 - 0xd0003fffffffffff ]
+ * 0x7fffe - [ 0xe000000000000000 - 0xe0003fffffffffff ]
+ * 0x7ffff - [ 0xf000000000000000 - 0xf0003fffffffffff ]
+ */
+static inline unsigned long get_kernel_vsid(unsigned long ea, int ssize)
+{
+ unsigned long context;
+
+ /*
+ * kernel take the top 4 context from the available range
+ */
+ context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1;
+ return get_vsid(context, ea, ssize);
+}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_MMU_HASH64_H_ */
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index 75a3d71..19599ef 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -275,7 +275,7 @@
.cpu_features = CPU_FTRS_PPC970,
.cpu_user_features = COMMON_USER_POWER4 |
PPC_FEATURE_HAS_ALTIVEC_COMP,
- .mmu_features = MMU_FTR_HPTE_TABLE,
+ .mmu_features = MMU_FTRS_PPC970,
.icache_bsize = 128,
.dcache_bsize = 128,
.num_pmcs = 8,
diff --git a/arch/powerpc/kernel/epapr_paravirt.c b/arch/powerpc/kernel/epapr_paravirt.c
index f3eab85..d44a571 100644
--- a/arch/powerpc/kernel/epapr_paravirt.c
+++ b/arch/powerpc/kernel/epapr_paravirt.c
@@ -23,8 +23,10 @@
#include <asm/code-patching.h>
#include <asm/machdep.h>
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
extern void epapr_ev_idle(void);
extern u32 epapr_ev_idle_start[];
+#endif
bool epapr_paravirt_enabled;
@@ -47,11 +49,15 @@
for (i = 0; i < (len / 4); i++) {
patch_instruction(epapr_hypercall_start + i, insts[i]);
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
patch_instruction(epapr_ev_idle_start + i, insts[i]);
+#endif
}
+#if !defined(CONFIG_64BIT) || defined(CONFIG_PPC_BOOK3E_64)
if (of_get_property(hyper_node, "has-idle", NULL))
ppc_md.power_save = epapr_ev_idle;
+#endif
epapr_paravirt_enabled = true;
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 87ef8f5..56bd923 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1066,78 +1066,6 @@
#endif /* __DISABLED__ */
-/*
- * r13 points to the PACA, r9 contains the saved CR,
- * r12 contain the saved SRR1, SRR0 is still ready for return
- * r3 has the faulting address
- * r9 - r13 are saved in paca->exslb.
- * r3 is saved in paca->slb_r3
- * We assume we aren't going to take any exceptions during this procedure.
- */
-_GLOBAL(slb_miss_realmode)
- mflr r10
-#ifdef CONFIG_RELOCATABLE
- mtctr r11
-#endif
-
- stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
- std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
-
- bl .slb_allocate_realmode
-
- /* All done -- return from exception. */
-
- ld r10,PACA_EXSLB+EX_LR(r13)
- ld r3,PACA_EXSLB+EX_R3(r13)
- lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
-
- mtlr r10
-
- andi. r10,r12,MSR_RI /* check for unrecoverable exception */
- beq- 2f
-
-.machine push
-.machine "power4"
- mtcrf 0x80,r9
- mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
-.machine pop
-
- RESTORE_PPR_PACA(PACA_EXSLB, r9)
- ld r9,PACA_EXSLB+EX_R9(r13)
- ld r10,PACA_EXSLB+EX_R10(r13)
- ld r11,PACA_EXSLB+EX_R11(r13)
- ld r12,PACA_EXSLB+EX_R12(r13)
- ld r13,PACA_EXSLB+EX_R13(r13)
- rfid
- b . /* prevent speculative execution */
-
-2: mfspr r11,SPRN_SRR0
- ld r10,PACAKBASE(r13)
- LOAD_HANDLER(r10,unrecov_slb)
- mtspr SPRN_SRR0,r10
- ld r10,PACAKMSR(r13)
- mtspr SPRN_SRR1,r10
- rfid
- b .
-
-unrecov_slb:
- EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
- DISABLE_INTS
- bl .save_nvgprs
-1: addi r3,r1,STACK_FRAME_OVERHEAD
- bl .unrecoverable_exception
- b 1b
-
-
-#ifdef CONFIG_PPC_970_NAP
-power4_fixup_nap:
- andc r9,r9,r10
- std r9,TI_LOCAL_FLAGS(r11)
- ld r10,_LINK(r1) /* make idle task do the */
- std r10,_NIP(r1) /* equivalent of a blr */
- blr
-#endif
-
.align 7
.globl alignment_common
alignment_common:
@@ -1336,6 +1264,78 @@
/*
+ * r13 points to the PACA, r9 contains the saved CR,
+ * r12 contain the saved SRR1, SRR0 is still ready for return
+ * r3 has the faulting address
+ * r9 - r13 are saved in paca->exslb.
+ * r3 is saved in paca->slb_r3
+ * We assume we aren't going to take any exceptions during this procedure.
+ */
+_GLOBAL(slb_miss_realmode)
+ mflr r10
+#ifdef CONFIG_RELOCATABLE
+ mtctr r11
+#endif
+
+ stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
+ std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
+
+ bl .slb_allocate_realmode
+
+ /* All done -- return from exception. */
+
+ ld r10,PACA_EXSLB+EX_LR(r13)
+ ld r3,PACA_EXSLB+EX_R3(r13)
+ lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
+
+ mtlr r10
+
+ andi. r10,r12,MSR_RI /* check for unrecoverable exception */
+ beq- 2f
+
+.machine push
+.machine "power4"
+ mtcrf 0x80,r9
+ mtcrf 0x01,r9 /* slb_allocate uses cr0 and cr7 */
+.machine pop
+
+ RESTORE_PPR_PACA(PACA_EXSLB, r9)
+ ld r9,PACA_EXSLB+EX_R9(r13)
+ ld r10,PACA_EXSLB+EX_R10(r13)
+ ld r11,PACA_EXSLB+EX_R11(r13)
+ ld r12,PACA_EXSLB+EX_R12(r13)
+ ld r13,PACA_EXSLB+EX_R13(r13)
+ rfid
+ b . /* prevent speculative execution */
+
+2: mfspr r11,SPRN_SRR0
+ ld r10,PACAKBASE(r13)
+ LOAD_HANDLER(r10,unrecov_slb)
+ mtspr SPRN_SRR0,r10
+ ld r10,PACAKMSR(r13)
+ mtspr SPRN_SRR1,r10
+ rfid
+ b .
+
+unrecov_slb:
+ EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
+ DISABLE_INTS
+ bl .save_nvgprs
+1: addi r3,r1,STACK_FRAME_OVERHEAD
+ bl .unrecoverable_exception
+ b 1b
+
+
+#ifdef CONFIG_PPC_970_NAP
+power4_fixup_nap:
+ andc r9,r9,r10
+ std r9,TI_LOCAL_FLAGS(r11)
+ ld r10,_LINK(r1) /* make idle task do the */
+ std r10,_NIP(r1) /* equivalent of a blr */
+ blr
+#endif
+
+/*
* Hash table stuff
*/
.align 7
@@ -1452,20 +1452,36 @@
_GLOBAL(do_stab_bolted)
stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
std r11,PACA_EXSLB+EX_SRR0(r13) /* save SRR0 in exc. frame */
+ mfspr r11,SPRN_DAR /* ea */
+ /*
+ * check for bad kernel/user address
+ * (ea & ~REGION_MASK) >= PGTABLE_RANGE
+ */
+ rldicr. r9,r11,4,(63 - 46 - 4)
+ li r9,0 /* VSID = 0 for bad address */
+ bne- 0f
+
+ /*
+ * Calculate VSID:
+ * This is the kernel vsid, we take the top for context from
+ * the range. context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1
+ * Here we know that (ea >> 60) == 0xc
+ */
+ lis r9,(MAX_USER_CONTEXT + 1)@ha
+ addi r9,r9,(MAX_USER_CONTEXT + 1)@l
+
+ srdi r10,r11,SID_SHIFT
+ rldimi r10,r9,ESID_BITS,0 /* proto vsid */
+ ASM_VSID_SCRAMBLE(r10, r9, 256M)
+ rldic r9,r10,12,16 /* r9 = vsid << 12 */
+
+0:
/* Hash to the primary group */
ld r10,PACASTABVIRT(r13)
- mfspr r11,SPRN_DAR
- srdi r11,r11,28
+ srdi r11,r11,SID_SHIFT
rldimi r10,r11,7,52 /* r10 = first ste of the group */
- /* Calculate VSID */
- /* This is a kernel address, so protovsid = ESID | 1 << 37 */
- li r9,0x1
- rldimi r11,r9,(CONTEXT_BITS + USER_ESID_BITS),0
- ASM_VSID_SCRAMBLE(r11, r9, 256M)
- rldic r9,r11,12,16 /* r9 = vsid << 12 */
-
/* Search the primary group for a free entry */
1: ld r11,0(r10) /* Test valid bit of the current ste */
andi. r11,r11,0x80
diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index 7f7fb7f..13f8d16 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -2832,11 +2832,13 @@
{
}
#else
-static void __reloc_toc(void *tocstart, unsigned long offset,
- unsigned long nr_entries)
+static void __reloc_toc(unsigned long offset, unsigned long nr_entries)
{
unsigned long i;
- unsigned long *toc_entry = (unsigned long *)tocstart;
+ unsigned long *toc_entry;
+
+ /* Get the start of the TOC by using r2 directly. */
+ asm volatile("addi %0,2,-0x8000" : "=b" (toc_entry));
for (i = 0; i < nr_entries; i++) {
*toc_entry = *toc_entry + offset;
@@ -2850,8 +2852,7 @@
unsigned long nr_entries =
(__prom_init_toc_end - __prom_init_toc_start) / sizeof(long);
- /* Need to add offset to get at __prom_init_toc_start */
- __reloc_toc(__prom_init_toc_start + offset, offset, nr_entries);
+ __reloc_toc(offset, nr_entries);
mb();
}
@@ -2864,8 +2865,7 @@
mb();
- /* __prom_init_toc_start has been relocated, no need to add offset */
- __reloc_toc(__prom_init_toc_start, -offset, nr_entries);
+ __reloc_toc(-offset, nr_entries);
}
#endif
#endif
diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
index 245c1b6..f9b30c6 100644
--- a/arch/powerpc/kernel/ptrace.c
+++ b/arch/powerpc/kernel/ptrace.c
@@ -1428,6 +1428,7 @@
brk.address = bp_info->addr & ~7UL;
brk.type = HW_BRK_TYPE_TRANSLATE;
+ brk.len = 8;
if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_READ)
brk.type |= HW_BRK_TYPE_READ;
if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_WRITE)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_host.c b/arch/powerpc/kvm/book3s_64_mmu_host.c
index ead58e3..5d7d29a 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_host.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_host.c
@@ -326,8 +326,8 @@
vcpu3s->context_id[0] = err;
vcpu3s->proto_vsid_max = ((vcpu3s->context_id[0] + 1)
- << USER_ESID_BITS) - 1;
- vcpu3s->proto_vsid_first = vcpu3s->context_id[0] << USER_ESID_BITS;
+ << ESID_BITS) - 1;
+ vcpu3s->proto_vsid_first = vcpu3s->context_id[0] << ESID_BITS;
vcpu3s->proto_vsid_next = vcpu3s->proto_vsid_first;
kvmppc_mmu_hpte_init(vcpu);
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 1b6e127..f410c3e 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -195,6 +195,11 @@
unsigned long vpn = hpt_vpn(vaddr, vsid, ssize);
unsigned long tprot = prot;
+ /*
+ * If we hit a bad address return error.
+ */
+ if (!vsid)
+ return -1;
/* Make kernel text executable */
if (overlaps_kernel_text(vaddr, vaddr + step))
tprot &= ~HPTE_R_N;
@@ -759,6 +764,8 @@
/* Initialize stab / SLB management */
if (mmu_has_feature(MMU_FTR_SLB))
slb_initialize();
+ else
+ stab_initialize(get_paca()->stab_real);
}
#ifdef CONFIG_SMP
@@ -922,11 +929,6 @@
DBG_LOW("hash_page(ea=%016lx, access=%lx, trap=%lx\n",
ea, access, trap);
- if ((ea & ~REGION_MASK) >= PGTABLE_RANGE) {
- DBG_LOW(" out of pgtable range !\n");
- return 1;
- }
-
/* Get region & vsid */
switch (REGION_ID(ea)) {
case USER_REGION_ID:
@@ -957,6 +959,11 @@
}
DBG_LOW(" mm=%p, mm->pgdir=%p, vsid=%016lx\n", mm, mm->pgd, vsid);
+ /* Bad address. */
+ if (!vsid) {
+ DBG_LOW("Bad address!\n");
+ return 1;
+ }
/* Get pgdir */
pgdir = mm->pgd;
if (pgdir == NULL)
@@ -1126,6 +1133,8 @@
/* Get VSID */
ssize = user_segment_size(ea);
vsid = get_vsid(mm->context.id, ea, ssize);
+ if (!vsid)
+ return;
/* Hash doesn't like irqs */
local_irq_save(flags);
@@ -1233,6 +1242,9 @@
hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
hpteg = ((hash & htab_hash_mask) * HPTES_PER_GROUP);
+ /* Don't create HPTE entries for bad address */
+ if (!vsid)
+ return;
ret = ppc_md.hpte_insert(hpteg, vpn, __pa(vaddr),
mode, HPTE_V_BOLTED,
mmu_linear_psize, mmu_kernel_ssize);
diff --git a/arch/powerpc/mm/mmu_context_hash64.c b/arch/powerpc/mm/mmu_context_hash64.c
index 40bc5b0..d1d1b92 100644
--- a/arch/powerpc/mm/mmu_context_hash64.c
+++ b/arch/powerpc/mm/mmu_context_hash64.c
@@ -29,15 +29,6 @@
static DEFINE_SPINLOCK(mmu_context_lock);
static DEFINE_IDA(mmu_context_ida);
-/*
- * 256MB segment
- * The proto-VSID space has 2^(CONTEX_BITS + USER_ESID_BITS) - 1 segments
- * available for user mappings. Each segment contains 2^28 bytes. Each
- * context maps 2^46 bytes (64TB) so we can support 2^19-1 contexts
- * (19 == 37 + 28 - 46).
- */
-#define MAX_CONTEXT ((1UL << CONTEXT_BITS) - 1)
-
int __init_new_context(void)
{
int index;
@@ -56,7 +47,7 @@
else if (err)
return err;
- if (index > MAX_CONTEXT) {
+ if (index > MAX_USER_CONTEXT) {
spin_lock(&mmu_context_lock);
ida_remove(&mmu_context_ida, index);
spin_unlock(&mmu_context_lock);
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index e212a27..654258f 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -61,7 +61,7 @@
#endif
#ifdef CONFIG_PPC_STD_MMU_64
-#if TASK_SIZE_USER64 > (1UL << (USER_ESID_BITS + SID_SHIFT))
+#if TASK_SIZE_USER64 > (1UL << (ESID_BITS + SID_SHIFT))
#error TASK_SIZE_USER64 exceeds user VSID range
#endif
#endif
diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
index 1a16ca2..17aa6df 100644
--- a/arch/powerpc/mm/slb_low.S
+++ b/arch/powerpc/mm/slb_low.S
@@ -31,10 +31,15 @@
* No other registers are examined or changed.
*/
_GLOBAL(slb_allocate_realmode)
- /* r3 = faulting address */
+ /*
+ * check for bad kernel/user address
+ * (ea & ~REGION_MASK) >= PGTABLE_RANGE
+ */
+ rldicr. r9,r3,4,(63 - 46 - 4)
+ bne- 8f
srdi r9,r3,60 /* get region */
- srdi r10,r3,28 /* get esid */
+ srdi r10,r3,SID_SHIFT /* get esid */
cmpldi cr7,r9,0xc /* cmp PAGE_OFFSET for later use */
/* r3 = address, r10 = esid, cr7 = <> PAGE_OFFSET */
@@ -56,12 +61,14 @@
*/
_GLOBAL(slb_miss_kernel_load_linear)
li r11,0
- li r9,0x1
/*
- * for 1T we shift 12 bits more. slb_finish_load_1T will do
- * the necessary adjustment
+ * context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1
+ * r9 = region id.
*/
- rldimi r10,r9,(CONTEXT_BITS + USER_ESID_BITS),0
+ addis r9,r9,(MAX_USER_CONTEXT - 0xc + 1)@ha
+ addi r9,r9,(MAX_USER_CONTEXT - 0xc + 1)@l
+
+
BEGIN_FTR_SECTION
b slb_finish_load
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
@@ -91,24 +98,19 @@
_GLOBAL(slb_miss_kernel_load_io)
li r11,0
6:
- li r9,0x1
/*
- * for 1T we shift 12 bits more. slb_finish_load_1T will do
- * the necessary adjustment
+ * context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1
+ * r9 = region id.
*/
- rldimi r10,r9,(CONTEXT_BITS + USER_ESID_BITS),0
+ addis r9,r9,(MAX_USER_CONTEXT - 0xc + 1)@ha
+ addi r9,r9,(MAX_USER_CONTEXT - 0xc + 1)@l
+
BEGIN_FTR_SECTION
b slb_finish_load
END_MMU_FTR_SECTION_IFCLR(MMU_FTR_1T_SEGMENT)
b slb_finish_load_1T
-0: /* user address: proto-VSID = context << 15 | ESID. First check
- * if the address is within the boundaries of the user region
- */
- srdi. r9,r10,USER_ESID_BITS
- bne- 8f /* invalid ea bits set */
-
-
+0:
/* when using slices, we extract the psize off the slice bitmaps
* and then we need to get the sllp encoding off the mmu_psize_defs
* array.
@@ -164,15 +166,13 @@
ld r9,PACACONTEXTID(r13)
BEGIN_FTR_SECTION
cmpldi r10,0x1000
-END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
- rldimi r10,r9,USER_ESID_BITS,0
-BEGIN_FTR_SECTION
bge slb_finish_load_1T
END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
b slb_finish_load
8: /* invalid EA */
li r10,0 /* BAD_VSID */
+ li r9,0 /* BAD_VSID */
li r11,SLB_VSID_USER /* flags don't much matter */
b slb_finish_load
@@ -221,8 +221,6 @@
/* get context to calculate proto-VSID */
ld r9,PACACONTEXTID(r13)
- rldimi r10,r9,USER_ESID_BITS,0
-
/* fall through slb_finish_load */
#endif /* __DISABLED__ */
@@ -231,9 +229,10 @@
/*
* Finish loading of an SLB entry and return
*
- * r3 = EA, r10 = proto-VSID, r11 = flags, clobbers r9, cr7 = <> PAGE_OFFSET
+ * r3 = EA, r9 = context, r10 = ESID, r11 = flags, clobbers r9, cr7 = <> PAGE_OFFSET
*/
slb_finish_load:
+ rldimi r10,r9,ESID_BITS,0
ASM_VSID_SCRAMBLE(r10,r9,256M)
/*
* bits above VSID_BITS_256M need to be ignored from r10
@@ -298,10 +297,11 @@
/*
* Finish loading of a 1T SLB entry (for the kernel linear mapping) and return.
*
- * r3 = EA, r10 = proto-VSID, r11 = flags, clobbers r9
+ * r3 = EA, r9 = context, r10 = ESID(256MB), r11 = flags, clobbers r9
*/
slb_finish_load_1T:
- srdi r10,r10,40-28 /* get 1T ESID */
+ srdi r10,r10,(SID_SHIFT_1T - SID_SHIFT) /* get 1T ESID */
+ rldimi r10,r9,ESID_BITS_1T,0
ASM_VSID_SCRAMBLE(r10,r9,1T)
/*
* bits above VSID_BITS_1T need to be ignored from r10
diff --git a/arch/powerpc/mm/tlb_hash64.c b/arch/powerpc/mm/tlb_hash64.c
index 0d82ef5..023ec8a 100644
--- a/arch/powerpc/mm/tlb_hash64.c
+++ b/arch/powerpc/mm/tlb_hash64.c
@@ -82,11 +82,11 @@
if (!is_kernel_addr(addr)) {
ssize = user_segment_size(addr);
vsid = get_vsid(mm->context.id, addr, ssize);
- WARN_ON(vsid == 0);
} else {
vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
ssize = mmu_kernel_ssize;
}
+ WARN_ON(vsid == 0);
vpn = hpt_vpn(addr, vsid, ssize);
rpte = __real_pte(__pte(pte), ptep);
diff --git a/arch/powerpc/perf/power7-pmu.c b/arch/powerpc/perf/power7-pmu.c
index b554879..3c475d6 100644
--- a/arch/powerpc/perf/power7-pmu.c
+++ b/arch/powerpc/perf/power7-pmu.c
@@ -420,7 +420,20 @@
.attrs = power7_events_attr,
};
+PMU_FORMAT_ATTR(event, "config:0-19");
+
+static struct attribute *power7_pmu_format_attr[] = {
+ &format_attr_event.attr,
+ NULL,
+};
+
+struct attribute_group power7_pmu_format_group = {
+ .name = "format",
+ .attrs = power7_pmu_format_attr,
+};
+
static const struct attribute_group *power7_pmu_attr_groups[] = {
+ &power7_pmu_format_group,
&power7_pmu_events_group,
NULL,
};
diff --git a/arch/powerpc/platforms/85xx/sgy_cts1000.c b/arch/powerpc/platforms/85xx/sgy_cts1000.c
index 611e92f..7179726 100644
--- a/arch/powerpc/platforms/85xx/sgy_cts1000.c
+++ b/arch/powerpc/platforms/85xx/sgy_cts1000.c
@@ -69,7 +69,7 @@
return IRQ_HANDLED;
};
-static int __devinit gpio_halt_probe(struct platform_device *pdev)
+static int gpio_halt_probe(struct platform_device *pdev)
{
enum of_gpio_flags flags;
struct device_node *node = pdev->dev.of_node;
@@ -128,7 +128,7 @@
return 0;
}
-static int __devexit gpio_halt_remove(struct platform_device *pdev)
+static int gpio_halt_remove(struct platform_device *pdev)
{
if (halt_node) {
int gpio = of_get_gpio(halt_node, 0);
@@ -165,7 +165,7 @@
.of_match_table = gpio_halt_match,
},
.probe = gpio_halt_probe,
- .remove = __devexit_p(gpio_halt_remove),
+ .remove = gpio_halt_remove,
};
module_platform_driver(gpio_halt_driver);
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index cea2f09..18e3b76 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -124,9 +124,8 @@
select PPC_HAVE_PMU_SUPPORT
config POWER3
- bool
depends on PPC64 && PPC_BOOK3S
- default y if !POWER4_ONLY
+ def_bool y
config POWER4
depends on PPC64 && PPC_BOOK3S
@@ -145,8 +144,7 @@
but somewhat slower on other machines. This option only changes
the scheduling of instructions, not the selection of instructions
itself, so the resulting kernel will keep running on all other
- machines. When building a kernel that is supposed to run only
- on Cell, you should also select the POWER4_ONLY option.
+ machines.
# this is temp to handle compat with arch=ppc
config 8xx
diff --git a/arch/s390/include/asm/eadm.h b/arch/s390/include/asm/eadm.h
index 8d48471..dc9200c 100644
--- a/arch/s390/include/asm/eadm.h
+++ b/arch/s390/include/asm/eadm.h
@@ -34,6 +34,8 @@
u32 reserved[4];
} __packed;
+#define EQC_WR_PROHIBIT 22
+
struct msb {
u8 fmt:4;
u8 oc:4;
@@ -96,11 +98,13 @@
#define OP_STATE_TEMP_ERR 2
#define OP_STATE_PERM_ERR 3
+enum scm_event {SCM_CHANGE, SCM_AVAIL};
+
struct scm_driver {
struct device_driver drv;
int (*probe) (struct scm_device *scmdev);
int (*remove) (struct scm_device *scmdev);
- void (*notify) (struct scm_device *scmdev);
+ void (*notify) (struct scm_device *scmdev, enum scm_event event);
void (*handler) (struct scm_device *scmdev, void *data, int error);
};
diff --git a/arch/s390/include/asm/tlbflush.h b/arch/s390/include/asm/tlbflush.h
index 1d8fe2b..6b32af3 100644
--- a/arch/s390/include/asm/tlbflush.h
+++ b/arch/s390/include/asm/tlbflush.h
@@ -74,8 +74,6 @@
static inline void __tlb_flush_mm(struct mm_struct * mm)
{
- if (unlikely(cpumask_empty(mm_cpumask(mm))))
- return;
/*
* If the machine has IDTE we prefer to do a per mm flush
* on all cpus instead of doing a local flush if the mm
diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
index 5502285..94feff7 100644
--- a/arch/s390/kernel/entry.S
+++ b/arch/s390/kernel/entry.S
@@ -636,7 +636,8 @@
UPDATE_VTIME %r14,%r15,__LC_MCCK_ENTER_TIMER
mcck_skip:
SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+32,__LC_PANIC_STACK,PAGE_SHIFT
- mvc __PT_R0(64,%r11),__LC_GPREGS_SAVE_AREA
+ stm %r0,%r7,__PT_R0(%r11)
+ mvc __PT_R8(32,%r11),__LC_GPREGS_SAVE_AREA+32
stm %r8,%r9,__PT_PSW(%r11)
xc __SF_BACKCHAIN(4,%r15),__SF_BACKCHAIN(%r15)
l %r1,BASED(.Ldo_machine_check)
diff --git a/arch/s390/kernel/entry64.S b/arch/s390/kernel/entry64.S
index 9c837c1..2e6d60c 100644
--- a/arch/s390/kernel/entry64.S
+++ b/arch/s390/kernel/entry64.S
@@ -678,8 +678,9 @@
UPDATE_VTIME %r14,__LC_MCCK_ENTER_TIMER
LAST_BREAK %r14
mcck_skip:
- lghi %r14,__LC_GPREGS_SAVE_AREA
- mvc __PT_R0(128,%r11),0(%r14)
+ lghi %r14,__LC_GPREGS_SAVE_AREA+64
+ stmg %r0,%r7,__PT_R0(%r11)
+ mvc __PT_R8(64,%r11),0(%r14)
stmg %r8,%r9,__PT_PSW(%r11)
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
lgr %r2,%r11 # pass pointer to pt_regs
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index a5360de..2926885 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -571,6 +571,8 @@
/* Split remaining virtual space between 1:1 mapping & vmemmap array */
tmp = VMALLOC_START / (PAGE_SIZE + sizeof(struct page));
+ /* vmemmap contains a multiple of PAGES_PER_SECTION struct pages */
+ tmp = SECTION_ALIGN_UP(tmp);
tmp = VMALLOC_START - tmp * sizeof(struct page);
tmp &= ~((vmax >> 11) - 1); /* align to page table level */
tmp = min(tmp, 1UL << MAX_PHYSMEM_BITS);
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 289127d..3d361f2 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -84,12 +84,6 @@
default "arch/sparc/configs/sparc32_defconfig" if SPARC32
default "arch/sparc/configs/sparc64_defconfig" if SPARC64
-# CONFIG_BITS can be used at source level to get 32/64 bits
-config BITS
- int
- default 32 if SPARC32
- default 64 if SPARC64
-
config IOMMU_HELPER
bool
default y if SPARC64
@@ -197,7 +191,7 @@
config GENERIC_HWEIGHT
bool
- default y if !ULTRA_HAS_POPULATION_COUNT
+ default y
config GENERIC_CALIBRATE_DELAY
bool
diff --git a/arch/sparc/include/asm/spitfire.h b/arch/sparc/include/asm/spitfire.h
index d06a2660..6b67e50 100644
--- a/arch/sparc/include/asm/spitfire.h
+++ b/arch/sparc/include/asm/spitfire.h
@@ -45,6 +45,7 @@
#define SUN4V_CHIP_NIAGARA3 0x03
#define SUN4V_CHIP_NIAGARA4 0x04
#define SUN4V_CHIP_NIAGARA5 0x05
+#define SUN4V_CHIP_SPARC64X 0x8a
#define SUN4V_CHIP_UNKNOWN 0xff
#ifndef __ASSEMBLY__
diff --git a/arch/sparc/kernel/cpu.c b/arch/sparc/kernel/cpu.c
index a6c94a2..5c51258 100644
--- a/arch/sparc/kernel/cpu.c
+++ b/arch/sparc/kernel/cpu.c
@@ -493,6 +493,12 @@
sparc_pmu_type = "niagara5";
break;
+ case SUN4V_CHIP_SPARC64X:
+ sparc_cpu_type = "SPARC64-X";
+ sparc_fpu_type = "SPARC64-X integrated FPU";
+ sparc_pmu_type = "sparc64-x";
+ break;
+
default:
printk(KERN_WARNING "CPU: Unknown sun4v cpu type [%s]\n",
prom_cpu_compatible);
diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
index 2feb15c..26b706a 100644
--- a/arch/sparc/kernel/head_64.S
+++ b/arch/sparc/kernel/head_64.S
@@ -134,6 +134,8 @@
.asciz "SUNW,UltraSPARC-T"
prom_sparc_prefix:
.asciz "SPARC-"
+prom_sparc64x_prefix:
+ .asciz "SPARC64-X"
.align 4
prom_root_compatible:
.skip 64
@@ -412,7 +414,7 @@
cmp %g2, 'T'
be,pt %xcc, 70f
cmp %g2, 'M'
- bne,pn %xcc, 4f
+ bne,pn %xcc, 49f
nop
70: ldub [%g1 + 7], %g2
@@ -425,7 +427,7 @@
cmp %g2, '5'
be,pt %xcc, 5f
mov SUN4V_CHIP_NIAGARA5, %g4
- ba,pt %xcc, 4f
+ ba,pt %xcc, 49f
nop
91: sethi %hi(prom_cpu_compatible), %g1
@@ -439,6 +441,25 @@
mov SUN4V_CHIP_NIAGARA2, %g4
4:
+ /* Athena */
+ sethi %hi(prom_cpu_compatible), %g1
+ or %g1, %lo(prom_cpu_compatible), %g1
+ sethi %hi(prom_sparc64x_prefix), %g7
+ or %g7, %lo(prom_sparc64x_prefix), %g7
+ mov 9, %g3
+41: ldub [%g7], %g2
+ ldub [%g1], %g4
+ cmp %g2, %g4
+ bne,pn %icc, 49f
+ add %g7, 1, %g7
+ subcc %g3, 1, %g3
+ bne,pt %xcc, 41b
+ add %g1, 1, %g1
+ mov SUN4V_CHIP_SPARC64X, %g4
+ ba,pt %xcc, 5f
+ nop
+
+49:
mov SUN4V_CHIP_UNKNOWN, %g4
5: sethi %hi(sun4v_chip_type), %g2
or %g2, %lo(sun4v_chip_type), %g2
diff --git a/arch/sparc/kernel/leon_pci_grpci2.c b/arch/sparc/kernel/leon_pci_grpci2.c
index fc43208..4d1487138 100644
--- a/arch/sparc/kernel/leon_pci_grpci2.c
+++ b/arch/sparc/kernel/leon_pci_grpci2.c
@@ -186,6 +186,8 @@
#define CAP9_IOMAP_OFS 0x20
#define CAP9_BARSIZE_OFS 0x24
+#define TGT 256
+
struct grpci2_priv {
struct leon_pci_info info; /* must be on top of this structure */
struct grpci2_regs *regs;
@@ -237,8 +239,12 @@
if (where & 0x3)
return -EINVAL;
- if (bus == 0 && PCI_SLOT(devfn) != 0)
- devfn += (0x8 * 6);
+ if (bus == 0) {
+ devfn += (0x8 * 6); /* start at AD16=Device0 */
+ } else if (bus == TGT) {
+ bus = 0;
+ devfn = 0; /* special case: bridge controller itself */
+ }
/* Select bus */
spin_lock_irqsave(&grpci2_dev_lock, flags);
@@ -303,8 +309,12 @@
if (where & 0x3)
return -EINVAL;
- if (bus == 0 && PCI_SLOT(devfn) != 0)
- devfn += (0x8 * 6);
+ if (bus == 0) {
+ devfn += (0x8 * 6); /* start at AD16=Device0 */
+ } else if (bus == TGT) {
+ bus = 0;
+ devfn = 0; /* special case: bridge controller itself */
+ }
/* Select bus */
spin_lock_irqsave(&grpci2_dev_lock, flags);
@@ -368,7 +378,7 @@
unsigned int busno = bus->number;
int ret;
- if (PCI_SLOT(devfn) > 15 || (PCI_SLOT(devfn) == 0 && busno == 0)) {
+ if (PCI_SLOT(devfn) > 15 || busno > 255) {
*val = ~0;
return 0;
}
@@ -406,7 +416,7 @@
struct grpci2_priv *priv = grpci2priv;
unsigned int busno = bus->number;
- if (PCI_SLOT(devfn) > 15 || (PCI_SLOT(devfn) == 0 && busno == 0))
+ if (PCI_SLOT(devfn) > 15 || busno > 255)
return 0;
#ifdef GRPCI2_DEBUG_CFGACCESS
@@ -578,15 +588,15 @@
REGSTORE(regs->ahbmst_map[i], priv->pci_area);
/* Get the GRPCI2 Host PCI ID */
- grpci2_cfg_r32(priv, 0, 0, PCI_VENDOR_ID, &priv->pciid);
+ grpci2_cfg_r32(priv, TGT, 0, PCI_VENDOR_ID, &priv->pciid);
/* Get address to first (always defined) capability structure */
- grpci2_cfg_r8(priv, 0, 0, PCI_CAPABILITY_LIST, &capptr);
+ grpci2_cfg_r8(priv, TGT, 0, PCI_CAPABILITY_LIST, &capptr);
/* Enable/Disable Byte twisting */
- grpci2_cfg_r32(priv, 0, 0, capptr+CAP9_IOMAP_OFS, &io_map);
+ grpci2_cfg_r32(priv, TGT, 0, capptr+CAP9_IOMAP_OFS, &io_map);
io_map = (io_map & ~0x1) | (priv->bt_enabled ? 1 : 0);
- grpci2_cfg_w32(priv, 0, 0, capptr+CAP9_IOMAP_OFS, io_map);
+ grpci2_cfg_w32(priv, TGT, 0, capptr+CAP9_IOMAP_OFS, io_map);
/* Setup the Host's PCI Target BARs for other peripherals to access,
* and do DMA to the host's memory. The target BARs can be sized and
@@ -617,17 +627,18 @@
pciadr = 0;
}
}
- grpci2_cfg_w32(priv, 0, 0, capptr+CAP9_BARSIZE_OFS+i*4, bar_sz);
- grpci2_cfg_w32(priv, 0, 0, PCI_BASE_ADDRESS_0+i*4, pciadr);
- grpci2_cfg_w32(priv, 0, 0, capptr+CAP9_BAR_OFS+i*4, ahbadr);
+ grpci2_cfg_w32(priv, TGT, 0, capptr+CAP9_BARSIZE_OFS+i*4,
+ bar_sz);
+ grpci2_cfg_w32(priv, TGT, 0, PCI_BASE_ADDRESS_0+i*4, pciadr);
+ grpci2_cfg_w32(priv, TGT, 0, capptr+CAP9_BAR_OFS+i*4, ahbadr);
printk(KERN_INFO " TGT BAR[%d]: 0x%08x (PCI)-> 0x%08x\n",
i, pciadr, ahbadr);
}
/* set as bus master and enable pci memory responses */
- grpci2_cfg_r32(priv, 0, 0, PCI_COMMAND, &data);
+ grpci2_cfg_r32(priv, TGT, 0, PCI_COMMAND, &data);
data |= (PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
- grpci2_cfg_w32(priv, 0, 0, PCI_COMMAND, data);
+ grpci2_cfg_w32(priv, TGT, 0, PCI_COMMAND, data);
/* Enable Error response (CPU-TRAP) on illegal memory access. */
REGSTORE(regs->ctrl, CTRL_ER | CTRL_PE);
diff --git a/arch/tile/configs/tilegx_defconfig b/arch/tile/configs/tilegx_defconfig
index 8c5eff6..4768481 100644
--- a/arch/tile/configs/tilegx_defconfig
+++ b/arch/tile/configs/tilegx_defconfig
@@ -330,7 +330,6 @@
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
-CONFIG_MULTICORE_RAID456=y
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
diff --git a/arch/tile/configs/tilepro_defconfig b/arch/tile/configs/tilepro_defconfig
index e7a3dfc..dd2b8f0 100644
--- a/arch/tile/configs/tilepro_defconfig
+++ b/arch/tile/configs/tilepro_defconfig
@@ -324,7 +324,6 @@
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
-CONFIG_MULTICORE_RAID456=y
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
diff --git a/arch/x86/include/asm/kprobes.h b/arch/x86/include/asm/kprobes.h
index d3ddd17..5a6d287 100644
--- a/arch/x86/include/asm/kprobes.h
+++ b/arch/x86/include/asm/kprobes.h
@@ -77,6 +77,7 @@
* a post_handler or break_handler).
*/
int boostable;
+ bool if_modifier;
};
struct arch_optimized_insn {
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 635a74d..4979778 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -414,8 +414,8 @@
gpa_t time;
struct pvclock_vcpu_time_info hv_clock;
unsigned int hw_tsc_khz;
- unsigned int time_offset;
- struct page *time_page;
+ struct gfn_to_hva_cache pv_time;
+ bool pv_time_enabled;
/* set guest stopped flag in pvclock flags field */
bool pvclock_set_guest_stopped_request;
diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
index c20d1ce..e709884 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -382,14 +382,14 @@
return _hypercall3(int, console_io, cmd, count, str);
}
-extern int __must_check HYPERVISOR_physdev_op_compat(int, void *);
+extern int __must_check xen_physdev_op_compat(int, void *);
static inline int
HYPERVISOR_physdev_op(int cmd, void *arg)
{
int rc = _hypercall2(int, physdev_op, cmd, arg);
if (unlikely(rc == -ENOSYS))
- rc = HYPERVISOR_physdev_op_compat(cmd, arg);
+ rc = xen_physdev_op_compat(cmd, arg);
return rc;
}
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 529c893..dab7580 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -101,6 +101,10 @@
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
+ INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf), /* CYCLE_ACTIVITY.CYCLES_NO_DISPATCH */
+ INTEL_UEVENT_CONSTRAINT(0x05a3, 0xf), /* CYCLE_ACTIVITY.STALLS_L2_PENDING */
+ INTEL_UEVENT_CONSTRAINT(0x02a3, 0x4), /* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
+ INTEL_UEVENT_CONSTRAINT(0x06a3, 0x4), /* CYCLE_ACTIVITY.STALLS_L1D_PENDING */
INTEL_EVENT_CONSTRAINT(0x48, 0x4), /* L1D_PEND_MISS.PENDING */
INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 3f06e61..7bfe318 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -375,6 +375,9 @@
else
p->ainsn.boostable = -1;
+ /* Check whether the instruction modifies Interrupt Flag or not */
+ p->ainsn.if_modifier = is_IF_modifier(p->ainsn.insn);
+
/* Also, displacement change doesn't affect the first byte */
p->opcode = p->ainsn.insn[0];
}
@@ -434,7 +437,7 @@
__this_cpu_write(current_kprobe, p);
kcb->kprobe_saved_flags = kcb->kprobe_old_flags
= (regs->flags & (X86_EFLAGS_TF | X86_EFLAGS_IF));
- if (is_IF_modifier(p->ainsn.insn))
+ if (p->ainsn.if_modifier)
kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF;
}
diff --git a/arch/x86/kernel/microcode_intel_early.c b/arch/x86/kernel/microcode_intel_early.c
index 7890bc8..d893e8e 100644
--- a/arch/x86/kernel/microcode_intel_early.c
+++ b/arch/x86/kernel/microcode_intel_early.c
@@ -90,13 +90,13 @@
struct microcode_intel ***mc_saved;
mc_saved = (struct microcode_intel ***)
- __pa_symbol(&mc_saved_data->mc_saved);
+ __pa_nodebug(&mc_saved_data->mc_saved);
for (i = 0; i < mc_saved_data->mc_saved_count; i++) {
struct microcode_intel *p;
p = *(struct microcode_intel **)
- __pa(mc_saved_data->mc_saved + i);
- mc_saved_tmp[i] = (struct microcode_intel *)__pa(p);
+ __pa_nodebug(mc_saved_data->mc_saved + i);
+ mc_saved_tmp[i] = (struct microcode_intel *)__pa_nodebug(p);
}
}
#endif
@@ -562,7 +562,7 @@
struct cpio_data cd;
long offset = 0;
#ifdef CONFIG_X86_32
- char *p = (char *)__pa_symbol(ucode_name);
+ char *p = (char *)__pa_nodebug(ucode_name);
#else
char *p = ucode_name;
#endif
@@ -630,8 +630,8 @@
if (mc_intel == NULL)
return;
- delay_ucode_info_p = (int *)__pa_symbol(&delay_ucode_info);
- current_mc_date_p = (int *)__pa_symbol(&current_mc_date);
+ delay_ucode_info_p = (int *)__pa_nodebug(&delay_ucode_info);
+ current_mc_date_p = (int *)__pa_nodebug(&current_mc_date);
*delay_ucode_info_p = 1;
*current_mc_date_p = mc_intel->hdr.date;
@@ -659,8 +659,8 @@
}
#endif
-static int apply_microcode_early(struct mc_saved_data *mc_saved_data,
- struct ucode_cpu_info *uci)
+static int __cpuinit apply_microcode_early(struct mc_saved_data *mc_saved_data,
+ struct ucode_cpu_info *uci)
{
struct microcode_intel *mc_intel;
unsigned int val[2];
@@ -741,15 +741,15 @@
#ifdef CONFIG_X86_32
struct boot_params *boot_params_p;
- boot_params_p = (struct boot_params *)__pa_symbol(&boot_params);
+ boot_params_p = (struct boot_params *)__pa_nodebug(&boot_params);
ramdisk_image = boot_params_p->hdr.ramdisk_image;
ramdisk_size = boot_params_p->hdr.ramdisk_size;
initrd_start_early = ramdisk_image;
initrd_end_early = initrd_start_early + ramdisk_size;
_load_ucode_intel_bsp(
- (struct mc_saved_data *)__pa_symbol(&mc_saved_data),
- (unsigned long *)__pa_symbol(&mc_saved_in_initrd),
+ (struct mc_saved_data *)__pa_nodebug(&mc_saved_data),
+ (unsigned long *)__pa_nodebug(&mc_saved_in_initrd),
initrd_start_early, initrd_end_early, &uci);
#else
ramdisk_image = boot_params.hdr.ramdisk_image;
@@ -772,10 +772,10 @@
unsigned long *initrd_start_p;
mc_saved_in_initrd_p =
- (unsigned long *)__pa_symbol(mc_saved_in_initrd);
- mc_saved_data_p = (struct mc_saved_data *)__pa_symbol(&mc_saved_data);
- initrd_start_p = (unsigned long *)__pa_symbol(&initrd_start);
- initrd_start_addr = (unsigned long)__pa_symbol(*initrd_start_p);
+ (unsigned long *)__pa_nodebug(mc_saved_in_initrd);
+ mc_saved_data_p = (struct mc_saved_data *)__pa_nodebug(&mc_saved_data);
+ initrd_start_p = (unsigned long *)__pa_nodebug(&initrd_start);
+ initrd_start_addr = (unsigned long)__pa_nodebug(*initrd_start_p);
#else
mc_saved_data_p = &mc_saved_data;
mc_saved_in_initrd_p = mc_saved_in_initrd;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f71500a..f19ac0a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1406,25 +1406,15 @@
unsigned long flags, this_tsc_khz;
struct kvm_vcpu_arch *vcpu = &v->arch;
struct kvm_arch *ka = &v->kvm->arch;
- void *shared_kaddr;
s64 kernel_ns, max_kernel_ns;
u64 tsc_timestamp, host_tsc;
- struct pvclock_vcpu_time_info *guest_hv_clock;
+ struct pvclock_vcpu_time_info guest_hv_clock;
u8 pvclock_flags;
bool use_master_clock;
kernel_ns = 0;
host_tsc = 0;
- /* Keep irq disabled to prevent changes to the clock */
- local_irq_save(flags);
- this_tsc_khz = __get_cpu_var(cpu_tsc_khz);
- if (unlikely(this_tsc_khz == 0)) {
- local_irq_restore(flags);
- kvm_make_request(KVM_REQ_CLOCK_UPDATE, v);
- return 1;
- }
-
/*
* If the host uses TSC clock, then passthrough TSC as stable
* to the guest.
@@ -1436,6 +1426,15 @@
kernel_ns = ka->master_kernel_ns;
}
spin_unlock(&ka->pvclock_gtod_sync_lock);
+
+ /* Keep irq disabled to prevent changes to the clock */
+ local_irq_save(flags);
+ this_tsc_khz = __get_cpu_var(cpu_tsc_khz);
+ if (unlikely(this_tsc_khz == 0)) {
+ local_irq_restore(flags);
+ kvm_make_request(KVM_REQ_CLOCK_UPDATE, v);
+ return 1;
+ }
if (!use_master_clock) {
host_tsc = native_read_tsc();
kernel_ns = get_kernel_ns();
@@ -1463,7 +1462,7 @@
local_irq_restore(flags);
- if (!vcpu->time_page)
+ if (!vcpu->pv_time_enabled)
return 0;
/*
@@ -1525,12 +1524,12 @@
*/
vcpu->hv_clock.version += 2;
- shared_kaddr = kmap_atomic(vcpu->time_page);
-
- guest_hv_clock = shared_kaddr + vcpu->time_offset;
+ if (unlikely(kvm_read_guest_cached(v->kvm, &vcpu->pv_time,
+ &guest_hv_clock, sizeof(guest_hv_clock))))
+ return 0;
/* retain PVCLOCK_GUEST_STOPPED if set in guest copy */
- pvclock_flags = (guest_hv_clock->flags & PVCLOCK_GUEST_STOPPED);
+ pvclock_flags = (guest_hv_clock.flags & PVCLOCK_GUEST_STOPPED);
if (vcpu->pvclock_set_guest_stopped_request) {
pvclock_flags |= PVCLOCK_GUEST_STOPPED;
@@ -1543,12 +1542,9 @@
vcpu->hv_clock.flags = pvclock_flags;
- memcpy(shared_kaddr + vcpu->time_offset, &vcpu->hv_clock,
- sizeof(vcpu->hv_clock));
-
- kunmap_atomic(shared_kaddr);
-
- mark_page_dirty(v->kvm, vcpu->time >> PAGE_SHIFT);
+ kvm_write_guest_cached(v->kvm, &vcpu->pv_time,
+ &vcpu->hv_clock,
+ sizeof(vcpu->hv_clock));
return 0;
}
@@ -1837,10 +1833,7 @@
static void kvmclock_reset(struct kvm_vcpu *vcpu)
{
- if (vcpu->arch.time_page) {
- kvm_release_page_dirty(vcpu->arch.time_page);
- vcpu->arch.time_page = NULL;
- }
+ vcpu->arch.pv_time_enabled = false;
}
static void accumulate_steal_time(struct kvm_vcpu *vcpu)
@@ -1947,6 +1940,7 @@
break;
case MSR_KVM_SYSTEM_TIME_NEW:
case MSR_KVM_SYSTEM_TIME: {
+ u64 gpa_offset;
kvmclock_reset(vcpu);
vcpu->arch.time = data;
@@ -1956,14 +1950,17 @@
if (!(data & 1))
break;
- /* ...but clean it before doing the actual write */
- vcpu->arch.time_offset = data & ~(PAGE_MASK | 1);
+ gpa_offset = data & ~(PAGE_MASK | 1);
- vcpu->arch.time_page =
- gfn_to_page(vcpu->kvm, data >> PAGE_SHIFT);
+ /* Check that the address is 32-byte aligned. */
+ if (gpa_offset & (sizeof(struct pvclock_vcpu_time_info) - 1))
+ break;
- if (is_error_page(vcpu->arch.time_page))
- vcpu->arch.time_page = NULL;
+ if (kvm_gfn_to_hva_cache_init(vcpu->kvm,
+ &vcpu->arch.pv_time, data & ~1ULL))
+ vcpu->arch.pv_time_enabled = false;
+ else
+ vcpu->arch.pv_time_enabled = true;
break;
}
@@ -2967,7 +2964,7 @@
*/
static int kvm_set_guest_paused(struct kvm_vcpu *vcpu)
{
- if (!vcpu->arch.time_page)
+ if (!vcpu->arch.pv_time_enabled)
return -EINVAL;
vcpu->arch.pvclock_set_guest_stopped_request = true;
kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
@@ -6718,6 +6715,7 @@
goto fail_free_wbinvd_dirty_mask;
vcpu->arch.ia32_tsc_adjust_msr = 0x0;
+ vcpu->arch.pv_time_enabled = false;
kvm_async_pf_hash_reset(vcpu);
kvm_pmu_init(vcpu);
diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
index 05928aa..906fea3 100644
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -74,10 +74,10 @@
char c;
unsigned zero_len;
- for (; len; --len) {
+ for (; len; --len, to++) {
if (__get_user_nocheck(c, from++, sizeof(char)))
break;
- if (__put_user_nocheck(c, to++, sizeof(char)))
+ if (__put_user_nocheck(c, to, sizeof(char)))
break;
}
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index e8e3493..6afbb2c 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1467,8 +1467,6 @@
__xen_write_cr3(true, cr3);
xen_mc_issue(PARAVIRT_LAZY_CPU); /* interrupts restored */
-
- pv_mmu_ops.write_cr3 = &xen_write_cr3;
}
#endif
@@ -2122,6 +2120,7 @@
#endif
#ifdef CONFIG_X86_64
+ pv_mmu_ops.write_cr3 = &xen_write_cr3;
SetPagePinned(virt_to_page(level3_user_vsyscall));
#endif
xen_mark_init_mm_pinned();
diff --git a/drivers/amba/tegra-ahb.c b/drivers/amba/tegra-ahb.c
index 093c435..1f44e56 100644
--- a/drivers/amba/tegra-ahb.c
+++ b/drivers/amba/tegra-ahb.c
@@ -158,7 +158,7 @@
EXPORT_SYMBOL(tegra_ahb_enable_smmu);
#endif
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_PM
static int tegra_ahb_suspend(struct device *dev)
{
int i;
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
index 3e751b7..a5a3ebc 100644
--- a/drivers/ata/Kconfig
+++ b/drivers/ata/Kconfig
@@ -59,15 +59,16 @@
option libata.noacpi=1
config SATA_ZPODD
- bool "SATA Zero Power ODD Support"
+ bool "SATA Zero Power Optical Disc Drive (ZPODD) support"
depends on ATA_ACPI
default n
help
- This option adds support for SATA ZPODD. It requires both
- ODD and the platform support, and if enabled, will automatically
- power on/off the ODD when certain condition is satisfied. This
- does not impact user's experience of the ODD, only power is saved
- when ODD is not in use(i.e. no disc inside).
+ This option adds support for SATA Zero Power Optical Disc
+ Drive (ZPODD). It requires support from both the ODD and the
+ platform and, if enabled, will automatically power the ODD
+ on and off when certain conditions are satisfied. This does
+ not impact the end user's experience of the ODD; only power
+ is saved when the ODD is not in use (i.e. no disc inside).
If unsure, say N.
diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index a99112c..6a67b07 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -281,6 +281,8 @@
{ PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */
{ PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */
+ { PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */
+ { PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */
{ PCI_VDEVICE(INTEL, 0x8d04), board_ahci }, /* Wellsburg RAID */
{ PCI_VDEVICE(INTEL, 0x8d06), board_ahci }, /* Wellsburg RAID */
diff --git a/drivers/ata/ata_piix.c b/drivers/ata/ata_piix.c
index d2ba439..ffdd32d 100644
--- a/drivers/ata/ata_piix.c
+++ b/drivers/ata/ata_piix.c
@@ -1547,6 +1547,10 @@
static int prefer_ms_hyperv = 1;
module_param(prefer_ms_hyperv, int, 0);
+MODULE_PARM_DESC(prefer_ms_hyperv,
+ "Prefer Hyper-V paravirtualization drivers instead of ATA, "
+ "0 - Use ATA drivers, "
+ "1 (Default) - Use the paravirtualization drivers.");
static void piix_ignore_devices_quirk(struct ata_host *host)
{
diff --git a/drivers/ata/libata-acpi.c b/drivers/ata/libata-acpi.c
index beea311..8a52dab 100644
--- a/drivers/ata/libata-acpi.c
+++ b/drivers/ata/libata-acpi.c
@@ -1027,7 +1027,7 @@
handle = ata_dev_acpi_handle(dev);
if (handle)
- acpi_dev_pm_remove_dependent(handle, &sdev->sdev_gendev);
+ acpi_dev_pm_add_dependent(handle, &sdev->sdev_gendev);
}
static void ata_acpi_unregister_power_resource(struct ata_device *dev)
diff --git a/drivers/ata/pata_samsung_cf.c b/drivers/ata/pata_samsung_cf.c
index 70b0e01..6ef27e9 100644
--- a/drivers/ata/pata_samsung_cf.c
+++ b/drivers/ata/pata_samsung_cf.c
@@ -661,18 +661,7 @@
},
};
-static int __init pata_s3c_init(void)
-{
- return platform_driver_probe(&pata_s3c_driver, pata_s3c_probe);
-}
-
-static void __exit pata_s3c_exit(void)
-{
- platform_driver_unregister(&pata_s3c_driver);
-}
-
-module_init(pata_s3c_init);
-module_exit(pata_s3c_exit);
+module_platform_driver_probe(pata_s3c_driver, pata_s3c_probe);
MODULE_AUTHOR("Abhilash Kesavan, <a.kesavan@samsung.com>");
MODULE_DESCRIPTION("low-level driver for Samsung PATA controller");
diff --git a/drivers/ata/sata_fsl.c b/drivers/ata/sata_fsl.c
index 124b2c1..608f82f 100644
--- a/drivers/ata/sata_fsl.c
+++ b/drivers/ata/sata_fsl.c
@@ -1511,8 +1511,7 @@
if (hcr_base)
iounmap(hcr_base);
- if (host_priv)
- kfree(host_priv);
+ kfree(host_priv);
return retval;
}
diff --git a/drivers/block/nvme.c b/drivers/block/nvme.c
index 07fb2df..9dcefe4 100644
--- a/drivers/block/nvme.c
+++ b/drivers/block/nvme.c
@@ -135,6 +135,7 @@
BUILD_BUG_ON(sizeof(struct nvme_id_ctrl) != 4096);
BUILD_BUG_ON(sizeof(struct nvme_id_ns) != 4096);
BUILD_BUG_ON(sizeof(struct nvme_lba_range_type) != 64);
+ BUILD_BUG_ON(sizeof(struct nvme_smart_log) != 512);
}
typedef void (*nvme_completion_fn)(struct nvme_dev *, void *,
@@ -237,7 +238,8 @@
*fn = special_completion;
return CMD_CTX_INVALID;
}
- *fn = info[cmdid].fn;
+ if (fn)
+ *fn = info[cmdid].fn;
ctx = info[cmdid].ctx;
info[cmdid].fn = special_completion;
info[cmdid].ctx = CMD_CTX_COMPLETED;
@@ -335,6 +337,7 @@
iod->offset = offsetof(struct nvme_iod, sg[nseg]);
iod->npages = -1;
iod->length = nbytes;
+ iod->nents = 0;
}
return iod;
@@ -375,7 +378,8 @@
struct bio *bio = iod->private;
u16 status = le16_to_cpup(&cqe->status) >> 1;
- dma_unmap_sg(&dev->pci_dev->dev, iod->sg, iod->nents,
+ if (iod->nents)
+ dma_unmap_sg(&dev->pci_dev->dev, iod->sg, iod->nents,
bio_data_dir(bio) ? DMA_TO_DEVICE : DMA_FROM_DEVICE);
nvme_free_iod(dev, iod);
if (status) {
@@ -589,7 +593,7 @@
result = nvme_map_bio(nvmeq->q_dmadev, iod, bio, dma_dir, psegs);
if (result < 0)
- goto free_iod;
+ goto free_cmdid;
length = result;
cmnd->rw.command_id = cmdid;
@@ -609,6 +613,8 @@
return 0;
+ free_cmdid:
+ free_cmdid(nvmeq, cmdid, NULL);
free_iod:
nvme_free_iod(nvmeq->dev, iod);
nomem:
@@ -835,8 +841,8 @@
return nvme_submit_admin_cmd(dev, &c, NULL);
}
-static int nvme_get_features(struct nvme_dev *dev, unsigned fid,
- unsigned nsid, dma_addr_t dma_addr)
+static int nvme_get_features(struct nvme_dev *dev, unsigned fid, unsigned nsid,
+ dma_addr_t dma_addr, u32 *result)
{
struct nvme_command c;
@@ -846,7 +852,7 @@
c.features.prp1 = cpu_to_le64(dma_addr);
c.features.fid = cpu_to_le32(fid);
- return nvme_submit_admin_cmd(dev, &c, NULL);
+ return nvme_submit_admin_cmd(dev, &c, result);
}
static int nvme_set_features(struct nvme_dev *dev, unsigned fid,
@@ -906,6 +912,10 @@
spin_lock_irq(&nvmeq->q_lock);
nvme_cancel_ios(nvmeq, false);
+ while (bio_list_peek(&nvmeq->sq_cong)) {
+ struct bio *bio = bio_list_pop(&nvmeq->sq_cong);
+ bio_endio(bio, -EIO);
+ }
spin_unlock_irq(&nvmeq->q_lock);
irq_set_affinity_hint(vector, NULL);
@@ -1230,12 +1240,17 @@
if (length != cmd.data_len)
status = -ENOMEM;
else
- status = nvme_submit_admin_cmd(dev, &c, NULL);
+ status = nvme_submit_admin_cmd(dev, &c, &cmd.result);
if (cmd.data_len) {
nvme_unmap_user_pages(dev, cmd.opcode & 1, iod);
nvme_free_iod(dev, iod);
}
+
+ if (!status && copy_to_user(&ucmd->result, &cmd.result,
+ sizeof(cmd.result)))
+ status = -EFAULT;
+
return status;
}
@@ -1523,9 +1538,9 @@
continue;
res = nvme_get_features(dev, NVME_FEAT_LBA_RANGE, i,
- dma_addr + 4096);
+ dma_addr + 4096, NULL);
if (res)
- continue;
+ memset(mem + 4096, 0, 4096);
ns = nvme_alloc_ns(dev, i, mem, mem + 4096);
if (ns)
diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
index a8a41e07..6aab00e 100644
--- a/drivers/bluetooth/ath3k.c
+++ b/drivers/bluetooth/ath3k.c
@@ -73,9 +73,13 @@
{ USB_DEVICE(0x03F0, 0x311D) },
/* Atheros AR3012 with sflash firmware*/
+ { USB_DEVICE(0x0CF3, 0x0036) },
{ USB_DEVICE(0x0CF3, 0x3004) },
+ { USB_DEVICE(0x0CF3, 0x3008) },
{ USB_DEVICE(0x0CF3, 0x311D) },
+ { USB_DEVICE(0x0CF3, 0x817a) },
{ USB_DEVICE(0x13d3, 0x3375) },
+ { USB_DEVICE(0x04CA, 0x3004) },
{ USB_DEVICE(0x04CA, 0x3005) },
{ USB_DEVICE(0x04CA, 0x3006) },
{ USB_DEVICE(0x04CA, 0x3008) },
@@ -105,9 +109,13 @@
static struct usb_device_id ath3k_blist_tbl[] = {
/* Atheros AR3012 with sflash firmware*/
+ { USB_DEVICE(0x0CF3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0CF3, 0x817a), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 7e351e3..2cc5f77 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -131,9 +131,13 @@
{ USB_DEVICE(0x03f0, 0x311d), .driver_info = BTUSB_IGNORE },
/* Atheros 3012 with sflash firmware */
+ { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x817a), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
diff --git a/drivers/clk/clk-vt8500.c b/drivers/clk/clk-vt8500.c
index b5538bb..09c6331 100644
--- a/drivers/clk/clk-vt8500.c
+++ b/drivers/clk/clk-vt8500.c
@@ -157,7 +157,7 @@
divisor = parent_rate / rate;
/* If prate / rate would be decimal, incr the divisor */
- if (rate * divisor < *prate)
+ if (rate * divisor < parent_rate)
divisor++;
if (divisor == cdev->div_mask + 1)
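
The clk-vt8500 hunk above makes the round-up test use the same parent_rate that produced the divisor. As a stand-alone sketch of the ceiling rule being implemented (hypothetical helper name, not the driver's API):

	#include <stdio.h>

	/* Pick the smallest divisor whose output rate does not exceed the request. */
	static unsigned long divisor_for_rate(unsigned long parent_rate,
					      unsigned long rate)
	{
		unsigned long divisor = parent_rate / rate;

		/* If parent_rate / rate left a remainder, bump the divisor so
		 * that parent_rate / divisor does not overshoot the request. */
		if (rate * divisor < parent_rate)
			divisor++;

		return divisor;
	}

	int main(void)
	{
		/* 25 MHz parent, 7 MHz request: divisor 4 (6.25 MHz), not 3 (8.33 MHz) */
		printf("%lu\n", divisor_for_rate(25000000UL, 7000000UL));
		return 0;
	}
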
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
index 910b0116..e1d13c4 100644
--- a/drivers/edac/amd64_edac.c
+++ b/drivers/edac/amd64_edac.c
@@ -2048,12 +2048,18 @@
edac_dbg(1, "MC node: %d, csrow: %d\n",
pvt->mc_node_id, i);
- if (row_dct0)
+ if (row_dct0) {
nr_pages = amd64_csrow_nr_pages(pvt, 0, i);
+ csrow->channels[0]->dimm->nr_pages = nr_pages;
+ }
/* K8 has only one DCT */
- if (boot_cpu_data.x86 != 0xf && row_dct1)
- nr_pages += amd64_csrow_nr_pages(pvt, 1, i);
+ if (boot_cpu_data.x86 != 0xf && row_dct1) {
+ int row_dct1_pages = amd64_csrow_nr_pages(pvt, 1, i);
+
+ csrow->channels[1]->dimm->nr_pages = row_dct1_pages;
+ nr_pages += row_dct1_pages;
+ }
mtype = amd64_determine_memory_type(pvt, i);
@@ -2072,9 +2078,7 @@
dimm = csrow->channels[j]->dimm;
dimm->mtype = mtype;
dimm->edac_mode = edac_mode;
- dimm->nr_pages = nr_pages;
}
- csrow->nr_pages = nr_pages;
}
return empty;
@@ -2419,7 +2423,6 @@
mci->pvt_info = pvt;
mci->pdev = &pvt->F2->dev;
- mci->csbased = 1;
setup_mci_misc_attrs(mci, fam_type);
diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
index cdb81aa..27e86d9 100644
--- a/drivers/edac/edac_mc.c
+++ b/drivers/edac/edac_mc.c
@@ -86,7 +86,7 @@
edac_dimm_info_location(dimm, location, sizeof(location));
edac_dbg(4, "%s%i: %smapped as virtual row %d, chan %d\n",
- dimm->mci->mem_is_per_rank ? "rank" : "dimm",
+ dimm->mci->csbased ? "rank" : "dimm",
number, location, dimm->csrow, dimm->cschannel);
edac_dbg(4, " dimm = %p\n", dimm);
edac_dbg(4, " dimm->label = '%s'\n", dimm->label);
@@ -341,7 +341,7 @@
memcpy(mci->layers, layers, sizeof(*layer) * n_layers);
mci->nr_csrows = tot_csrows;
mci->num_cschannel = tot_channels;
- mci->mem_is_per_rank = per_rank;
+ mci->csbased = per_rank;
/*
* Alocate and fill the csrow/channels structs
@@ -1235,7 +1235,7 @@
* incrementing the compat API counters
*/
edac_dbg(4, "%s csrows map: (%d,%d)\n",
- mci->mem_is_per_rank ? "rank" : "dimm",
+ mci->csbased ? "rank" : "dimm",
dimm->csrow, dimm->cschannel);
if (row == -1)
row = dimm->csrow;
diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c
index 4f4b613..5899a76 100644
--- a/drivers/edac/edac_mc_sysfs.c
+++ b/drivers/edac/edac_mc_sysfs.c
@@ -143,7 +143,7 @@
* and the per-dimm/per-rank one
*/
#define DEVICE_ATTR_LEGACY(_name, _mode, _show, _store) \
- struct device_attribute dev_attr_legacy_##_name = __ATTR(_name, _mode, _show, _store)
+ static struct device_attribute dev_attr_legacy_##_name = __ATTR(_name, _mode, _show, _store)
struct dev_ch_attribute {
struct device_attribute attr;
@@ -180,9 +180,6 @@
int i;
u32 nr_pages = 0;
- if (csrow->mci->csbased)
- return sprintf(data, "%u\n", PAGES_TO_MiB(csrow->nr_pages));
-
for (i = 0; i < csrow->nr_channels; i++)
nr_pages += csrow->channels[i]->dimm->nr_pages;
return sprintf(data, "%u\n", PAGES_TO_MiB(nr_pages));
@@ -612,7 +609,7 @@
device_initialize(&dimm->dev);
dimm->dev.parent = &mci->dev;
- if (mci->mem_is_per_rank)
+ if (mci->csbased)
dev_set_name(&dimm->dev, "rank%d", index);
else
dev_set_name(&dimm->dev, "dimm%d", index);
@@ -778,14 +775,10 @@
for (csrow_idx = 0; csrow_idx < mci->nr_csrows; csrow_idx++) {
struct csrow_info *csrow = mci->csrows[csrow_idx];
- if (csrow->mci->csbased) {
- total_pages += csrow->nr_pages;
- } else {
- for (j = 0; j < csrow->nr_channels; j++) {
- struct dimm_info *dimm = csrow->channels[j]->dimm;
+ for (j = 0; j < csrow->nr_channels; j++) {
+ struct dimm_info *dimm = csrow->channels[j]->dimm;
- total_pages += dimm->nr_pages;
- }
+ total_pages += dimm->nr_pages;
}
}
diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
index 9b00072..42c759a 100644
--- a/drivers/firmware/Kconfig
+++ b/drivers/firmware/Kconfig
@@ -53,6 +53,24 @@
Subsequent efibootmgr releases may be found at:
<http://linux.dell.com/efibootmgr>
+config EFI_VARS_PSTORE
+ bool "Register efivars backend for pstore"
+ depends on EFI_VARS && PSTORE
+ default y
+ help
+ Say Y here to enable the use of efivars as a backend for pstore. This
+ will allow writing console messages, crash dumps, or anything
+ else supported by pstore to EFI variables.
+
+config EFI_VARS_PSTORE_DEFAULT_DISABLE
+ bool "Disable using efivars as a pstore backend by default"
+ depends on EFI_VARS_PSTORE
+ default n
+ help
+ Saying Y here will disable the use of efivars as a storage
+ backend for pstore by default. This setting can be overridden
+ using the efivars module's pstore_disable parameter.
+
config EFI_PCDP
bool "Console device selection via EFI PCDP or HCDP table"
depends on ACPI && EFI && IA64
diff --git a/drivers/firmware/efivars.c b/drivers/firmware/efivars.c
index fe62aa3..7acafb8 100644
--- a/drivers/firmware/efivars.c
+++ b/drivers/firmware/efivars.c
@@ -103,6 +103,11 @@
*/
#define GUID_LEN 36
+static bool efivars_pstore_disable =
+ IS_ENABLED(CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE);
+
+module_param_named(pstore_disable, efivars_pstore_disable, bool, 0644);
+
/*
* The maximum size of VariableName + Data = 1024
* Therefore, it's reasonable to save that much
@@ -165,6 +170,7 @@
static void efivar_update_sysfs_entries(struct work_struct *);
static DECLARE_WORK(efivar_work, efivar_update_sysfs_entries);
+static bool efivar_wq_enabled = true;
/* Return the number of unicode characters in data */
static unsigned long
@@ -1309,9 +1315,7 @@
.create = efivarfs_create,
};
-static struct pstore_info efi_pstore_info;
-
-#ifdef CONFIG_PSTORE
+#ifdef CONFIG_EFI_VARS_PSTORE
static int efi_pstore_open(struct pstore_info *psi)
{
@@ -1441,7 +1445,7 @@
spin_unlock_irqrestore(&efivars->lock, flags);
- if (reason == KMSG_DUMP_OOPS)
+ if (reason == KMSG_DUMP_OOPS && efivar_wq_enabled)
schedule_work(&efivar_work);
*id = part;
@@ -1514,38 +1518,6 @@
return 0;
}
-#else
-static int efi_pstore_open(struct pstore_info *psi)
-{
- return 0;
-}
-
-static int efi_pstore_close(struct pstore_info *psi)
-{
- return 0;
-}
-
-static ssize_t efi_pstore_read(u64 *id, enum pstore_type_id *type, int *count,
- struct timespec *timespec,
- char **buf, struct pstore_info *psi)
-{
- return -1;
-}
-
-static int efi_pstore_write(enum pstore_type_id type,
- enum kmsg_dump_reason reason, u64 *id,
- unsigned int part, int count, size_t size,
- struct pstore_info *psi)
-{
- return 0;
-}
-
-static int efi_pstore_erase(enum pstore_type_id type, u64 id, int count,
- struct timespec time, struct pstore_info *psi)
-{
- return 0;
-}
-#endif
static struct pstore_info efi_pstore_info = {
.owner = THIS_MODULE,
@@ -1557,6 +1529,24 @@
.erase = efi_pstore_erase,
};
+static void efivar_pstore_register(struct efivars *efivars)
+{
+ efivars->efi_pstore_info = efi_pstore_info;
+ efivars->efi_pstore_info.buf = kmalloc(4096, GFP_KERNEL);
+ if (efivars->efi_pstore_info.buf) {
+ efivars->efi_pstore_info.bufsize = 1024;
+ efivars->efi_pstore_info.data = efivars;
+ spin_lock_init(&efivars->efi_pstore_info.buf_lock);
+ pstore_register(&efivars->efi_pstore_info);
+ }
+}
+#else
+static void efivar_pstore_register(struct efivars *efivars)
+{
+ return;
+}
+#endif
+
static ssize_t efivar_create(struct file *filp, struct kobject *kobj,
struct bin_attribute *bin_attr,
char *buf, loff_t pos, size_t count)
@@ -1716,6 +1706,31 @@
return found;
}
+/*
+ * Returns the size of variable_name, in bytes, including the
+ * terminating NULL character, or variable_name_size if no NULL
+ * character is found among the first variable_name_size bytes.
+ */
+static unsigned long var_name_strnsize(efi_char16_t *variable_name,
+ unsigned long variable_name_size)
+{
+ unsigned long len;
+ efi_char16_t c;
+
+ /*
+ * The variable name is, by definition, a NULL-terminated
+ * string, so make absolutely sure that variable_name_size is
+ * the value we expect it to be. If not, return the real size.
+ */
+ for (len = 2; len <= variable_name_size; len += sizeof(c)) {
+ c = variable_name[(len / sizeof(c)) - 1];
+ if (!c)
+ break;
+ }
+
+ return min(len, variable_name_size);
+}
+
static void efivar_update_sysfs_entries(struct work_struct *work)
{
struct efivars *efivars = &__efivars;
@@ -1756,10 +1771,13 @@
if (!found) {
kfree(variable_name);
break;
- } else
+ } else {
+ variable_name_size = var_name_strnsize(variable_name,
+ variable_name_size);
efivar_create_sysfs_entry(efivars,
variable_name_size,
variable_name, &vendor);
+ }
}
}
@@ -1958,6 +1976,35 @@
}
EXPORT_SYMBOL_GPL(unregister_efivars);
+/*
+ * Print a warning when duplicate EFI variables are encountered and
+ * disable the sysfs workqueue since the firmware is buggy.
+ */
+static void dup_variable_bug(efi_char16_t *s16, efi_guid_t *vendor_guid,
+ unsigned long len16)
+{
+ size_t i, len8 = len16 / sizeof(efi_char16_t);
+ char *s8;
+
+ /*
+ * Disable the workqueue since the algorithm it uses for
+ * detecting new variables won't work with this buggy
+ * implementation of GetNextVariableName().
+ */
+ efivar_wq_enabled = false;
+
+ s8 = kzalloc(len8, GFP_KERNEL);
+ if (!s8)
+ return;
+
+ for (i = 0; i < len8; i++)
+ s8[i] = s16[i];
+
+ printk(KERN_WARNING "efivars: duplicate variable: %s-%pUl\n",
+ s8, vendor_guid);
+ kfree(s8);
+}
+
int register_efivars(struct efivars *efivars,
const struct efivar_operations *ops,
struct kobject *parent_kobj)
@@ -2006,6 +2053,24 @@
&vendor_guid);
switch (status) {
case EFI_SUCCESS:
+ variable_name_size = var_name_strnsize(variable_name,
+ variable_name_size);
+
+ /*
+ * Some firmware implementations return the
+ * same variable name on multiple calls to
+ * get_next_variable(). Terminate the loop
+ * immediately as there is no guarantee that
+ * we'll ever see a different variable name,
+ * and may end up looping here forever.
+ */
+ if (variable_is_present(variable_name, &vendor_guid)) {
+ dup_variable_bug(variable_name, &vendor_guid,
+ variable_name_size);
+ status = EFI_NOT_FOUND;
+ break;
+ }
+
efivar_create_sysfs_entry(efivars,
variable_name_size,
variable_name,
@@ -2025,15 +2090,8 @@
if (error)
unregister_efivars(efivars);
- efivars->efi_pstore_info = efi_pstore_info;
-
- efivars->efi_pstore_info.buf = kmalloc(4096, GFP_KERNEL);
- if (efivars->efi_pstore_info.buf) {
- efivars->efi_pstore_info.bufsize = 1024;
- efivars->efi_pstore_info.data = efivars;
- spin_lock_init(&efivars->efi_pstore_info.buf_lock);
- pstore_register(&efivars->efi_pstore_info);
- }
+ if (!efivars_pstore_disable)
+ efivar_pstore_register(efivars);
register_filesystem(&efivarfs_type);
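
The var_name_strnsize() helper introduced above re-measures the UTF-16 variable name instead of trusting the size reported by the firmware. A hedged user-space sketch of that bounded scan (standalone code, not the efivars implementation):

	#include <stddef.h>
	#include <stdint.h>

	/* Size in bytes of a NUL-terminated UTF-16 string, terminator included,
	 * clamped to the size the caller claims the buffer has. */
	static size_t utf16_strnsize(const uint16_t *name, size_t reported_size)
	{
		size_t len;

		for (len = sizeof(uint16_t); len <= reported_size;
		     len += sizeof(uint16_t)) {
			if (!name[len / sizeof(uint16_t) - 1])
				break;
		}
		return len < reported_size ? len : reported_size;
	}
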
diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
index a71a54a..5150df6 100644
--- a/drivers/gpio/gpiolib-of.c
+++ b/drivers/gpio/gpiolib-of.c
@@ -193,7 +193,7 @@
if (!np)
return;
- do {
+ for (;; index++) {
ret = of_parse_phandle_with_args(np, "gpio-ranges",
"#gpio-range-cells", index, &pinspec);
if (ret)
@@ -222,8 +222,7 @@
if (ret)
break;
-
- } while (index++);
+ }
}
#else
diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index c194f4e..e2acfdb 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -1634,7 +1634,7 @@
unsigned vblank = (pt->vactive_vblank_hi & 0xf) << 8 | pt->vblank_lo;
unsigned hsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc0) << 2 | pt->hsync_offset_lo;
unsigned hsync_pulse_width = (pt->hsync_vsync_offset_pulse_width_hi & 0x30) << 4 | pt->hsync_pulse_width_lo;
- unsigned vsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc) >> 2 | pt->vsync_offset_pulse_width_lo >> 4;
+ unsigned vsync_offset = (pt->hsync_vsync_offset_pulse_width_hi & 0xc) << 2 | pt->vsync_offset_pulse_width_lo >> 4;
unsigned vsync_pulse_width = (pt->hsync_vsync_offset_pulse_width_hi & 0x3) << 4 | (pt->vsync_offset_pulse_width_lo & 0xf);
/* ignore tiny modes */
@@ -1715,6 +1715,7 @@
}
mode->type = DRM_MODE_TYPE_DRIVER;
+ mode->vrefresh = drm_mode_vrefresh(mode);
drm_mode_set_name(mode);
return mode;
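
The drm_edid change above fixes the direction of the shift used to merge the two high bits of the vertical sync offset with its low nibble: in a detailed timing descriptor, bits 3:2 of the shared "hi" byte hold offset bits 5:4, so they must be shifted up past the four bits supplied by the "lo" byte. A minimal illustration of reassembling that split field (hypothetical function, example values):

	#include <stdint.h>
	#include <stdio.h>

	/* vsync offset is 6 bits: bits 5:4 live in bits 3:2 of the hi byte,
	 * bits 3:0 live in the upper nibble of the lo byte. */
	static unsigned int vsync_offset(uint8_t hi, uint8_t lo)
	{
		return ((hi & 0xc) << 2) | (lo >> 4);
	}

	int main(void)
	{
		/* offset 39 = 0b100111: hi bits 10 -> 0x08, lo nibble 0111 -> 0x70 */
		printf("%u\n", vsync_offset(0x08, 0x70));	/* prints 39 */
		return 0;
	}
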
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
index 36493ce..98cc147 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
@@ -38,11 +38,12 @@
/* position control register for hardware window 0, 2 ~ 4.*/
#define VIDOSD_A(win) (VIDOSD_BASE + 0x00 + (win) * 16)
#define VIDOSD_B(win) (VIDOSD_BASE + 0x04 + (win) * 16)
-/* size control register for hardware window 0. */
-#define VIDOSD_C_SIZE_W0 (VIDOSD_BASE + 0x08)
-/* alpha control register for hardware window 1 ~ 4. */
-#define VIDOSD_C(win) (VIDOSD_BASE + 0x18 + (win) * 16)
-/* size control register for hardware window 1 ~ 4. */
+/*
+ * size control register for hardware window 0 and alpha control register
+ * for hardware windows 1 ~ 4
+ */
+#define VIDOSD_C(win) (VIDOSD_BASE + 0x08 + (win) * 16)
+/* size control register for hardware windows 1 ~ 2. */
#define VIDOSD_D(win) (VIDOSD_BASE + 0x0C + (win) * 16)
#define VIDWx_BUF_START(win, buf) (VIDW_BUF_START(buf) + (win) * 8)
@@ -50,9 +51,9 @@
#define VIDWx_BUF_SIZE(win, buf) (VIDW_BUF_SIZE(buf) + (win) * 4)
/* color key control register for hardware window 1 ~ 4. */
-#define WKEYCON0_BASE(x) ((WKEYCON0 + 0x140) + (x * 8))
+#define WKEYCON0_BASE(x) ((WKEYCON0 + 0x140) + ((x - 1) * 8))
/* color key value register for hardware window 1 ~ 4. */
-#define WKEYCON1_BASE(x) ((WKEYCON1 + 0x140) + (x * 8))
+#define WKEYCON1_BASE(x) ((WKEYCON1 + 0x140) + ((x - 1) * 8))
/* FIMD has totally five hardware windows. */
#define WINDOWS_NR 5
@@ -109,9 +110,9 @@
#ifdef CONFIG_OF
static const struct of_device_id fimd_driver_dt_match[] = {
- { .compatible = "samsung,exynos4-fimd",
+ { .compatible = "samsung,exynos4210-fimd",
.data = &exynos4_fimd_driver_data },
- { .compatible = "samsung,exynos5-fimd",
+ { .compatible = "samsung,exynos5250-fimd",
.data = &exynos5_fimd_driver_data },
{},
};
@@ -581,7 +582,7 @@
if (win != 3 && win != 4) {
u32 offset = VIDOSD_D(win);
if (win == 0)
- offset = VIDOSD_C_SIZE_W0;
+ offset = VIDOSD_C(win);
val = win_data->ovl_width * win_data->ovl_height;
writel(val, ctx->regs + offset);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_g2d.c b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
index 3b0da03..47a493c 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_g2d.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_g2d.c
@@ -48,8 +48,14 @@
/* registers for base address */
#define G2D_SRC_BASE_ADDR 0x0304
+#define G2D_SRC_COLOR_MODE 0x030C
+#define G2D_SRC_LEFT_TOP 0x0310
+#define G2D_SRC_RIGHT_BOTTOM 0x0314
#define G2D_SRC_PLANE2_BASE_ADDR 0x0318
#define G2D_DST_BASE_ADDR 0x0404
+#define G2D_DST_COLOR_MODE 0x040C
+#define G2D_DST_LEFT_TOP 0x0410
+#define G2D_DST_RIGHT_BOTTOM 0x0414
#define G2D_DST_PLANE2_BASE_ADDR 0x0418
#define G2D_PAT_BASE_ADDR 0x0500
#define G2D_MSK_BASE_ADDR 0x0520
@@ -82,7 +88,7 @@
#define G2D_DMA_LIST_DONE_COUNT_OFFSET 17
/* G2D_DMA_HOLD_CMD */
-#define G2D_USET_HOLD (1 << 2)
+#define G2D_USER_HOLD (1 << 2)
#define G2D_LIST_HOLD (1 << 1)
#define G2D_BITBLT_HOLD (1 << 0)
@@ -91,13 +97,27 @@
#define G2D_START_NHOLT (1 << 1)
#define G2D_START_BITBLT (1 << 0)
+/* buffer color format */
+#define G2D_FMT_XRGB8888 0
+#define G2D_FMT_ARGB8888 1
+#define G2D_FMT_RGB565 2
+#define G2D_FMT_XRGB1555 3
+#define G2D_FMT_ARGB1555 4
+#define G2D_FMT_XRGB4444 5
+#define G2D_FMT_ARGB4444 6
+#define G2D_FMT_PACKED_RGB888 7
+#define G2D_FMT_A8 11
+#define G2D_FMT_L8 12
+
+/* buffer valid length */
+#define G2D_LEN_MIN 1
+#define G2D_LEN_MAX 8000
+
#define G2D_CMDLIST_SIZE (PAGE_SIZE / 4)
#define G2D_CMDLIST_NUM 64
#define G2D_CMDLIST_POOL_SIZE (G2D_CMDLIST_SIZE * G2D_CMDLIST_NUM)
#define G2D_CMDLIST_DATA_NUM (G2D_CMDLIST_SIZE / sizeof(u32) - 2)
-#define MAX_BUF_ADDR_NR 6
-
/* maximum buffer pool size of userptr is 64MB as default */
#define MAX_POOL (64 * 1024 * 1024)
@@ -106,6 +126,17 @@
BUF_TYPE_USERPTR,
};
+enum g2d_reg_type {
+ REG_TYPE_NONE = -1,
+ REG_TYPE_SRC,
+ REG_TYPE_SRC_PLANE2,
+ REG_TYPE_DST,
+ REG_TYPE_DST_PLANE2,
+ REG_TYPE_PAT,
+ REG_TYPE_MSK,
+ MAX_REG_TYPE_NR
+};
+
/* cmdlist data structure */
struct g2d_cmdlist {
u32 head;
@@ -113,6 +144,42 @@
u32 last; /* last data offset */
};
+/*
+ * A structure of buffer description
+ *
+ * @format: color format
+ * @left_x: the x coordinates of left top corner
+ * @top_y: the y coordinates of left top corner
+ * @right_x: the x coordinates of right bottom corner
+ * @bottom_y: the y coordinates of right bottom corner
+ *
+ */
+struct g2d_buf_desc {
+ unsigned int format;
+ unsigned int left_x;
+ unsigned int top_y;
+ unsigned int right_x;
+ unsigned int bottom_y;
+};
+
+/*
+ * A structure of buffer information
+ *
+ * @map_nr: manages the number of mapped buffers
+ * @reg_types: stores register type in the order of requested command
+ * @handles: stores buffer handle in its reg_type position
+ * @types: stores buffer type in its reg_type position
+ * @descs: stores buffer description in its reg_type position
+ *
+ */
+struct g2d_buf_info {
+ unsigned int map_nr;
+ enum g2d_reg_type reg_types[MAX_REG_TYPE_NR];
+ unsigned long handles[MAX_REG_TYPE_NR];
+ unsigned int types[MAX_REG_TYPE_NR];
+ struct g2d_buf_desc descs[MAX_REG_TYPE_NR];
+};
+
struct drm_exynos_pending_g2d_event {
struct drm_pending_event base;
struct drm_exynos_g2d_event event;
@@ -131,14 +198,11 @@
bool in_pool;
bool out_of_list;
};
-
struct g2d_cmdlist_node {
struct list_head list;
struct g2d_cmdlist *cmdlist;
- unsigned int map_nr;
- unsigned long handles[MAX_BUF_ADDR_NR];
- unsigned int obj_type[MAX_BUF_ADDR_NR];
dma_addr_t dma_addr;
+ struct g2d_buf_info buf_info;
struct drm_exynos_pending_g2d_event *event;
};
@@ -188,6 +252,7 @@
struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
int nr;
int ret;
+ struct g2d_buf_info *buf_info;
init_dma_attrs(&g2d->cmdlist_dma_attrs);
dma_set_attr(DMA_ATTR_WRITE_COMBINE, &g2d->cmdlist_dma_attrs);
@@ -209,11 +274,17 @@
}
for (nr = 0; nr < G2D_CMDLIST_NUM; nr++) {
+ unsigned int i;
+
node[nr].cmdlist =
g2d->cmdlist_pool_virt + nr * G2D_CMDLIST_SIZE;
node[nr].dma_addr =
g2d->cmdlist_pool + nr * G2D_CMDLIST_SIZE;
+ buf_info = &node[nr].buf_info;
+ for (i = 0; i < MAX_REG_TYPE_NR; i++)
+ buf_info->reg_types[i] = REG_TYPE_NONE;
+
list_add_tail(&node[nr].list, &g2d->free_cmdlist);
}
@@ -450,7 +521,7 @@
DMA_BIDIRECTIONAL);
if (ret < 0) {
DRM_ERROR("failed to map sgt with dma region.\n");
- goto err_free_sgt;
+ goto err_sg_free_table;
}
g2d_userptr->dma_addr = sgt->sgl[0].dma_address;
@@ -467,8 +538,10 @@
return &g2d_userptr->dma_addr;
-err_free_sgt:
+err_sg_free_table:
sg_free_table(sgt);
+
+err_free_sgt:
kfree(sgt);
sgt = NULL;
@@ -506,36 +579,172 @@
g2d->current_pool = 0;
}
+static enum g2d_reg_type g2d_get_reg_type(int reg_offset)
+{
+ enum g2d_reg_type reg_type;
+
+ switch (reg_offset) {
+ case G2D_SRC_BASE_ADDR:
+ case G2D_SRC_COLOR_MODE:
+ case G2D_SRC_LEFT_TOP:
+ case G2D_SRC_RIGHT_BOTTOM:
+ reg_type = REG_TYPE_SRC;
+ break;
+ case G2D_SRC_PLANE2_BASE_ADDR:
+ reg_type = REG_TYPE_SRC_PLANE2;
+ break;
+ case G2D_DST_BASE_ADDR:
+ case G2D_DST_COLOR_MODE:
+ case G2D_DST_LEFT_TOP:
+ case G2D_DST_RIGHT_BOTTOM:
+ reg_type = REG_TYPE_DST;
+ break;
+ case G2D_DST_PLANE2_BASE_ADDR:
+ reg_type = REG_TYPE_DST_PLANE2;
+ break;
+ case G2D_PAT_BASE_ADDR:
+ reg_type = REG_TYPE_PAT;
+ break;
+ case G2D_MSK_BASE_ADDR:
+ reg_type = REG_TYPE_MSK;
+ break;
+ default:
+ reg_type = REG_TYPE_NONE;
+ DRM_ERROR("Unknown register offset![%d]\n", reg_offset);
+ break;
+ };
+
+ return reg_type;
+}
+
+static unsigned long g2d_get_buf_bpp(unsigned int format)
+{
+ unsigned long bpp;
+
+ switch (format) {
+ case G2D_FMT_XRGB8888:
+ case G2D_FMT_ARGB8888:
+ bpp = 4;
+ break;
+ case G2D_FMT_RGB565:
+ case G2D_FMT_XRGB1555:
+ case G2D_FMT_ARGB1555:
+ case G2D_FMT_XRGB4444:
+ case G2D_FMT_ARGB4444:
+ bpp = 2;
+ break;
+ case G2D_FMT_PACKED_RGB888:
+ bpp = 3;
+ break;
+ default:
+ bpp = 1;
+ break;
+ }
+
+ return bpp;
+}
+
+static bool g2d_check_buf_desc_is_valid(struct g2d_buf_desc *buf_desc,
+ enum g2d_reg_type reg_type,
+ unsigned long size)
+{
+ unsigned int width, height;
+ unsigned long area;
+
+ /*
+ * check source and destination buffers only.
+ * so the others are always valid.
+ */
+ if (reg_type != REG_TYPE_SRC && reg_type != REG_TYPE_DST)
+ return true;
+
+ width = buf_desc->right_x - buf_desc->left_x;
+ if (width < G2D_LEN_MIN || width > G2D_LEN_MAX) {
+ DRM_ERROR("width[%u] is out of range!\n", width);
+ return false;
+ }
+
+ height = buf_desc->bottom_y - buf_desc->top_y;
+ if (height < G2D_LEN_MIN || height > G2D_LEN_MAX) {
+ DRM_ERROR("height[%u] is out of range!\n", height);
+ return false;
+ }
+
+ area = (unsigned long)width * (unsigned long)height *
+ g2d_get_buf_bpp(buf_desc->format);
+ if (area > size) {
+ DRM_ERROR("area[%lu] is out of range[%lu]!\n", area, size);
+ return false;
+ }
+
+ return true;
+}
+
static int g2d_map_cmdlist_gem(struct g2d_data *g2d,
struct g2d_cmdlist_node *node,
struct drm_device *drm_dev,
struct drm_file *file)
{
struct g2d_cmdlist *cmdlist = node->cmdlist;
+ struct g2d_buf_info *buf_info = &node->buf_info;
int offset;
+ int ret;
int i;
- for (i = 0; i < node->map_nr; i++) {
+ for (i = 0; i < buf_info->map_nr; i++) {
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ int reg_pos;
unsigned long handle;
dma_addr_t *addr;
- offset = cmdlist->last - (i * 2 + 1);
- handle = cmdlist->data[offset];
+ reg_pos = cmdlist->last - 2 * (i + 1);
- if (node->obj_type[i] == BUF_TYPE_GEM) {
+ offset = cmdlist->data[reg_pos];
+ handle = cmdlist->data[reg_pos + 1];
+
+ reg_type = g2d_get_reg_type(offset);
+ if (reg_type == REG_TYPE_NONE) {
+ ret = -EFAULT;
+ goto err;
+ }
+
+ buf_desc = &buf_info->descs[reg_type];
+
+ if (buf_info->types[reg_type] == BUF_TYPE_GEM) {
+ unsigned long size;
+
+ size = exynos_drm_gem_get_size(drm_dev, handle, file);
+ if (!size) {
+ ret = -EFAULT;
+ goto err;
+ }
+
+ if (!g2d_check_buf_desc_is_valid(buf_desc, reg_type,
+ size)) {
+ ret = -EFAULT;
+ goto err;
+ }
+
addr = exynos_drm_gem_get_dma_addr(drm_dev, handle,
file);
if (IS_ERR(addr)) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
}
} else {
struct drm_exynos_g2d_userptr g2d_userptr;
if (copy_from_user(&g2d_userptr, (void __user *)handle,
sizeof(struct drm_exynos_g2d_userptr))) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
+ }
+
+ if (!g2d_check_buf_desc_is_valid(buf_desc, reg_type,
+ g2d_userptr.size)) {
+ ret = -EFAULT;
+ goto err;
}
addr = g2d_userptr_get_dma_addr(drm_dev,
@@ -544,16 +753,21 @@
file,
&handle);
if (IS_ERR(addr)) {
- node->map_nr = i;
- return -EFAULT;
+ ret = -EFAULT;
+ goto err;
}
}
- cmdlist->data[offset] = *addr;
- node->handles[i] = handle;
+ cmdlist->data[reg_pos + 1] = *addr;
+ buf_info->reg_types[i] = reg_type;
+ buf_info->handles[reg_type] = handle;
}
return 0;
+
+err:
+ buf_info->map_nr = i;
+ return ret;
}
static void g2d_unmap_cmdlist_gem(struct g2d_data *g2d,
@@ -561,22 +775,33 @@
struct drm_file *filp)
{
struct exynos_drm_subdrv *subdrv = &g2d->subdrv;
+ struct g2d_buf_info *buf_info = &node->buf_info;
int i;
- for (i = 0; i < node->map_nr; i++) {
- unsigned long handle = node->handles[i];
+ for (i = 0; i < buf_info->map_nr; i++) {
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ unsigned long handle;
- if (node->obj_type[i] == BUF_TYPE_GEM)
+ reg_type = buf_info->reg_types[i];
+
+ buf_desc = &buf_info->descs[reg_type];
+ handle = buf_info->handles[reg_type];
+
+ if (buf_info->types[reg_type] == BUF_TYPE_GEM)
exynos_drm_gem_put_dma_addr(subdrv->drm_dev, handle,
filp);
else
g2d_userptr_put_dma_addr(subdrv->drm_dev, handle,
false);
- node->handles[i] = 0;
+ buf_info->reg_types[i] = REG_TYPE_NONE;
+ buf_info->handles[reg_type] = 0;
+ buf_info->types[reg_type] = 0;
+ memset(buf_desc, 0x00, sizeof(*buf_desc));
}
- node->map_nr = 0;
+ buf_info->map_nr = 0;
}
static void g2d_dma_start(struct g2d_data *g2d,
@@ -589,10 +814,6 @@
pm_runtime_get_sync(g2d->dev);
clk_enable(g2d->gate_clk);
- /* interrupt enable */
- writel_relaxed(G2D_INTEN_ACF | G2D_INTEN_UCF | G2D_INTEN_GCF,
- g2d->regs + G2D_INTEN);
-
writel_relaxed(node->dma_addr, g2d->regs + G2D_DMA_SFR_BASE_ADDR);
writel_relaxed(G2D_DMA_START, g2d->regs + G2D_DMA_COMMAND);
}
@@ -643,7 +864,6 @@
struct g2d_data *g2d = container_of(work, struct g2d_data,
runqueue_work);
-
mutex_lock(&g2d->runqueue_mutex);
clk_disable(g2d->gate_clk);
pm_runtime_put_sync(g2d->dev);
@@ -724,20 +944,14 @@
int i;
for (i = 0; i < nr; i++) {
+ struct g2d_buf_info *buf_info = &node->buf_info;
+ struct g2d_buf_desc *buf_desc;
+ enum g2d_reg_type reg_type;
+ unsigned long value;
+
index = cmdlist->last - 2 * (i + 1);
- if (for_addr) {
- /* check userptr buffer type. */
- reg_offset = (cmdlist->data[index] &
- ~0x7fffffff) >> 31;
- if (reg_offset) {
- node->obj_type[i] = BUF_TYPE_USERPTR;
- cmdlist->data[index] &= ~G2D_BUF_USERPTR;
- }
- }
-
reg_offset = cmdlist->data[index] & ~0xfffff000;
-
if (reg_offset < G2D_VALID_START || reg_offset > G2D_VALID_END)
goto err;
if (reg_offset % 4)
@@ -753,8 +967,60 @@
if (!for_addr)
goto err;
- if (node->obj_type[i] != BUF_TYPE_USERPTR)
- node->obj_type[i] = BUF_TYPE_GEM;
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ /* check userptr buffer type. */
+ if ((cmdlist->data[index] & ~0x7fffffff) >> 31) {
+ buf_info->types[reg_type] = BUF_TYPE_USERPTR;
+ cmdlist->data[index] &= ~G2D_BUF_USERPTR;
+ } else
+ buf_info->types[reg_type] = BUF_TYPE_GEM;
+ break;
+ case G2D_SRC_COLOR_MODE:
+ case G2D_DST_COLOR_MODE:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->format = value & 0xf;
+ break;
+ case G2D_SRC_LEFT_TOP:
+ case G2D_DST_LEFT_TOP:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->left_x = value & 0x1fff;
+ buf_desc->top_y = (value & 0x1fff0000) >> 16;
+ break;
+ case G2D_SRC_RIGHT_BOTTOM:
+ case G2D_DST_RIGHT_BOTTOM:
+ if (for_addr)
+ goto err;
+
+ reg_type = g2d_get_reg_type(reg_offset);
+ if (reg_type == REG_TYPE_NONE)
+ goto err;
+
+ buf_desc = &buf_info->descs[reg_type];
+ value = cmdlist->data[index + 1];
+
+ buf_desc->right_x = value & 0x1fff;
+ buf_desc->bottom_y = (value & 0x1fff0000) >> 16;
break;
default:
if (for_addr)
@@ -860,9 +1126,23 @@
cmdlist->data[cmdlist->last++] = G2D_SRC_BASE_ADDR;
cmdlist->data[cmdlist->last++] = 0;
+ /*
+ * 'LIST_HOLD' command should be set to the DMA_HOLD_CMD_REG
+ * and GCF bit should be set to INTEN register if user wants
+ * G2D interrupt event once current command list execution is
+ * finished.
+ * Otherwise only ACF bit should be set to INTEN register so
+ * that one interrupt occurs after all command lists
+ * have been completed.
+ */
if (node->event) {
+ cmdlist->data[cmdlist->last++] = G2D_INTEN;
+ cmdlist->data[cmdlist->last++] = G2D_INTEN_ACF | G2D_INTEN_GCF;
cmdlist->data[cmdlist->last++] = G2D_DMA_HOLD_CMD;
cmdlist->data[cmdlist->last++] = G2D_LIST_HOLD;
+ } else {
+ cmdlist->data[cmdlist->last++] = G2D_INTEN;
+ cmdlist->data[cmdlist->last++] = G2D_INTEN_ACF;
}
/* Check size of cmdlist: last 2 is about G2D_BITBLT_START */
@@ -887,7 +1167,7 @@
if (ret < 0)
goto err_free_event;
- node->map_nr = req->cmd_buf_nr;
+ node->buf_info.map_nr = req->cmd_buf_nr;
if (req->cmd_buf_nr) {
struct drm_exynos_g2d_cmd *cmd_buf;
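
The command-list validation added above ultimately checks that the rectangle a client programs fits inside the GEM or userptr buffer backing it. A rough, hedged sketch of that area check with a hypothetical descriptor type (the real code additionally tracks register types and pixel formats):

	#include <stdbool.h>

	#define LEN_MIN 1
	#define LEN_MAX 8000	/* mirrors the G2D_LEN_MIN/MAX limits above */

	struct rect_desc {
		unsigned int left_x, top_y, right_x, bottom_y;
		unsigned long bytes_per_pixel;
	};

	/* Accept a rectangle only if its dimensions are in range and the
	 * area it covers, in bytes, fits in the backing buffer. */
	static bool rect_fits_buffer(const struct rect_desc *d,
				     unsigned long buf_size)
	{
		unsigned long width, height, area;

		width = d->right_x - d->left_x;
		if (width < LEN_MIN || width > LEN_MAX)
			return false;

		height = d->bottom_y - d->top_y;
		if (height < LEN_MIN || height > LEN_MAX)
			return false;

		area = width * height * d->bytes_per_pixel;
		return area <= buf_size;
	}
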
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 67e17ce..0e6fe00 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -164,6 +164,27 @@
exynos_gem_obj = NULL;
}
+unsigned long exynos_drm_gem_get_size(struct drm_device *dev,
+ unsigned int gem_handle,
+ struct drm_file *file_priv)
+{
+ struct exynos_drm_gem_obj *exynos_gem_obj;
+ struct drm_gem_object *obj;
+
+ obj = drm_gem_object_lookup(dev, file_priv, gem_handle);
+ if (!obj) {
+ DRM_ERROR("failed to lookup gem object.\n");
+ return 0;
+ }
+
+ exynos_gem_obj = to_exynos_gem_obj(obj);
+
+ drm_gem_object_unreference_unlocked(obj);
+
+ return exynos_gem_obj->buffer->size;
+}
+
+
struct exynos_drm_gem_obj *exynos_drm_gem_init(struct drm_device *dev,
unsigned long size)
{
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index 35ebac4..468766b 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -130,6 +130,11 @@
int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
+/* get buffer size to gem handle. */
+unsigned long exynos_drm_gem_get_size(struct drm_device *dev,
+ unsigned int gem_handle,
+ struct drm_file *file_priv);
+
/* initialize gem object. */
int exynos_drm_gem_init_object(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_vidi.c b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
index 13ccbd4..9504b0c 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_vidi.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_vidi.c
@@ -117,13 +117,12 @@
}
edid_len = (1 + ctx->raw_edid->extensions) * EDID_LENGTH;
- edid = kzalloc(edid_len, GFP_KERNEL);
+ edid = kmemdup(ctx->raw_edid, edid_len, GFP_KERNEL);
if (!edid) {
DRM_DEBUG_KMS("failed to allocate edid\n");
return ERR_PTR(-ENOMEM);
}
- memcpy(edid, ctx->raw_edid, edid_len);
return edid;
}
@@ -563,12 +562,11 @@
return -EINVAL;
}
edid_len = (1 + raw_edid->extensions) * EDID_LENGTH;
- ctx->raw_edid = kzalloc(edid_len, GFP_KERNEL);
+ ctx->raw_edid = kmemdup(raw_edid, edid_len, GFP_KERNEL);
if (!ctx->raw_edid) {
DRM_DEBUG_KMS("failed to allocate raw_edid.\n");
return -ENOMEM;
}
- memcpy(ctx->raw_edid, raw_edid, edid_len);
} else {
/*
* with connection = 0, free raw_edid
diff --git a/drivers/gpu/drm/exynos/exynos_mixer.c b/drivers/gpu/drm/exynos/exynos_mixer.c
index e919aba..2f4f72f 100644
--- a/drivers/gpu/drm/exynos/exynos_mixer.c
+++ b/drivers/gpu/drm/exynos/exynos_mixer.c
@@ -818,7 +818,7 @@
mixer_ctx->win_data[win].enabled = false;
}
-int mixer_check_timing(void *ctx, struct fb_videomode *timing)
+static int mixer_check_timing(void *ctx, struct fb_videomode *timing)
{
struct mixer_context *mixer_ctx = ctx;
u32 w, h;
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index aae3148..7299ea4 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -103,7 +103,7 @@
static void
describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
{
- seq_printf(m, "%p: %s%s %8zdKiB %02x %02x %d %d %d%s%s%s",
+ seq_printf(m, "%pK: %s%s %8zdKiB %02x %02x %d %d %d%s%s%s",
&obj->base,
get_pin_flag(obj),
get_tiling_flag(obj),
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 0a8eceb..e9b5789 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -125,6 +125,11 @@
"Enable Haswell and ValleyView Support. "
"(default: false)");
+int i915_disable_power_well __read_mostly = 0;
+module_param_named(disable_power_well, i915_disable_power_well, int, 0600);
+MODULE_PARM_DESC(disable_power_well,
+ "Disable the power well when possible (default: false)");
+
static struct drm_driver driver;
extern int intel_agp_enabled;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e95337c..01769e2 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1398,6 +1398,7 @@
extern bool i915_enable_hangcheck __read_mostly;
extern int i915_enable_ppgtt __read_mostly;
extern unsigned int i915_preliminary_hw_support __read_mostly;
+extern int i915_disable_power_well __read_mostly;
extern int i915_suspend(struct drm_device *dev, pm_message_t state);
extern int i915_resume(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 2f2daeb..3b11ab0 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -732,6 +732,8 @@
int count)
{
int i;
+ int relocs_total = 0;
+ int relocs_max = INT_MAX / sizeof(struct drm_i915_gem_relocation_entry);
for (i = 0; i < count; i++) {
char __user *ptr = (char __user *)(uintptr_t)exec[i].relocs_ptr;
@@ -740,10 +742,13 @@
if (exec[i].flags & __EXEC_OBJECT_UNKNOWN_FLAGS)
return -EINVAL;
- /* First check for malicious input causing overflow */
- if (exec[i].relocation_count >
- INT_MAX / sizeof(struct drm_i915_gem_relocation_entry))
+ /* First check for malicious input causing overflow in
+ * the worst case where we need to allocate the entire
+ * relocation tree as a single array.
+ */
+ if (exec[i].relocation_count > relocs_max - relocs_total)
return -EINVAL;
+ relocs_total += exec[i].relocation_count;
length = exec[i].relocation_count *
sizeof(struct drm_i915_gem_relocation_entry);
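The execbuffer hunk tightens the overflow check: instead of validating each relocation_count only against INT_MAX, it keeps a running total so that the combined size of all relocation arrays cannot overflow either. A standalone sketch of that accumulation pattern (the entry layout and the counts below are hypothetical, not the i915 structures):

#include <limits.h>
#include <stdio.h>

struct reloc_entry { unsigned long long offset, delta; };   /* stand-in entry */

int main(void)
{
        /* per-buffer counts as they might arrive from userspace */
        unsigned int counts[] = { 1000, 2000, UINT_MAX / 16 };
        int relocs_total = 0;
        int relocs_max = INT_MAX / sizeof(struct reloc_entry);

        for (size_t i = 0; i < sizeof(counts) / sizeof(counts[0]); i++) {
                /* reject if adding this buffer's relocations would overflow
                 * the worst-case single array covering every buffer */
                if (counts[i] > (unsigned int)(relocs_max - relocs_total)) {
                        printf("buffer %zu rejected (-EINVAL)\n", i);
                        return 1;
                }
                relocs_total += counts[i];
        }
        printf("total relocations: %d\n", relocs_total);
        return 0;
}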
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 287b42c..b20d501 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -5771,6 +5771,11 @@
num_connectors++;
}
+ if (is_cpu_edp)
+ intel_crtc->cpu_transcoder = TRANSCODER_EDP;
+ else
+ intel_crtc->cpu_transcoder = pipe;
+
/* We are not sure yet this won't happen. */
WARN(!HAS_PCH_LPT(dev), "Unexpected PCH type %d\n",
INTEL_PCH_TYPE(dev));
@@ -5837,11 +5842,6 @@
int pipe = intel_crtc->pipe;
int ret;
- if (IS_HASWELL(dev) && intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))
- intel_crtc->cpu_transcoder = TRANSCODER_EDP;
- else
- intel_crtc->cpu_transcoder = pipe;
-
drm_vblank_pre_modeset(dev, pipe);
ret = dev_priv->display.crtc_mode_set(crtc, mode, adjusted_mode,
diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 6f728e5..d7d4afe 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -820,6 +820,7 @@
struct intel_link_m_n m_n;
int pipe = intel_crtc->pipe;
enum transcoder cpu_transcoder = intel_crtc->cpu_transcoder;
+ int target_clock;
/*
* Find the lane count in the intel_encoder private
@@ -835,13 +836,22 @@
}
}
+ target_clock = mode->clock;
+ for_each_encoder_on_crtc(dev, crtc, intel_encoder) {
+ if (intel_encoder->type == INTEL_OUTPUT_EDP) {
+ target_clock = intel_edp_target_clock(intel_encoder,
+ mode);
+ break;
+ }
+ }
+
/*
* Compute the GMCH and Link ratios. The '3' here is
* the number of bytes_per_pixel post-LUT, which we always
* set up for 8-bits of R/G/B, or 3 bytes total.
*/
intel_link_compute_m_n(intel_crtc->bpp, lane_count,
- mode->clock, adjusted_mode->clock, &m_n);
+ target_clock, adjusted_mode->clock, &m_n);
if (IS_HASWELL(dev)) {
I915_WRITE(PIPE_DATA_M1(cpu_transcoder),
@@ -1930,7 +1940,7 @@
for (i = 0; i < intel_dp->lane_count; i++)
if ((intel_dp->train_set[i] & DP_TRAIN_MAX_SWING_REACHED) == 0)
break;
- if (i == intel_dp->lane_count && voltage_tries == 5) {
+ if (i == intel_dp->lane_count) {
++loop_tries;
if (loop_tries == 5) {
DRM_DEBUG_KMS("too many full retries, give up\n");
diff --git a/drivers/gpu/drm/i915/intel_i2c.c b/drivers/gpu/drm/i915/intel_i2c.c
index acf8aec..ef4744e 100644
--- a/drivers/gpu/drm/i915/intel_i2c.c
+++ b/drivers/gpu/drm/i915/intel_i2c.c
@@ -203,7 +203,13 @@
algo->data = bus;
}
-#define HAS_GMBUS_IRQ(dev) (INTEL_INFO(dev)->gen >= 4)
+/*
+ * gmbus on gen4 seems to be able to generate legacy interrupts even when in MSI
+ * mode. This results in spurious interrupt warnings if the legacy irq no. is
+ * shared with another device. The kernel then disables that interrupt source
+ * and so prevents the other device from working properly.
+ */
+#define HAS_GMBUS_IRQ(dev) (INTEL_INFO(dev)->gen >= 5)
static int
gmbus_wait_hw_status(struct drm_i915_private *dev_priv,
u32 gmbus2_status,
@@ -214,6 +220,9 @@
u32 gmbus2 = 0;
DEFINE_WAIT(wait);
+ if (!HAS_GMBUS_IRQ(dev_priv->dev))
+ gmbus4_irq_en = 0;
+
/* Important: The hw handles only the first bit, so set only one! Since
* we also need to check for NAKs besides the hw ready/idle signal, we
* need to wake up periodically and check that ourselves. */
diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
index a3730e0..bee8cb6 100644
--- a/drivers/gpu/drm/i915/intel_panel.c
+++ b/drivers/gpu/drm/i915/intel_panel.c
@@ -321,9 +321,6 @@
if (dev_priv->backlight_level == 0)
dev_priv->backlight_level = intel_panel_get_max_backlight(dev);
- dev_priv->backlight_enabled = true;
- intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
-
if (INTEL_INFO(dev)->gen >= 4) {
uint32_t reg, tmp;
@@ -359,12 +356,12 @@
}
set_level:
- /* Check the current backlight level and try to set again if it's zero.
- * On some machines, BLC_PWM_CPU_CTL is cleared to zero automatically
- * when BLC_PWM_CPU_CTL2 and BLC_PWM_PCH_CTL1 are written.
+ /* Call below after setting BLC_PWM_CPU_CTL2 and BLC_PWM_PCH_CTL1.
+ * BLC_PWM_CPU_CTL may be cleared to zero automatically when these
+ * registers are set.
*/
- if (!intel_panel_get_backlight(dev))
- intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
+ dev_priv->backlight_enabled = true;
+ intel_panel_actually_set_backlight(dev, dev_priv->backlight_level);
}
static void intel_panel_init_backlight(struct drm_device *dev)
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index a1794c6..adca007 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4079,6 +4079,9 @@
if (!IS_HASWELL(dev))
return;
+ if (!i915_disable_power_well && !enable)
+ return;
+
tmp = I915_READ(HSW_PWR_WELL_DRIVER);
is_enabled = tmp & HSW_PWR_WELL_STATE;
enable_requested = tmp & HSW_PWR_WELL_ENABLE;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index a274b99..fe22bb7 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -382,19 +382,19 @@
m = n = p = 0;
vcomax = 800000;
vcomin = 400000;
- pllreffreq = 3333;
+ pllreffreq = 33333;
delta = 0xffffffff;
permitteddelta = clock * 5 / 1000;
- for (testp = 16; testp > 0; testp--) {
+ for (testp = 16; testp > 0; testp >>= 1) {
if (clock * testp > vcomax)
continue;
if (clock * testp < vcomin)
continue;
for (testm = 1; testm < 33; testm++) {
- for (testn = 1; testn < 257; testn++) {
+ for (testn = 17; testn < 257; testn++) {
computed = (pllreffreq * testn) /
(testm * testp);
if (computed > clock)
@@ -404,11 +404,11 @@
if (tmpdelta < delta) {
delta = tmpdelta;
n = testn - 1;
- m = (testm - 1) | ((n >> 1) & 0x80);
+ m = (testm - 1);
p = testp - 1;
}
if ((clock * testp) >= 600000)
- p |= 80;
+ p |= 0x80;
}
}
}
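The mgag200 hunk corrects the PLL reference frequency (33333, not 3333), walks the post-divider through powers of two by halving testp, starts testn at 17, and uses the hex constant 0x80 rather than the decimal literal 80. A simplified, standalone version of the corrected search loop is sketched below; the target clock is hypothetical, the units follow the driver, and the 0x80 flag is applied once to the chosen divider rather than inline as the driver does:

#include <stdio.h>

int main(void)
{
        const unsigned int vcomax = 800000, vcomin = 400000;
        const unsigned int pllreffreq = 33333;     /* was mistyped as 3333 */
        unsigned int clock = 148500;               /* hypothetical target clock */
        unsigned int delta = 0xffffffff;
        unsigned int m = 0, n = 0, p = 0;

        /* the post-divider is a power of two, so halve testp instead of decrementing */
        for (unsigned int testp = 16; testp > 0; testp >>= 1) {
                if (clock * testp > vcomax || clock * testp < vcomin)
                        continue;
                for (unsigned int testm = 1; testm < 33; testm++) {
                        for (unsigned int testn = 17; testn < 257; testn++) {
                                unsigned int computed = (pllreffreq * testn) /
                                                        (testm * testp);
                                unsigned int tmpdelta = computed > clock ?
                                                computed - clock : clock - computed;
                                if (tmpdelta < delta) {
                                        delta = tmpdelta;
                                        n = testn - 1;
                                        m = testm - 1;
                                        p = testp - 1;
                                }
                        }
                }
        }
        /* high bit of P is set when clock * testp reaches 600000; note the
         * driver now uses 0x80 here, not the decimal literal 80 */
        if (clock * (p + 1) >= 600000)
                p |= 0x80;

        printf("n=%u m=%u p=0x%02x delta=%u\n", n, m, p, delta);
        return 0;
}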
diff --git a/drivers/gpu/drm/nouveau/core/core/object.c b/drivers/gpu/drm/nouveau/core/core/object.c
index 0daab62..3b2e7b63 100644
--- a/drivers/gpu/drm/nouveau/core/core/object.c
+++ b/drivers/gpu/drm/nouveau/core/core/object.c
@@ -278,7 +278,6 @@
struct nouveau_object *parent = NULL;
struct nouveau_object *namedb = NULL;
struct nouveau_handle *handle = NULL;
- int ret = -EINVAL;
parent = nouveau_handle_ref(client, _parent);
if (!parent)
@@ -295,7 +294,7 @@
}
nouveau_object_ref(NULL, &parent);
- return ret;
+ return handle ? 0 : -EINVAL;
}
int
diff --git a/drivers/gpu/drm/nouveau/core/include/subdev/therm.h b/drivers/gpu/drm/nouveau/core/include/subdev/therm.h
index 6b17b61..0b20fc0 100644
--- a/drivers/gpu/drm/nouveau/core/include/subdev/therm.h
+++ b/drivers/gpu/drm/nouveau/core/include/subdev/therm.h
@@ -4,7 +4,7 @@
#include <core/device.h>
#include <core/subdev.h>
-enum nouveau_therm_mode {
+enum nouveau_therm_fan_mode {
NOUVEAU_THERM_CTRL_NONE = 0,
NOUVEAU_THERM_CTRL_MANUAL = 1,
NOUVEAU_THERM_CTRL_AUTO = 2,
diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/base.c b/drivers/gpu/drm/nouveau/core/subdev/therm/base.c
index f794dc8..a00a5a7 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/therm/base.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/therm/base.c
@@ -134,7 +134,7 @@
}
int
-nouveau_therm_mode(struct nouveau_therm *therm, int mode)
+nouveau_therm_fan_mode(struct nouveau_therm *therm, int mode)
{
struct nouveau_therm_priv *priv = (void *)therm;
struct nouveau_device *device = nv_device(therm);
@@ -149,10 +149,15 @@
(mode != NOUVEAU_THERM_CTRL_NONE && device->card_type >= NV_C0))
return -EINVAL;
+ /* do not allow automatic fan management if the thermal sensor is
+ * not available */
+ if (priv->mode == 2 && therm->temp_get(therm) < 0)
+ return -EINVAL;
+
if (priv->mode == mode)
return 0;
- nv_info(therm, "Thermal management: %s\n", name[mode]);
+ nv_info(therm, "fan management: %s\n", name[mode]);
nouveau_therm_update(therm, mode);
return 0;
}
@@ -213,7 +218,7 @@
priv->fan->bios.max_duty = value;
return 0;
case NOUVEAU_THERM_ATTR_FAN_MODE:
- return nouveau_therm_mode(therm, value);
+ return nouveau_therm_fan_mode(therm, value);
case NOUVEAU_THERM_ATTR_THRS_FAN_BOOST:
priv->bios_sensor.thrs_fan_boost.temp = value;
priv->sensor.program_alarms(therm);
@@ -263,7 +268,7 @@
return ret;
if (priv->suspend >= 0)
- nouveau_therm_mode(therm, priv->mode);
+ nouveau_therm_fan_mode(therm, priv->mode);
priv->sensor.program_alarms(therm);
return 0;
}
@@ -313,11 +318,12 @@
int
nouveau_therm_preinit(struct nouveau_therm *therm)
{
- nouveau_therm_ic_ctor(therm);
nouveau_therm_sensor_ctor(therm);
+ nouveau_therm_ic_ctor(therm);
nouveau_therm_fan_ctor(therm);
- nouveau_therm_mode(therm, NOUVEAU_THERM_CTRL_NONE);
+ nouveau_therm_fan_mode(therm, NOUVEAU_THERM_CTRL_NONE);
+ nouveau_therm_sensor_preinit(therm);
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/ic.c b/drivers/gpu/drm/nouveau/core/subdev/therm/ic.c
index e24090b..8b3adec 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/therm/ic.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/therm/ic.c
@@ -32,6 +32,7 @@
struct i2c_board_info *info)
{
struct nouveau_therm_priv *priv = (void *)nouveau_therm(i2c);
+ struct nvbios_therm_sensor *sensor = &priv->bios_sensor;
struct i2c_client *client;
request_module("%s%s", I2C_MODULE_PREFIX, info->type);
@@ -46,8 +47,9 @@
}
nv_info(priv,
- "Found an %s at address 0x%x (controlled by lm_sensors)\n",
- info->type, info->addr);
+ "Found an %s at address 0x%x (controlled by lm_sensors, "
+ "temp offset %+i C)\n",
+ info->type, info->addr, sensor->offset_constant);
priv->ic = client;
return true;
diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/nv40.c b/drivers/gpu/drm/nouveau/core/subdev/therm/nv40.c
index 0f5363e..a70d1b7 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/therm/nv40.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/therm/nv40.c
@@ -29,54 +29,83 @@
struct nouveau_therm_priv base;
};
-static int
-nv40_sensor_setup(struct nouveau_therm *therm)
+enum nv40_sensor_style { INVALID_STYLE = -1, OLD_STYLE = 0, NEW_STYLE = 1 };
+
+static enum nv40_sensor_style
+nv40_sensor_style(struct nouveau_therm *therm)
{
struct nouveau_device *device = nv_device(therm);
+ switch (device->chipset) {
+ case 0x43:
+ case 0x44:
+ case 0x4a:
+ case 0x47:
+ return OLD_STYLE;
+
+ case 0x46:
+ case 0x49:
+ case 0x4b:
+ case 0x4e:
+ case 0x4c:
+ case 0x67:
+ case 0x68:
+ case 0x63:
+ return NEW_STYLE;
+ default:
+ return INVALID_STYLE;
+ }
+}
+
+static int
+nv40_sensor_setup(struct nouveau_therm *therm)
+{
+ enum nv40_sensor_style style = nv40_sensor_style(therm);
+
/* enable ADC readout and disable the ALARM threshold */
- if (device->chipset >= 0x46) {
+ if (style == NEW_STYLE) {
nv_mask(therm, 0x15b8, 0x80000000, 0);
nv_wr32(therm, 0x15b0, 0x80003fff);
- mdelay(10); /* wait for the temperature to stabilize */
+ mdelay(20); /* wait for the temperature to stabilize */
return nv_rd32(therm, 0x15b4) & 0x3fff;
- } else {
+ } else if (style == OLD_STYLE) {
nv_wr32(therm, 0x15b0, 0xff);
+ mdelay(20); /* wait for the temperature to stabilize */
return nv_rd32(therm, 0x15b4) & 0xff;
- }
+ } else
+ return -ENODEV;
}
static int
nv40_temp_get(struct nouveau_therm *therm)
{
struct nouveau_therm_priv *priv = (void *)therm;
- struct nouveau_device *device = nv_device(therm);
struct nvbios_therm_sensor *sensor = &priv->bios_sensor;
+ enum nv40_sensor_style style = nv40_sensor_style(therm);
int core_temp;
- if (device->chipset >= 0x46) {
+ if (style == NEW_STYLE) {
nv_wr32(therm, 0x15b0, 0x80003fff);
core_temp = nv_rd32(therm, 0x15b4) & 0x3fff;
- } else {
+ } else if (style == OLD_STYLE) {
nv_wr32(therm, 0x15b0, 0xff);
core_temp = nv_rd32(therm, 0x15b4) & 0xff;
- }
+ } else
+ return -ENODEV;
- /* Setup the sensor if the temperature is 0 */
- if (core_temp == 0)
- core_temp = nv40_sensor_setup(therm);
-
- if (sensor->slope_div == 0)
- sensor->slope_div = 1;
- if (sensor->offset_den == 0)
- sensor->offset_den = 1;
- if (sensor->slope_mult < 1)
- sensor->slope_mult = 1;
+ /* if the slope or the offset is unset, do not use the sensor */
+ if (!sensor->slope_div || !sensor->slope_mult ||
+ !sensor->offset_num || !sensor->offset_den)
+ return -ENODEV;
core_temp = core_temp * sensor->slope_mult / sensor->slope_div;
core_temp = core_temp + sensor->offset_num / sensor->offset_den;
core_temp = core_temp + sensor->offset_constant - 8;
+ /* reserve negative temperatures for errors */
+ if (core_temp < 0)
+ core_temp = 0;
+
return core_temp;
}
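The nv40 rework above replaces ad-hoc chipset comparisons with an explicit sensor-style lookup and refuses to report a temperature when the VBIOS calibration is missing, reserving negative values for errors. A userspace sketch of that decision logic follows; the raw ADC value and calibration fields are hypothetical, and -1 stands in for the kernel's -ENODEV:

#include <stdio.h>

enum nv40_sensor_style { INVALID_STYLE = -1, OLD_STYLE = 0, NEW_STYLE = 1 };

struct sensor_calib {
        int slope_mult, slope_div;
        int offset_num, offset_den, offset_constant;
};

static enum nv40_sensor_style nv40_style(unsigned int chipset)
{
        switch (chipset) {
        case 0x43: case 0x44: case 0x4a: case 0x47:
                return OLD_STYLE;
        case 0x46: case 0x49: case 0x4b: case 0x4e:
        case 0x4c: case 0x67: case 0x68: case 0x63:
                return NEW_STYLE;
        default:
                return INVALID_STYLE;
        }
}

/* returns degrees C, or a negative value when the sensor is unusable */
static int nv40_temp_get(unsigned int chipset, int raw, const struct sensor_calib *s)
{
        if (nv40_style(chipset) == INVALID_STYLE)
                return -1;
        /* if the slope or the offset is unset, do not use the sensor */
        if (!s->slope_div || !s->slope_mult || !s->offset_num || !s->offset_den)
                return -1;

        int temp = raw * s->slope_mult / s->slope_div;
        temp = temp + s->offset_num / s->offset_den;
        temp = temp + s->offset_constant - 8;

        return temp < 0 ? 0 : temp;   /* negative values are reserved for errors */
}

int main(void)
{
        struct sensor_calib calib = { 1, 1, 1, 1, 0 };   /* hypothetical VBIOS values */

        printf("nv47, raw 60: %d C\n", nv40_temp_get(0x47, 60, &calib));
        printf("unknown chipset: %d\n", nv40_temp_get(0x40, 60, &calib));
        return 0;
}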
diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/priv.h b/drivers/gpu/drm/nouveau/core/subdev/therm/priv.h
index 06b9870..438d982 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/therm/priv.h
+++ b/drivers/gpu/drm/nouveau/core/subdev/therm/priv.h
@@ -102,7 +102,7 @@
struct i2c_client *ic;
};
-int nouveau_therm_mode(struct nouveau_therm *therm, int mode);
+int nouveau_therm_fan_mode(struct nouveau_therm *therm, int mode);
int nouveau_therm_attr_get(struct nouveau_therm *therm,
enum nouveau_therm_attr_type type);
int nouveau_therm_attr_set(struct nouveau_therm *therm,
@@ -122,6 +122,7 @@
int nouveau_therm_preinit(struct nouveau_therm *);
+void nouveau_therm_sensor_preinit(struct nouveau_therm *);
void nouveau_therm_sensor_set_threshold_state(struct nouveau_therm *therm,
enum nouveau_therm_thrs thrs,
enum nouveau_therm_thrs_state st);
diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c b/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c
index b37624a..470f6a4 100644
--- a/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c
+++ b/drivers/gpu/drm/nouveau/core/subdev/therm/temp.c
@@ -34,10 +34,6 @@
{
struct nouveau_therm_priv *priv = (void *)therm;
- priv->bios_sensor.slope_mult = 1;
- priv->bios_sensor.slope_div = 1;
- priv->bios_sensor.offset_num = 0;
- priv->bios_sensor.offset_den = 1;
priv->bios_sensor.offset_constant = 0;
priv->bios_sensor.thrs_fan_boost.temp = 90;
@@ -60,11 +56,6 @@
struct nouveau_therm_priv *priv = (void *)therm;
struct nvbios_therm_sensor *s = &priv->bios_sensor;
- if (!priv->bios_sensor.slope_div)
- priv->bios_sensor.slope_div = 1;
- if (!priv->bios_sensor.offset_den)
- priv->bios_sensor.offset_den = 1;
-
/* enforce a minimum hysteresis on thresholds */
s->thrs_fan_boost.hysteresis = max_t(u8, s->thrs_fan_boost.hysteresis, 2);
s->thrs_down_clock.hysteresis = max_t(u8, s->thrs_down_clock.hysteresis, 2);
@@ -106,16 +97,16 @@
const char *thresolds[] = {
"fanboost", "downclock", "critical", "shutdown"
};
- uint8_t temperature = therm->temp_get(therm);
+ int temperature = therm->temp_get(therm);
if (thrs < 0 || thrs > 3)
return;
if (dir == NOUVEAU_THERM_THRS_FALLING)
- nv_info(therm, "temperature (%u C) went below the '%s' threshold\n",
+ nv_info(therm, "temperature (%i C) went below the '%s' threshold\n",
temperature, thresolds[thrs]);
else
- nv_info(therm, "temperature (%u C) hit the '%s' threshold\n",
+ nv_info(therm, "temperature (%i C) hit the '%s' threshold\n",
temperature, thresolds[thrs]);
active = (dir == NOUVEAU_THERM_THRS_RISING);
@@ -123,7 +114,7 @@
case NOUVEAU_THERM_THRS_FANBOOST:
if (active) {
nouveau_therm_fan_set(therm, true, 100);
- nouveau_therm_mode(therm, NOUVEAU_THERM_CTRL_AUTO);
+ nouveau_therm_fan_mode(therm, NOUVEAU_THERM_CTRL_AUTO);
}
break;
case NOUVEAU_THERM_THRS_DOWNCLOCK:
@@ -202,7 +193,7 @@
NOUVEAU_THERM_THRS_SHUTDOWN);
/* schedule the next poll in one second */
- if (list_empty(&alarm->head))
+ if (therm->temp_get(therm) >= 0 && list_empty(&alarm->head))
ptimer->alarm(ptimer, 1000 * 1000 * 1000, alarm);
spin_unlock_irqrestore(&priv->sensor.alarm_program_lock, flags);
@@ -225,6 +216,17 @@
alarm_timer_callback(&priv->sensor.therm_poll_alarm);
}
+void
+nouveau_therm_sensor_preinit(struct nouveau_therm *therm)
+{
+ const char *sensor_avail = "yes";
+
+ if (therm->temp_get(therm) < 0)
+ sensor_avail = "no";
+
+ nv_info(therm, "internal sensor: %s\n", sensor_avail);
+}
+
int
nouveau_therm_sensor_ctor(struct nouveau_therm *therm)
{
diff --git a/drivers/gpu/drm/nouveau/nouveau_pm.c b/drivers/gpu/drm/nouveau/nouveau_pm.c
index bb54098..936b442 100644
--- a/drivers/gpu/drm/nouveau/nouveau_pm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_pm.c
@@ -402,8 +402,12 @@
struct drm_device *dev = dev_get_drvdata(d);
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_therm *therm = nouveau_therm(drm->device);
+ int temp = therm->temp_get(therm);
- return snprintf(buf, PAGE_SIZE, "%d\n", therm->temp_get(therm) * 1000);
+ if (temp < 0)
+ return temp;
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", temp * 1000);
}
static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, nouveau_hwmon_show_temp,
NULL, 0);
@@ -871,7 +875,12 @@
nouveau_hwmon_get_pwm1_max,
nouveau_hwmon_set_pwm1_max, 0);
-static struct attribute *hwmon_attributes[] = {
+static struct attribute *hwmon_default_attributes[] = {
+ &sensor_dev_attr_name.dev_attr.attr,
+ &sensor_dev_attr_update_rate.dev_attr.attr,
+ NULL
+};
+static struct attribute *hwmon_temp_attributes[] = {
&sensor_dev_attr_temp1_input.dev_attr.attr,
&sensor_dev_attr_temp1_auto_point1_pwm.dev_attr.attr,
&sensor_dev_attr_temp1_auto_point1_temp.dev_attr.attr,
@@ -882,8 +891,6 @@
&sensor_dev_attr_temp1_crit_hyst.dev_attr.attr,
&sensor_dev_attr_temp1_emergency.dev_attr.attr,
&sensor_dev_attr_temp1_emergency_hyst.dev_attr.attr,
- &sensor_dev_attr_name.dev_attr.attr,
- &sensor_dev_attr_update_rate.dev_attr.attr,
NULL
};
static struct attribute *hwmon_fan_rpm_attributes[] = {
@@ -898,8 +905,11 @@
NULL
};
-static const struct attribute_group hwmon_attrgroup = {
- .attrs = hwmon_attributes,
+static const struct attribute_group hwmon_default_attrgroup = {
+ .attrs = hwmon_default_attributes,
+};
+static const struct attribute_group hwmon_temp_attrgroup = {
+ .attrs = hwmon_temp_attributes,
};
static const struct attribute_group hwmon_fan_rpm_attrgroup = {
.attrs = hwmon_fan_rpm_attributes,
@@ -931,13 +941,22 @@
}
dev_set_drvdata(hwmon_dev, dev);
- /* default sysfs entries */
- ret = sysfs_create_group(&hwmon_dev->kobj, &hwmon_attrgroup);
+ /* set the default attributes */
+ ret = sysfs_create_group(&hwmon_dev->kobj, &hwmon_default_attrgroup);
if (ret) {
if (ret)
goto error;
}
+ /* if the card has a working thermal sensor */
+ if (therm->temp_get(therm) >= 0) {
+ ret = sysfs_create_group(&hwmon_dev->kobj, &hwmon_temp_attrgroup);
+ if (ret) {
+ if (ret)
+ goto error;
+ }
+ }
+
/* if the card has a pwm fan */
/*XXX: incorrect, need better detection for this, some boards have
* the gpio entries for pwm fan control even when there's no
@@ -979,11 +998,10 @@
struct nouveau_pm *pm = nouveau_pm(dev);
if (pm->hwmon) {
- sysfs_remove_group(&pm->hwmon->kobj, &hwmon_attrgroup);
- sysfs_remove_group(&pm->hwmon->kobj,
- &hwmon_pwm_fan_attrgroup);
- sysfs_remove_group(&pm->hwmon->kobj,
- &hwmon_fan_rpm_attrgroup);
+ sysfs_remove_group(&pm->hwmon->kobj, &hwmon_default_attrgroup);
+ sysfs_remove_group(&pm->hwmon->kobj, &hwmon_temp_attrgroup);
+ sysfs_remove_group(&pm->hwmon->kobj, &hwmon_pwm_fan_attrgroup);
+ sysfs_remove_group(&pm->hwmon->kobj, &hwmon_fan_rpm_attrgroup);
hwmon_device_unregister(pm->hwmon);
}
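The nouveau_pm hunks split the hwmon attributes so that the temperature files are only created when the thermal sensor actually works, and the show function now propagates a negative temp_get() result instead of formatting it. A small sketch of that error path, with a stand-in read function and a generic negative errno-style value rather than the real therm object:

#include <stdio.h>

/* stand-in for therm->temp_get(): negative means "no sensor / read failed" */
static int therm_temp_get(int have_sensor)
{
        return have_sensor ? 45 : -19;   /* -19 plays the role of -ENODEV */
}

static int show_temp(char *buf, size_t len, int have_sensor)
{
        int temp = therm_temp_get(have_sensor);

        if (temp < 0)
                return temp;             /* propagate the error, don't print garbage */
        return snprintf(buf, len, "%d\n", temp * 1000);   /* hwmon uses millidegrees */
}

int main(void)
{
        char buf[32];

        if (show_temp(buf, sizeof(buf), 1) > 0)
                printf("temp1_input: %s", buf);
        printf("without sensor: %d\n", show_temp(buf, sizeof(buf), 0));
        return 0;
}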
diff --git a/drivers/gpu/drm/nouveau/nv50_display.c b/drivers/gpu/drm/nouveau/nv50_display.c
index 2db5799..7f0e6c3 100644
--- a/drivers/gpu/drm/nouveau/nv50_display.c
+++ b/drivers/gpu/drm/nouveau/nv50_display.c
@@ -524,6 +524,8 @@
swap_interval <<= 4;
if (swap_interval == 0)
swap_interval |= 0x100;
+ if (chan == NULL)
+ evo_sync(crtc->dev);
push = evo_wait(sync, 128);
if (unlikely(push == NULL))
@@ -586,8 +588,6 @@
sync->addr ^= 0x10;
sync->data++;
FIRE_RING (chan);
- } else {
- evo_sync(crtc->dev);
}
/* queue the flip */
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index d4c633e1..27769e7 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -468,13 +468,19 @@
(rdev->pdev->device == 0x9907) ||
(rdev->pdev->device == 0x9908) ||
(rdev->pdev->device == 0x9909) ||
+ (rdev->pdev->device == 0x990B) ||
+ (rdev->pdev->device == 0x990C) ||
+ (rdev->pdev->device == 0x990F) ||
(rdev->pdev->device == 0x9910) ||
- (rdev->pdev->device == 0x9917)) {
+ (rdev->pdev->device == 0x9917) ||
+ (rdev->pdev->device == 0x9999)) {
rdev->config.cayman.max_simds_per_se = 6;
rdev->config.cayman.max_backends_per_se = 2;
} else if ((rdev->pdev->device == 0x9903) ||
(rdev->pdev->device == 0x9904) ||
(rdev->pdev->device == 0x990A) ||
+ (rdev->pdev->device == 0x990D) ||
+ (rdev->pdev->device == 0x990E) ||
(rdev->pdev->device == 0x9913) ||
(rdev->pdev->device == 0x9918)) {
rdev->config.cayman.max_simds_per_se = 4;
@@ -483,6 +489,9 @@
(rdev->pdev->device == 0x9990) ||
(rdev->pdev->device == 0x9991) ||
(rdev->pdev->device == 0x9994) ||
+ (rdev->pdev->device == 0x9995) ||
+ (rdev->pdev->device == 0x9996) ||
+ (rdev->pdev->device == 0x999A) ||
(rdev->pdev->device == 0x99A0)) {
rdev->config.cayman.max_simds_per_se = 3;
rdev->config.cayman.max_backends_per_se = 1;
@@ -616,11 +625,22 @@
WREG32(DMA_TILING_CONFIG + DMA0_REGISTER_OFFSET, gb_addr_config);
WREG32(DMA_TILING_CONFIG + DMA1_REGISTER_OFFSET, gb_addr_config);
- tmp = gb_addr_config & NUM_PIPES_MASK;
- tmp = r6xx_remap_render_backend(rdev, tmp,
- rdev->config.cayman.max_backends_per_se *
- rdev->config.cayman.max_shader_engines,
- CAYMAN_MAX_BACKENDS, disabled_rb_mask);
+ if ((rdev->config.cayman.max_backends_per_se == 1) &&
+ (rdev->flags & RADEON_IS_IGP)) {
+ if ((disabled_rb_mask & 3) == 1) {
+ /* RB0 disabled, RB1 enabled */
+ tmp = 0x11111111;
+ } else {
+ /* RB1 disabled, RB0 enabled */
+ tmp = 0x00000000;
+ }
+ } else {
+ tmp = gb_addr_config & NUM_PIPES_MASK;
+ tmp = r6xx_remap_render_backend(rdev, tmp,
+ rdev->config.cayman.max_backends_per_se *
+ rdev->config.cayman.max_shader_engines,
+ CAYMAN_MAX_BACKENDS, disabled_rb_mask);
+ }
WREG32(GB_BACKEND_MAP, tmp);
cgts_tcc_disable = 0xffff0000;
@@ -1771,6 +1791,7 @@
int cayman_suspend(struct radeon_device *rdev)
{
r600_audio_fini(rdev);
+ radeon_vm_manager_fini(rdev);
cayman_cp_enable(rdev, false);
cayman_dma_stop(rdev);
evergreen_irq_suspend(rdev);
diff --git a/drivers/gpu/drm/radeon/radeon_benchmark.c b/drivers/gpu/drm/radeon/radeon_benchmark.c
index bedda9c..6e05a2e 100644
--- a/drivers/gpu/drm/radeon/radeon_benchmark.c
+++ b/drivers/gpu/drm/radeon/radeon_benchmark.c
@@ -122,10 +122,7 @@
goto out_cleanup;
}
- /* r100 doesn't have dma engine so skip the test */
- /* also, VRAM-to-VRAM test doesn't make much sense for DMA */
- /* skip it as well if domains are the same */
- if ((rdev->asic->copy.dma) && (sdomain != ddomain)) {
+ if (rdev->asic->copy.dma) {
time = radeon_benchmark_do_move(rdev, size, saddr, daddr,
RADEON_BENCHMARK_COPY_DMA, n);
if (time < 0)
@@ -135,13 +132,15 @@
sdomain, ddomain, "dma");
}
- time = radeon_benchmark_do_move(rdev, size, saddr, daddr,
- RADEON_BENCHMARK_COPY_BLIT, n);
- if (time < 0)
- goto out_cleanup;
- if (time > 0)
- radeon_benchmark_log_results(n, size, time,
- sdomain, ddomain, "blit");
+ if (rdev->asic->copy.blit) {
+ time = radeon_benchmark_do_move(rdev, size, saddr, daddr,
+ RADEON_BENCHMARK_COPY_BLIT, n);
+ if (time < 0)
+ goto out_cleanup;
+ if (time > 0)
+ radeon_benchmark_log_results(n, size, time,
+ sdomain, ddomain, "blit");
+ }
out_cleanup:
if (sobj) {
diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
index 9128120..bafbe32 100644
--- a/drivers/gpu/drm/radeon/si.c
+++ b/drivers/gpu/drm/radeon/si.c
@@ -4469,6 +4469,7 @@
int si_suspend(struct radeon_device *rdev)
{
+ radeon_vm_manager_fini(rdev);
si_cp_enable(rdev, false);
cayman_dma_stop(rdev);
si_irq_suspend(rdev);
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
index 92e47e5..c438877 100644
--- a/drivers/hid/hid-ids.h
+++ b/drivers/hid/hid-ids.h
@@ -590,6 +590,9 @@
#define USB_VENDOR_ID_MONTEREY 0x0566
#define USB_DEVICE_ID_GENIUS_KB29E 0x3004
+#define USB_VENDOR_ID_MSI 0x1770
+#define USB_DEVICE_ID_MSI_GX680R_LED_PANEL 0xff00
+
#define USB_VENDOR_ID_NATIONAL_SEMICONDUCTOR 0x0400
#define USB_DEVICE_ID_N_S_HARMONY 0xc359
@@ -684,6 +687,9 @@
#define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001 0x3001
#define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008 0x3008
+#define USB_VENDOR_ID_REALTEK 0x0bda
+#define USB_DEVICE_ID_REALTEK_READER 0x0152
+
#define USB_VENDOR_ID_ROCCAT 0x1e7d
#define USB_DEVICE_ID_ROCCAT_ARVO 0x30d4
#define USB_DEVICE_ID_ROCCAT_ISKU 0x319c
diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
index 7a1ebb8..82e9211 100644
--- a/drivers/hid/hid-multitouch.c
+++ b/drivers/hid/hid-multitouch.c
@@ -621,6 +621,7 @@
{
struct mt_device *td = hid_get_drvdata(hid);
__s32 quirks = td->mtclass.quirks;
+ struct input_dev *input = field->hidinput->input;
if (hid->claimed & HID_CLAIMED_INPUT) {
switch (usage->hid) {
@@ -670,13 +671,16 @@
break;
default:
+ if (usage->type)
+ input_event(input, usage->type, usage->code,
+ value);
return;
}
if (usage->usage_index + 1 == field->report_count) {
/* we only take into account the last report. */
if (usage->hid == td->last_slot_field)
- mt_complete_slot(td, field->hidinput->input);
+ mt_complete_slot(td, input);
if (field->index == td->last_field_index
&& td->num_received >= td->num_expected)
diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
index e0e6abf..19b8360 100644
--- a/drivers/hid/usbhid/hid-quirks.c
+++ b/drivers/hid/usbhid/hid-quirks.c
@@ -73,6 +73,7 @@
{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS },
@@ -80,6 +81,7 @@
{ USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_REALTEK, USB_DEVICE_ID_REALTEK_READER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_SENNHEISER, USB_DEVICE_ID_SENNHEISER_BTD500USB, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_SIGMATEL, USB_DEVICE_ID_SIGMATEL_STMP3780, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_SUN, USB_DEVICE_ID_RARITAN_KVM_DONGLE, HID_QUIRK_NOGET },
diff --git a/drivers/hwmon/lm75.h b/drivers/hwmon/lm75.h
index 668ff47..5cde94e 100644
--- a/drivers/hwmon/lm75.h
+++ b/drivers/hwmon/lm75.h
@@ -25,7 +25,7 @@
which contains this code, we don't worry about the wasted space.
*/
-#include <linux/hwmon.h>
+#include <linux/kernel.h>
/* straight from the datasheet */
#define LM75_TEMP_MIN (-55000)
diff --git a/drivers/i2c/Kconfig b/drivers/i2c/Kconfig
index 46cde098..e380c6e 100644
--- a/drivers/i2c/Kconfig
+++ b/drivers/i2c/Kconfig
@@ -4,7 +4,6 @@
menuconfig I2C
tristate "I2C support"
- depends on !S390
select RT_MUTEXES
---help---
I2C (pronounce: I-squared-C) is a slow serial bus protocol used in
@@ -76,6 +75,7 @@
config I2C_SMBUS
tristate "SMBus-specific protocols" if !I2C_HELPER_AUTO
+ depends on GENERIC_HARDIRQS
help
Say Y here if you want support for SMBus extensions to the I2C
specification. At the moment, the only supported extension is
diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
index a3725de..adfee98 100644
--- a/drivers/i2c/busses/Kconfig
+++ b/drivers/i2c/busses/Kconfig
@@ -114,7 +114,7 @@
config I2C_ISCH
tristate "Intel SCH SMBus 1.0"
- depends on PCI
+ depends on PCI && GENERIC_HARDIRQS
select LPC_SCH
help
Say Y here if you want to use SMBus controller on the Intel SCH
@@ -543,6 +543,7 @@
config I2C_OCORES
tristate "OpenCores I2C Controller"
+ depends on GENERIC_HARDIRQS
help
If you say yes to this option, support will be included for the
OpenCores I2C controller. For details see
@@ -777,7 +778,7 @@
config I2C_PARPORT
tristate "Parallel port adapter"
- depends on PARPORT
+ depends on PARPORT && GENERIC_HARDIRQS
select I2C_ALGOBIT
select I2C_SMBUS
help
@@ -802,6 +803,7 @@
config I2C_PARPORT_LIGHT
tristate "Parallel port adapter (light)"
+ depends on GENERIC_HARDIRQS
select I2C_ALGOBIT
select I2C_SMBUS
help
diff --git a/drivers/i2c/busses/i2c-ismt.c b/drivers/i2c/busses/i2c-ismt.c
index e9205ee..130f02c 100644
--- a/drivers/i2c/busses/i2c-ismt.c
+++ b/drivers/i2c/busses/i2c-ismt.c
@@ -80,6 +80,7 @@
/* PCI DIDs for the Intel SMBus Message Transport (SMT) Devices */
#define PCI_DEVICE_ID_INTEL_S1200_SMT0 0x0c59
#define PCI_DEVICE_ID_INTEL_S1200_SMT1 0x0c5a
+#define PCI_DEVICE_ID_INTEL_AVOTON_SMT 0x1f15
#define ISMT_DESC_ENTRIES 32 /* number of descriptor entries */
#define ISMT_MAX_RETRIES 3 /* number of SMBus retries to attempt */
@@ -185,6 +186,7 @@
static const DEFINE_PCI_DEVICE_TABLE(ismt_ids) = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT0) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_S1200_SMT1) },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_AVOTON_SMT) },
{ 0, }
};
diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
index 36704e3..b714776 100644
--- a/drivers/i2c/busses/i2c-tegra.c
+++ b/drivers/i2c/busses/i2c-tegra.c
@@ -411,7 +411,11 @@
int clk_multiplier = I2C_CLK_MULTIPLIER_STD_FAST_MODE;
u32 clk_divisor;
- tegra_i2c_clock_enable(i2c_dev);
+ err = tegra_i2c_clock_enable(i2c_dev);
+ if (err < 0) {
+ dev_err(i2c_dev->dev, "Clock enable failed %d\n", err);
+ return err;
+ }
tegra_periph_reset_assert(i2c_dev->div_clk);
udelay(2);
@@ -628,7 +632,12 @@
if (i2c_dev->is_suspended)
return -EBUSY;
- tegra_i2c_clock_enable(i2c_dev);
+ ret = tegra_i2c_clock_enable(i2c_dev);
+ if (ret < 0) {
+ dev_err(i2c_dev->dev, "Clock enable failed %d\n", ret);
+ return ret;
+ }
+
for (i = 0; i < num; i++) {
enum msg_end_type end_type = MSG_END_STOP;
if (i < (num - 1)) {
diff --git a/drivers/i2c/muxes/i2c-mux-pca9541.c b/drivers/i2c/muxes/i2c-mux-pca9541.c
index f3b8f9a..966a18a 100644
--- a/drivers/i2c/muxes/i2c-mux-pca9541.c
+++ b/drivers/i2c/muxes/i2c-mux-pca9541.c
@@ -3,7 +3,7 @@
*
* Copyright (c) 2010 Ericsson AB.
*
- * Author: Guenter Roeck <guenter.roeck@ericsson.com>
+ * Author: Guenter Roeck <linux@roeck-us.net>
*
* Derived from:
* pca954x.c
diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
index 565bfb1..a3fde52 100644
--- a/drivers/infiniband/hw/cxgb4/cm.c
+++ b/drivers/infiniband/hw/cxgb4/cm.c
@@ -1575,6 +1575,12 @@
neigh = dst_neigh_lookup(ep->dst,
&ep->com.cm_id->remote_addr.sin_addr.s_addr);
+ if (!neigh) {
+ pr_err("%s - cannot alloc neigh.\n", __func__);
+ err = -ENOMEM;
+ goto fail4;
+ }
+
/* get a l2t entry */
if (neigh->dev->flags & IFF_LOOPBACK) {
PDBG("%s LOOPBACK\n", __func__);
@@ -3053,6 +3059,12 @@
dst = &rt->dst;
neigh = dst_neigh_lookup_skb(dst, skb);
+ if (!neigh) {
+ pr_err("%s - failed to allocate neigh!\n",
+ __func__);
+ goto free_dst;
+ }
+
if (neigh->dev->flags & IFF_LOOPBACK) {
pdev = ip_dev_find(&init_net, iph->daddr);
e = cxgb4_l2t_get(dev->rdev.lldi.l2t, neigh,
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
index 17ba4f8..70b1808 100644
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -186,8 +186,10 @@
wq->rq.queue = dma_alloc_coherent(&(rdev->lldi.pdev->dev),
wq->rq.memsize, &(wq->rq.dma_addr),
GFP_KERNEL);
- if (!wq->rq.queue)
+ if (!wq->rq.queue) {
+ ret = -ENOMEM;
goto free_sq;
+ }
PDBG("%s sq base va 0x%p pa 0x%llx rq base va 0x%p pa 0x%llx\n",
__func__, wq->sq.queue,
(unsigned long long)virt_to_phys(wq->sq.queue),
diff --git a/drivers/infiniband/hw/ipath/ipath_verbs.c b/drivers/infiniband/hw/ipath/ipath_verbs.c
index 439c35d..ea93870 100644
--- a/drivers/infiniband/hw/ipath/ipath_verbs.c
+++ b/drivers/infiniband/hw/ipath/ipath_verbs.c
@@ -620,7 +620,7 @@
goto bail;
}
- opcode = be32_to_cpu(ohdr->bth[0]) >> 24;
+ opcode = (be32_to_cpu(ohdr->bth[0]) >> 24) & 0x7f;
dev->opstats[opcode].n_bytes += tlen;
dev->opstats[opcode].n_packets++;
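The ipath hunk masks the opcode down to 7 bits before using it as an index into the per-opcode statistics table, so a malformed BTH word can no longer index past the array. A minimal sketch of that extraction; be32toh()/htobe32() from glibc's <endian.h> stand in for the kernel's be32_to_cpu(), and the header word is made up:

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

#define IB_NUM_OPCODES 128      /* the opcode field is 7 bits wide */

int main(void)
{
        uint64_t n_packets[IB_NUM_OPCODES] = { 0 };
        uint32_t bth0 = htobe32(0xC4123456);   /* hypothetical BTH word, top byte 0xC4 */

        /* the top byte holds the opcode; keep only its low 7 bits so the
         * index stays inside the 128-entry table (0xC4 alone would not) */
        unsigned int opcode = (be32toh(bth0) >> 24) & 0x7f;

        n_packets[opcode]++;
        printf("opcode 0x%02x, packets %llu\n", opcode,
               (unsigned long long)n_packets[opcode]);
        return 0;
}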
diff --git a/drivers/infiniband/hw/qib/Kconfig b/drivers/infiniband/hw/qib/Kconfig
index 8349f9c..1e603a3 100644
--- a/drivers/infiniband/hw/qib/Kconfig
+++ b/drivers/infiniband/hw/qib/Kconfig
@@ -1,7 +1,7 @@
config INFINIBAND_QIB
- tristate "QLogic PCIe HCA support"
+ tristate "Intel PCIe HCA support"
depends on 64BIT
---help---
- This is a low-level driver for QLogic PCIe QLE InfiniBand host
- channel adapters. This driver does not support the QLogic
+ This is a low-level driver for Intel PCIe QLE InfiniBand host
+ channel adapters. This driver does not support the Intel
HyperTransport card (model QHT7140).
diff --git a/drivers/infiniband/hw/qib/qib_driver.c b/drivers/infiniband/hw/qib/qib_driver.c
index 5423edc..2160924 100644
--- a/drivers/infiniband/hw/qib/qib_driver.c
+++ b/drivers/infiniband/hw/qib/qib_driver.c
@@ -1,4 +1,5 @@
/*
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006, 2007, 2008, 2009 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
@@ -63,8 +64,8 @@
"Attempt pre-IBTA 1.2 DDR speed negotiation");
MODULE_LICENSE("Dual BSD/GPL");
-MODULE_AUTHOR("QLogic <support@qlogic.com>");
-MODULE_DESCRIPTION("QLogic IB driver");
+MODULE_AUTHOR("Intel <ibsupport@intel.com>");
+MODULE_DESCRIPTION("Intel IB driver");
MODULE_VERSION(QIB_DRIVER_VERSION);
/*
diff --git a/drivers/infiniband/hw/qib/qib_iba6120.c b/drivers/infiniband/hw/qib/qib_iba6120.c
index a099ac1..0232ae5 100644
--- a/drivers/infiniband/hw/qib/qib_iba6120.c
+++ b/drivers/infiniband/hw/qib/qib_iba6120.c
@@ -1,4 +1,5 @@
/*
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006, 2007, 2008, 2009, 2010 QLogic Corporation.
* All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
@@ -51,7 +52,7 @@
/*
* This file contains all the chip-specific register information and
- * access functions for the QLogic QLogic_IB PCI-Express chip.
+ * access functions for the Intel Intel_IB PCI-Express chip.
*
*/
diff --git a/drivers/infiniband/hw/qib/qib_init.c b/drivers/infiniband/hw/qib/qib_init.c
index 50e33aa0..173f805 100644
--- a/drivers/infiniband/hw/qib/qib_init.c
+++ b/drivers/infiniband/hw/qib/qib_init.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2012, 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
@@ -1138,7 +1138,7 @@
static void qib_remove_one(struct pci_dev *);
static int qib_init_one(struct pci_dev *, const struct pci_device_id *);
-#define DRIVER_LOAD_MSG "QLogic " QIB_DRV_NAME " loaded: "
+#define DRIVER_LOAD_MSG "Intel " QIB_DRV_NAME " loaded: "
#define PFX QIB_DRV_NAME ": "
static DEFINE_PCI_DEVICE_TABLE(qib_pci_tbl) = {
@@ -1355,7 +1355,7 @@
dd = qib_init_iba6120_funcs(pdev, ent);
#else
qib_early_err(&pdev->dev,
- "QLogic PCIE device 0x%x cannot work if CONFIG_PCI_MSI is not enabled\n",
+ "Intel PCIE device 0x%x cannot work if CONFIG_PCI_MSI is not enabled\n",
ent->device);
dd = ERR_PTR(-ENODEV);
#endif
@@ -1371,7 +1371,7 @@
default:
qib_early_err(&pdev->dev,
- "Failing on unknown QLogic deviceid 0x%x\n",
+ "Failing on unknown Intel deviceid 0x%x\n",
ent->device);
ret = -ENODEV;
}
diff --git a/drivers/infiniband/hw/qib/qib_sd7220.c b/drivers/infiniband/hw/qib/qib_sd7220.c
index 50a8a0d..08a6c6d 100644
--- a/drivers/infiniband/hw/qib/qib_sd7220.c
+++ b/drivers/infiniband/hw/qib/qib_sd7220.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2003, 2004, 2005, 2006 PathScale, Inc. All rights reserved.
*
@@ -44,7 +44,7 @@
#include "qib.h"
#include "qib_7220.h"
-#define SD7220_FW_NAME "qlogic/sd7220.fw"
+#define SD7220_FW_NAME "intel/sd7220.fw"
MODULE_FIRMWARE(SD7220_FW_NAME);
/*
diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
index ba51a47..7c0ab16 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.c
+++ b/drivers/infiniband/hw/qib/qib_verbs.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012 Intel Corporation. All rights reserved.
+ * Copyright (c) 2012, 2013 Intel Corporation. All rights reserved.
* Copyright (c) 2006 - 2012 QLogic Corporation. All rights reserved.
* Copyright (c) 2005, 2006 PathScale, Inc. All rights reserved.
*
@@ -2224,7 +2224,7 @@
ibdev->dma_ops = &qib_dma_mapping_ops;
snprintf(ibdev->node_desc, sizeof(ibdev->node_desc),
- "QLogic Infiniband HCA %s", init_utsname()->nodename);
+ "Intel Infiniband HCA %s", init_utsname()->nodename);
ret = ib_register_device(ibdev, qib_create_port_files);
if (ret)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index 67b0c1d..1ef880d 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -758,9 +758,13 @@
if (++priv->tx_outstanding == ipoib_sendq_size) {
ipoib_dbg(priv, "TX ring 0x%x full, stopping kernel net queue\n",
tx->qp->qp_num);
- if (ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP))
- ipoib_warn(priv, "request notify on send CQ failed\n");
netif_stop_queue(dev);
+ rc = ib_req_notify_cq(priv->send_cq,
+ IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS);
+ if (rc < 0)
+ ipoib_warn(priv, "request notify on send CQ failed\n");
+ else if (rc)
+ ipoib_send_comp_handler(priv->send_cq, dev);
}
}
}
diff --git a/drivers/input/joystick/analog.c b/drivers/input/joystick/analog.c
index 7cd74e2..9135606 100644
--- a/drivers/input/joystick/analog.c
+++ b/drivers/input/joystick/analog.c
@@ -158,14 +158,10 @@
#define GET_TIME(x) rdtscl(x)
#define DELTA(x,y) ((y)-(x))
#define TIME_NAME "TSC"
-#elif defined(__alpha__)
+#elif defined(__alpha__) || defined(CONFIG_MN10300) || defined(CONFIG_ARM) || defined(CONFIG_TILE)
#define GET_TIME(x) do { x = get_cycles(); } while (0)
#define DELTA(x,y) ((y)-(x))
-#define TIME_NAME "PCC"
-#elif defined(CONFIG_MN10300) || defined(CONFIG_TILE)
-#define GET_TIME(x) do { x = get_cycles(); } while (0)
-#define DELTA(x, y) ((x) - (y))
-#define TIME_NAME "TSC"
+#define TIME_NAME "get_cycles"
#else
#define FAKE_TIME
static unsigned long analog_faketime = 0;
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 5c514d07..c332fb9 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -130,7 +130,7 @@
# OMAP IOMMU support
config OMAP_IOMMU
bool "OMAP IOMMU Support"
- depends on ARCH_OMAP
+ depends on ARCH_OMAP2PLUS
select IOMMU_API
config OMAP_IOVMM
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 98f555d..b287ca3 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2466,18 +2466,16 @@
/* allocate a protection domain if a device is added */
dma_domain = find_protection_domain(devid);
- if (dma_domain)
- goto out;
- dma_domain = dma_ops_domain_alloc();
- if (!dma_domain)
- goto out;
- dma_domain->target_dev = devid;
+ if (!dma_domain) {
+ dma_domain = dma_ops_domain_alloc();
+ if (!dma_domain)
+ goto out;
+ dma_domain->target_dev = devid;
- spin_lock_irqsave(&iommu_pd_list_lock, flags);
- list_add_tail(&dma_domain->list, &iommu_pd_list);
- spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
-
- dev_data = get_dev_data(dev);
+ spin_lock_irqsave(&iommu_pd_list_lock, flags);
+ list_add_tail(&dma_domain->list, &iommu_pd_list);
+ spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
+ }
dev->archdata.dma_ops = &amd_iommu_dma_ops;
diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index b6ecddb..e3c2d74 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -980,7 +980,7 @@
* BIOS should disable L2B micellaneous clock gating by setting
* L2_L2B_CK_GATE_CONTROL[CKGateL2BMiscDisable](D0F2xF4_x90[2]) = 1b
*/
-static void __init amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
+static void amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
{
u32 value;
diff --git a/drivers/iommu/irq_remapping.c b/drivers/iommu/irq_remapping.c
index d56f8c1..7c11ff3 100644
--- a/drivers/iommu/irq_remapping.c
+++ b/drivers/iommu/irq_remapping.c
@@ -2,7 +2,6 @@
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/string.h>
-#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/msi.h>
#include <linux/irq.h>
diff --git a/drivers/isdn/hisax/Kconfig b/drivers/isdn/hisax/Kconfig
index 5313c9e..d9edcc9 100644
--- a/drivers/isdn/hisax/Kconfig
+++ b/drivers/isdn/hisax/Kconfig
@@ -237,7 +237,8 @@
config HISAX_NETJET
bool "NETjet card"
- depends on PCI && (BROKEN || !(SPARC || PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
+ depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
+ depends on VIRT_TO_BUS
help
This enables HiSax support for the NetJet from Traverse
Technologies.
@@ -248,7 +249,8 @@
config HISAX_NETJET_U
bool "NETspider U card"
- depends on PCI && (BROKEN || !(SPARC || PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
+ depends on PCI && (BROKEN || !(PPC || PARISC || M68K || (MIPS && !CPU_LITTLE_ENDIAN) || FRV || (XTENSA && !CPU_LITTLE_ENDIAN)))
+ depends on VIRT_TO_BUS
help
This enables HiSax support for the Netspider U interface ISDN card
from Traverse Technologies.
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 3c955e1..c608313 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -1025,6 +1025,8 @@
{
struct blk_plug plug;
+ BUG_ON(dm_bufio_in_request());
+
blk_start_plug(&plug);
dm_bufio_lock(c);
diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
index fbd3625..83e995f 100644
--- a/drivers/md/dm-cache-metadata.c
+++ b/drivers/md/dm-cache-metadata.c
@@ -83,6 +83,8 @@
__le32 read_misses;
__le32 write_hits;
__le32 write_misses;
+
+ __le32 policy_version[CACHE_POLICY_VERSION_SIZE];
} __packed;
struct dm_cache_metadata {
@@ -109,6 +111,7 @@
bool clean_when_opened:1;
char policy_name[CACHE_POLICY_NAME_SIZE];
+ unsigned policy_version[CACHE_POLICY_VERSION_SIZE];
size_t policy_hint_size;
struct dm_cache_statistics stats;
};
@@ -268,7 +271,8 @@
memset(disk_super->uuid, 0, sizeof(disk_super->uuid));
disk_super->magic = cpu_to_le64(CACHE_SUPERBLOCK_MAGIC);
disk_super->version = cpu_to_le32(CACHE_VERSION);
- memset(disk_super->policy_name, 0, CACHE_POLICY_NAME_SIZE);
+ memset(disk_super->policy_name, 0, sizeof(disk_super->policy_name));
+ memset(disk_super->policy_version, 0, sizeof(disk_super->policy_version));
disk_super->policy_hint_size = 0;
r = dm_sm_copy_root(cmd->metadata_sm, &disk_super->metadata_space_map_root,
@@ -284,7 +288,6 @@
disk_super->metadata_block_size = cpu_to_le32(DM_CACHE_METADATA_BLOCK_SIZE >> SECTOR_SHIFT);
disk_super->data_block_size = cpu_to_le32(cmd->data_block_size);
disk_super->cache_blocks = cpu_to_le32(0);
- memset(disk_super->policy_name, 0, sizeof(disk_super->policy_name));
disk_super->read_hits = cpu_to_le32(0);
disk_super->read_misses = cpu_to_le32(0);
@@ -478,6 +481,9 @@
cmd->data_block_size = le32_to_cpu(disk_super->data_block_size);
cmd->cache_blocks = to_cblock(le32_to_cpu(disk_super->cache_blocks));
strncpy(cmd->policy_name, disk_super->policy_name, sizeof(cmd->policy_name));
+ cmd->policy_version[0] = le32_to_cpu(disk_super->policy_version[0]);
+ cmd->policy_version[1] = le32_to_cpu(disk_super->policy_version[1]);
+ cmd->policy_version[2] = le32_to_cpu(disk_super->policy_version[2]);
cmd->policy_hint_size = le32_to_cpu(disk_super->policy_hint_size);
cmd->stats.read_hits = le32_to_cpu(disk_super->read_hits);
@@ -572,6 +578,9 @@
disk_super->discard_nr_blocks = cpu_to_le64(from_dblock(cmd->discard_nr_blocks));
disk_super->cache_blocks = cpu_to_le32(from_cblock(cmd->cache_blocks));
strncpy(disk_super->policy_name, cmd->policy_name, sizeof(disk_super->policy_name));
+ disk_super->policy_version[0] = cpu_to_le32(cmd->policy_version[0]);
+ disk_super->policy_version[1] = cpu_to_le32(cmd->policy_version[1]);
+ disk_super->policy_version[2] = cpu_to_le32(cmd->policy_version[2]);
disk_super->read_hits = cpu_to_le32(cmd->stats.read_hits);
disk_super->read_misses = cpu_to_le32(cmd->stats.read_misses);
@@ -854,18 +863,43 @@
bool hints_valid;
};
+static bool policy_unchanged(struct dm_cache_metadata *cmd,
+ struct dm_cache_policy *policy)
+{
+ const char *policy_name = dm_cache_policy_get_name(policy);
+ const unsigned *policy_version = dm_cache_policy_get_version(policy);
+ size_t policy_hint_size = dm_cache_policy_get_hint_size(policy);
+
+ /*
+ * Ensure policy names match.
+ */
+ if (strncmp(cmd->policy_name, policy_name, sizeof(cmd->policy_name)))
+ return false;
+
+ /*
+ * Ensure policy major versions match.
+ */
+ if (cmd->policy_version[0] != policy_version[0])
+ return false;
+
+ /*
+ * Ensure policy hint sizes match.
+ */
+ if (cmd->policy_hint_size != policy_hint_size)
+ return false;
+
+ return true;
+}
+
static bool hints_array_initialized(struct dm_cache_metadata *cmd)
{
return cmd->hint_root && cmd->policy_hint_size;
}
static bool hints_array_available(struct dm_cache_metadata *cmd,
- const char *policy_name)
+ struct dm_cache_policy *policy)
{
- bool policy_names_match = !strncmp(cmd->policy_name, policy_name,
- sizeof(cmd->policy_name));
-
- return cmd->clean_when_opened && policy_names_match &&
+ return cmd->clean_when_opened && policy_unchanged(cmd, policy) &&
hints_array_initialized(cmd);
}
@@ -899,7 +933,8 @@
return r;
}
-static int __load_mappings(struct dm_cache_metadata *cmd, const char *policy_name,
+static int __load_mappings(struct dm_cache_metadata *cmd,
+ struct dm_cache_policy *policy,
load_mapping_fn fn, void *context)
{
struct thunk thunk;
@@ -909,18 +944,19 @@
thunk.cmd = cmd;
thunk.respect_dirty_flags = cmd->clean_when_opened;
- thunk.hints_valid = hints_array_available(cmd, policy_name);
+ thunk.hints_valid = hints_array_available(cmd, policy);
return dm_array_walk(&cmd->info, cmd->root, __load_mapping, &thunk);
}
-int dm_cache_load_mappings(struct dm_cache_metadata *cmd, const char *policy_name,
+int dm_cache_load_mappings(struct dm_cache_metadata *cmd,
+ struct dm_cache_policy *policy,
load_mapping_fn fn, void *context)
{
int r;
down_read(&cmd->root_lock);
- r = __load_mappings(cmd, policy_name, fn, context);
+ r = __load_mappings(cmd, policy, fn, context);
up_read(&cmd->root_lock);
return r;
@@ -979,7 +1015,7 @@
/* nothing to be done */
return 0;
- value = pack_value(oblock, flags | (dirty ? M_DIRTY : 0));
+ value = pack_value(oblock, (flags & ~M_DIRTY) | (dirty ? M_DIRTY : 0));
__dm_bless_for_disk(&value);
r = dm_array_set_value(&cmd->info, cmd->root, from_cblock(cblock),
@@ -1070,13 +1106,15 @@
__le32 value;
size_t hint_size;
const char *policy_name = dm_cache_policy_get_name(policy);
+ const unsigned *policy_version = dm_cache_policy_get_version(policy);
if (!policy_name[0] ||
(strlen(policy_name) > sizeof(cmd->policy_name) - 1))
return -EINVAL;
- if (strcmp(cmd->policy_name, policy_name)) {
+ if (!policy_unchanged(cmd, policy)) {
strncpy(cmd->policy_name, policy_name, sizeof(cmd->policy_name));
+ memcpy(cmd->policy_version, policy_version, sizeof(cmd->policy_version));
hint_size = dm_cache_policy_get_hint_size(policy);
if (!hint_size)
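The metadata hunks above stop trusting on-disk policy hints on a bare name match and instead go through a policy_unchanged() helper that also compares the major version and the per-block hint size. A standalone sketch of that compatibility test, using a hypothetical struct that mirrors only the fields involved:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define CACHE_POLICY_NAME_SIZE    16
#define CACHE_POLICY_VERSION_SIZE 3

struct policy_info {
        char name[CACHE_POLICY_NAME_SIZE];
        unsigned version[CACHE_POLICY_VERSION_SIZE];
        size_t hint_size;
};

/* hints saved by one policy are only reused if the name, the major
 * version and the per-block hint size all still match */
static bool policy_unchanged(const struct policy_info *on_disk,
                             const struct policy_info *loaded)
{
        if (strncmp(on_disk->name, loaded->name, CACHE_POLICY_NAME_SIZE))
                return false;
        if (on_disk->version[0] != loaded->version[0])
                return false;
        if (on_disk->hint_size != loaded->hint_size)
                return false;
        return true;
}

int main(void)
{
        struct policy_info disk  = { "mq", {1, 0, 0}, 4 };
        struct policy_info newer = { "mq", {2, 0, 0}, 4 };

        printf("same policy:  %s\n",
               policy_unchanged(&disk, &disk) ? "hints valid" : "drop hints");
        printf("major bumped: %s\n",
               policy_unchanged(&disk, &newer) ? "hints valid" : "drop hints");
        return 0;
}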
diff --git a/drivers/md/dm-cache-metadata.h b/drivers/md/dm-cache-metadata.h
index 135864e..f45cef2 100644
--- a/drivers/md/dm-cache-metadata.h
+++ b/drivers/md/dm-cache-metadata.h
@@ -89,7 +89,7 @@
dm_cblock_t cblock, bool dirty,
uint32_t hint, bool hint_valid);
int dm_cache_load_mappings(struct dm_cache_metadata *cmd,
- const char *policy_name,
+ struct dm_cache_policy *policy,
load_mapping_fn fn,
void *context);
diff --git a/drivers/md/dm-cache-policy-cleaner.c b/drivers/md/dm-cache-policy-cleaner.c
index cc05d70..b04d1f9 100644
--- a/drivers/md/dm-cache-policy-cleaner.c
+++ b/drivers/md/dm-cache-policy-cleaner.c
@@ -17,7 +17,6 @@
/*----------------------------------------------------------------*/
#define DM_MSG_PREFIX "cache cleaner"
-#define CLEANER_VERSION "1.0.0"
/* Cache entry struct. */
struct wb_cache_entry {
@@ -434,6 +433,7 @@
static struct dm_cache_policy_type wb_policy_type = {
.name = "cleaner",
+ .version = {1, 0, 0},
.hint_size = 0,
.owner = THIS_MODULE,
.create = wb_create
@@ -446,7 +446,10 @@
if (r < 0)
DMERR("register failed %d", r);
else
- DMINFO("version " CLEANER_VERSION " loaded");
+ DMINFO("version %u.%u.%u loaded",
+ wb_policy_type.version[0],
+ wb_policy_type.version[1],
+ wb_policy_type.version[2]);
return r;
}
diff --git a/drivers/md/dm-cache-policy-internal.h b/drivers/md/dm-cache-policy-internal.h
index 52a75be..0928abd 100644
--- a/drivers/md/dm-cache-policy-internal.h
+++ b/drivers/md/dm-cache-policy-internal.h
@@ -117,6 +117,8 @@
*/
const char *dm_cache_policy_get_name(struct dm_cache_policy *p);
+const unsigned *dm_cache_policy_get_version(struct dm_cache_policy *p);
+
size_t dm_cache_policy_get_hint_size(struct dm_cache_policy *p);
/*----------------------------------------------------------------*/
diff --git a/drivers/md/dm-cache-policy-mq.c b/drivers/md/dm-cache-policy-mq.c
index 9641532..dc112a7 100644
--- a/drivers/md/dm-cache-policy-mq.c
+++ b/drivers/md/dm-cache-policy-mq.c
@@ -14,7 +14,6 @@
#include <linux/vmalloc.h>
#define DM_MSG_PREFIX "cache-policy-mq"
-#define MQ_VERSION "1.0.0"
static struct kmem_cache *mq_entry_cache;
@@ -1133,6 +1132,7 @@
static struct dm_cache_policy_type mq_policy_type = {
.name = "mq",
+ .version = {1, 0, 0},
.hint_size = 4,
.owner = THIS_MODULE,
.create = mq_create
@@ -1140,6 +1140,7 @@
static struct dm_cache_policy_type default_policy_type = {
.name = "default",
+ .version = {1, 0, 0},
.hint_size = 4,
.owner = THIS_MODULE,
.create = mq_create
@@ -1164,7 +1165,10 @@
r = dm_cache_policy_register(&default_policy_type);
if (!r) {
- DMINFO("version " MQ_VERSION " loaded");
+ DMINFO("version %u.%u.%u loaded",
+ mq_policy_type.version[0],
+ mq_policy_type.version[1],
+ mq_policy_type.version[2]);
return 0;
}
diff --git a/drivers/md/dm-cache-policy.c b/drivers/md/dm-cache-policy.c
index 2cbf5fd..21c03c5 100644
--- a/drivers/md/dm-cache-policy.c
+++ b/drivers/md/dm-cache-policy.c
@@ -150,6 +150,14 @@
}
EXPORT_SYMBOL_GPL(dm_cache_policy_get_name);
+const unsigned *dm_cache_policy_get_version(struct dm_cache_policy *p)
+{
+ struct dm_cache_policy_type *t = p->private;
+
+ return t->version;
+}
+EXPORT_SYMBOL_GPL(dm_cache_policy_get_version);
+
size_t dm_cache_policy_get_hint_size(struct dm_cache_policy *p)
{
struct dm_cache_policy_type *t = p->private;
diff --git a/drivers/md/dm-cache-policy.h b/drivers/md/dm-cache-policy.h
index f0f51b2..558bdfd 100644
--- a/drivers/md/dm-cache-policy.h
+++ b/drivers/md/dm-cache-policy.h
@@ -196,6 +196,7 @@
* We maintain a little register of the different policy types.
*/
#define CACHE_POLICY_NAME_SIZE 16
+#define CACHE_POLICY_VERSION_SIZE 3
struct dm_cache_policy_type {
/* For use by the register code only. */
@@ -206,6 +207,7 @@
* what gets passed on the target line to select your policy.
*/
char name[CACHE_POLICY_NAME_SIZE];
+ unsigned version[CACHE_POLICY_VERSION_SIZE];
/*
* Policies may store a hint for each cache block.
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index 0f4e84b..66120bd 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -142,6 +142,7 @@
spinlock_t lock;
struct bio_list deferred_bios;
struct bio_list deferred_flush_bios;
+ struct bio_list deferred_writethrough_bios;
struct list_head quiesced_migrations;
struct list_head completed_migrations;
struct list_head need_commit_migrations;
@@ -158,7 +159,7 @@
/*
* origin_blocks entries, discarded if set.
*/
- sector_t discard_block_size; /* a power of 2 times sectors per block */
+ uint32_t discard_block_size; /* a power of 2 times sectors per block */
dm_dblock_t discard_nr_blocks;
unsigned long *discard_bitset;
@@ -199,6 +200,11 @@
bool tick:1;
unsigned req_nr:2;
struct dm_deferred_entry *all_io_entry;
+
+ /* writethrough fields */
+ struct cache *cache;
+ dm_cblock_t cblock;
+ bio_end_io_t *saved_bi_end_io;
};
struct dm_cache_migration {
@@ -412,17 +418,24 @@
return cache->sectors_per_block_shift >= 0;
}
+static dm_block_t block_div(dm_block_t b, uint32_t n)
+{
+ do_div(b, n);
+
+ return b;
+}
+
static dm_dblock_t oblock_to_dblock(struct cache *cache, dm_oblock_t oblock)
{
- sector_t discard_blocks = cache->discard_block_size;
+ uint32_t discard_blocks = cache->discard_block_size;
dm_block_t b = from_oblock(oblock);
if (!block_size_is_power_of_two(cache))
- (void) sector_div(discard_blocks, cache->sectors_per_block);
+ discard_blocks = discard_blocks / cache->sectors_per_block;
else
discard_blocks >>= cache->sectors_per_block_shift;
- (void) sector_div(b, discard_blocks);
+ b = block_div(b, discard_blocks);
return to_dblock(b);
}
@@ -609,6 +622,56 @@
spin_unlock_irqrestore(&cache->lock, flags);
}
+static void defer_writethrough_bio(struct cache *cache, struct bio *bio)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&cache->lock, flags);
+ bio_list_add(&cache->deferred_writethrough_bios, bio);
+ spin_unlock_irqrestore(&cache->lock, flags);
+
+ wake_worker(cache);
+}
+
+static void writethrough_endio(struct bio *bio, int err)
+{
+ struct per_bio_data *pb = get_per_bio_data(bio);
+ bio->bi_end_io = pb->saved_bi_end_io;
+
+ if (err) {
+ bio_endio(bio, err);
+ return;
+ }
+
+ remap_to_cache(pb->cache, bio, pb->cblock);
+
+ /*
+ * We can't issue this bio directly, since we're in interrupt
+ * context. So it gets put on a bio list for processing by the
+ * worker thread.
+ */
+ defer_writethrough_bio(pb->cache, bio);
+}
+
+/*
+ * When running in writethrough mode we need to send writes to clean blocks
+ * to both the cache and origin devices. In future we'd like to clone the
+ * bio and send them in parallel, but for now we're doing them in
+ * series as this is easier.
+ */
+static void remap_to_origin_then_cache(struct cache *cache, struct bio *bio,
+ dm_oblock_t oblock, dm_cblock_t cblock)
+{
+ struct per_bio_data *pb = get_per_bio_data(bio);
+
+ pb->cache = cache;
+ pb->cblock = cblock;
+ pb->saved_bi_end_io = bio->bi_end_io;
+ bio->bi_end_io = writethrough_endio;
+
+ remap_to_origin_clear_discard(pb->cache, bio, oblock);
+}
+
/*----------------------------------------------------------------
* Migration processing
*
@@ -1002,7 +1065,7 @@
dm_block_t end_block = bio->bi_sector + bio_sectors(bio);
dm_block_t b;
- (void) sector_div(end_block, cache->discard_block_size);
+ end_block = block_div(end_block, cache->discard_block_size);
for (b = start_block; b < end_block; b++)
set_discard(cache, to_dblock(b));
@@ -1070,14 +1133,9 @@
inc_hit_counter(cache, bio);
pb->all_io_entry = dm_deferred_entry_inc(cache->all_io_ds);
- if (is_writethrough_io(cache, bio, lookup_result.cblock)) {
- /*
- * No need to mark anything dirty in write through mode.
- */
- pb->req_nr == 0 ?
- remap_to_cache(cache, bio, lookup_result.cblock) :
- remap_to_origin_clear_discard(cache, bio, block);
- } else
+ if (is_writethrough_io(cache, bio, lookup_result.cblock))
+ remap_to_origin_then_cache(cache, bio, block, lookup_result.cblock);
+ else
remap_to_cache_dirty(cache, bio, block, lookup_result.cblock);
issue(cache, bio);
@@ -1086,17 +1144,8 @@
case POLICY_MISS:
inc_miss_counter(cache, bio);
pb->all_io_entry = dm_deferred_entry_inc(cache->all_io_ds);
-
- if (pb->req_nr != 0) {
- /*
- * This is a duplicate writethrough io that is no
- * longer needed because the block has been demoted.
- */
- bio_endio(bio, 0);
- } else {
- remap_to_origin_clear_discard(cache, bio, block);
- issue(cache, bio);
- }
+ remap_to_origin_clear_discard(cache, bio, block);
+ issue(cache, bio);
break;
case POLICY_NEW:
@@ -1217,6 +1266,23 @@
submit_bios ? generic_make_request(bio) : bio_io_error(bio);
}
+static void process_deferred_writethrough_bios(struct cache *cache)
+{
+ unsigned long flags;
+ struct bio_list bios;
+ struct bio *bio;
+
+ bio_list_init(&bios);
+
+ spin_lock_irqsave(&cache->lock, flags);
+ bio_list_merge(&bios, &cache->deferred_writethrough_bios);
+ bio_list_init(&cache->deferred_writethrough_bios);
+ spin_unlock_irqrestore(&cache->lock, flags);
+
+ while ((bio = bio_list_pop(&bios)))
+ generic_make_request(bio);
+}
+
static void writeback_some_dirty_blocks(struct cache *cache)
{
int r = 0;
@@ -1313,6 +1379,7 @@
else
return !bio_list_empty(&cache->deferred_bios) ||
!bio_list_empty(&cache->deferred_flush_bios) ||
+ !bio_list_empty(&cache->deferred_writethrough_bios) ||
!list_empty(&cache->quiesced_migrations) ||
!list_empty(&cache->completed_migrations) ||
!list_empty(&cache->need_commit_migrations);
@@ -1331,6 +1398,8 @@
writeback_some_dirty_blocks(cache);
+ process_deferred_writethrough_bios(cache);
+
if (commit_if_needed(cache)) {
process_deferred_flush_bios(cache, false);
@@ -1756,8 +1825,11 @@
}
r = set_config_values(cache->policy, ca->policy_argc, ca->policy_argv);
- if (r)
+ if (r) {
+ *error = "Error setting cache policy's config values";
dm_cache_policy_destroy(cache->policy);
+ cache->policy = NULL;
+ }
return r;
}
@@ -1793,8 +1865,6 @@
#define DEFAULT_MIGRATION_THRESHOLD (2048 * 100)
-static unsigned cache_num_write_bios(struct dm_target *ti, struct bio *bio);
-
static int cache_create(struct cache_args *ca, struct cache **result)
{
int r = 0;
@@ -1821,9 +1891,6 @@
memcpy(&cache->features, &ca->features, sizeof(cache->features));
- if (cache->features.write_through)
- ti->num_write_bios = cache_num_write_bios;
-
cache->callbacks.congested_fn = cache_is_congested;
dm_table_add_target_callbacks(ti->table, &cache->callbacks);
@@ -1835,7 +1902,7 @@
/* FIXME: factor out this whole section */
origin_blocks = cache->origin_sectors = ca->origin_sectors;
- (void) sector_div(origin_blocks, ca->block_size);
+ origin_blocks = block_div(origin_blocks, ca->block_size);
cache->origin_blocks = to_oblock(origin_blocks);
cache->sectors_per_block = ca->block_size;
@@ -1848,7 +1915,7 @@
dm_block_t cache_size = ca->cache_sectors;
cache->sectors_per_block_shift = -1;
- (void) sector_div(cache_size, ca->block_size);
+ cache_size = block_div(cache_size, ca->block_size);
cache->cache_size = to_cblock(cache_size);
} else {
cache->sectors_per_block_shift = __ffs(ca->block_size);
@@ -1873,6 +1940,7 @@
spin_lock_init(&cache->lock);
bio_list_init(&cache->deferred_bios);
bio_list_init(&cache->deferred_flush_bios);
+ bio_list_init(&cache->deferred_writethrough_bios);
INIT_LIST_HEAD(&cache->quiesced_migrations);
INIT_LIST_HEAD(&cache->completed_migrations);
INIT_LIST_HEAD(&cache->need_commit_migrations);
@@ -2002,6 +2070,8 @@
goto out;
r = cache_create(ca, &cache);
+ if (r)
+ goto out;
r = copy_ctr_args(cache, argc - 3, (const char **)argv + 3);
if (r) {
@@ -2016,20 +2086,6 @@
return r;
}
-static unsigned cache_num_write_bios(struct dm_target *ti, struct bio *bio)
-{
- int r;
- struct cache *cache = ti->private;
- dm_oblock_t block = get_bio_block(cache, bio);
- dm_cblock_t cblock;
-
- r = policy_lookup(cache->policy, block, &cblock);
- if (r < 0)
- return 2; /* assume the worst */
-
- return (!r && !is_dirty(cache, cblock)) ? 2 : 1;
-}
-
static int cache_map(struct dm_target *ti, struct bio *bio)
{
struct cache *cache = ti->private;
@@ -2097,18 +2153,12 @@
inc_hit_counter(cache, bio);
pb->all_io_entry = dm_deferred_entry_inc(cache->all_io_ds);
- if (is_writethrough_io(cache, bio, lookup_result.cblock)) {
- /*
- * No need to mark anything dirty in write through mode.
- */
- pb->req_nr == 0 ?
- remap_to_cache(cache, bio, lookup_result.cblock) :
- remap_to_origin_clear_discard(cache, bio, block);
- cell_defer(cache, cell, false);
- } else {
+ if (is_writethrough_io(cache, bio, lookup_result.cblock))
+ remap_to_origin_then_cache(cache, bio, block, lookup_result.cblock);
+ else
remap_to_cache_dirty(cache, bio, block, lookup_result.cblock);
- cell_defer(cache, cell, false);
- }
+
+ cell_defer(cache, cell, false);
break;
case POLICY_MISS:
@@ -2319,8 +2369,7 @@
}
if (!cache->loaded_mappings) {
- r = dm_cache_load_mappings(cache->cmd,
- dm_cache_policy_get_name(cache->policy),
+ r = dm_cache_load_mappings(cache->cmd, cache->policy,
load_mapping, cache);
if (r) {
DMERR("could not load cache mappings");
@@ -2535,7 +2584,7 @@
static struct target_type cache_target = {
.name = "cache",
- .version = {1, 0, 0},
+ .version = {1, 1, 0},
.module = THIS_MODULE,
.ctr = cache_ctr,
.dtr = cache_dtr,
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 009339d..004ad165 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -1577,6 +1577,11 @@
return q && blk_queue_discard(q);
}
+static bool is_factor(sector_t block_size, uint32_t n)
+{
+ return !sector_div(block_size, n);
+}
+
/*
* If discard_passdown was enabled verify that the data device
* supports discards. Disable discard_passdown if not.
@@ -1602,7 +1607,7 @@
else if (data_limits->discard_granularity > block_size)
reason = "discard granularity larger than a block";
- else if (block_size & (data_limits->discard_granularity - 1))
+ else if (!is_factor(block_size, data_limits->discard_granularity))
reason = "discard granularity not a factor of block size";
if (reason) {
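
The old test, block_size & (data_limits->discard_granularity - 1), only works when the granularity is a power of two; for a granularity of 24 sectors and a block size of 32 the mask comes out zero even though 24 does not divide 32. sector_div() performs a real 64-bit division and returns the remainder, so is_factor() is correct for any granularity. A small stand-alone illustration of the difference (ordinary user-space C, not driver code):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint64_t block_size = 32;
                uint32_t granularity = 24;      /* not a power of two */

                /* old check: mask test, only valid for power-of-two granularity */
                int old_says_factor = !(block_size & (uint64_t)(granularity - 1));
                /* new check: remainder of a real division */
                int new_says_factor = !(block_size % granularity);

                printf("old=%d new=%d\n", old_says_factor, new_says_factor);
                return 0;
        }

This prints "old=1 new=0": the bitmask check wrongly reports a factor, the division-based check does not.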
@@ -2544,7 +2549,7 @@
.name = "thin-pool",
.features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE |
DM_TARGET_IMMUTABLE,
- .version = {1, 6, 1},
+ .version = {1, 7, 0},
.module = THIS_MODULE,
.ctr = pool_ctr,
.dtr = pool_dtr,
@@ -2831,7 +2836,7 @@
static struct target_type thin_target = {
.name = "thin",
- .version = {1, 7, 1},
+ .version = {1, 8, 0},
.module = THIS_MODULE,
.ctr = thin_ctr,
.dtr = thin_dtr,
diff --git a/drivers/md/dm-verity.c b/drivers/md/dm-verity.c
index 6ad5383..a746f1d 100644
--- a/drivers/md/dm-verity.c
+++ b/drivers/md/dm-verity.c
@@ -93,6 +93,13 @@
*/
};
+struct dm_verity_prefetch_work {
+ struct work_struct work;
+ struct dm_verity *v;
+ sector_t block;
+ unsigned n_blocks;
+};
+
static struct shash_desc *io_hash_desc(struct dm_verity *v, struct dm_verity_io *io)
{
return (struct shash_desc *)(io + 1);
@@ -424,15 +431,18 @@
* The root buffer is not prefetched; it is assumed that it will be cached
* all the time.
*/
-static void verity_prefetch_io(struct dm_verity *v, struct dm_verity_io *io)
+static void verity_prefetch_io(struct work_struct *work)
{
+ struct dm_verity_prefetch_work *pw =
+ container_of(work, struct dm_verity_prefetch_work, work);
+ struct dm_verity *v = pw->v;
int i;
for (i = v->levels - 2; i >= 0; i--) {
sector_t hash_block_start;
sector_t hash_block_end;
- verity_hash_at_level(v, io->block, i, &hash_block_start, NULL);
- verity_hash_at_level(v, io->block + io->n_blocks - 1, i, &hash_block_end, NULL);
+ verity_hash_at_level(v, pw->block, i, &hash_block_start, NULL);
+ verity_hash_at_level(v, pw->block + pw->n_blocks - 1, i, &hash_block_end, NULL);
if (!i) {
unsigned cluster = ACCESS_ONCE(dm_verity_prefetch_cluster);
@@ -452,6 +462,25 @@
dm_bufio_prefetch(v->bufio, hash_block_start,
hash_block_end - hash_block_start + 1);
}
+
+ kfree(pw);
+}
+
+static void verity_submit_prefetch(struct dm_verity *v, struct dm_verity_io *io)
+{
+ struct dm_verity_prefetch_work *pw;
+
+ pw = kmalloc(sizeof(struct dm_verity_prefetch_work),
+ GFP_NOIO | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
+
+ if (!pw)
+ return;
+
+ INIT_WORK(&pw->work, verity_prefetch_io);
+ pw->v = v;
+ pw->block = io->block;
+ pw->n_blocks = io->n_blocks;
+ queue_work(v->verify_wq, &pw->work);
}
/*
@@ -498,7 +527,7 @@
memcpy(io->io_vec, bio_iovec(bio),
io->io_vec_size * sizeof(struct bio_vec));
- verity_prefetch_io(v, io);
+ verity_submit_prefetch(v, io);
generic_make_request(bio);
@@ -858,7 +887,7 @@
static struct target_type verity_target = {
.name = "verity",
- .version = {1, 1, 1},
+ .version = {1, 2, 0},
.module = THIS_MODULE,
.ctr = verity_ctr,
.dtr = verity_dtr,
diff --git a/drivers/md/md.c b/drivers/md/md.c
index fcb878f..aeceedfc 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -7663,10 +7663,8 @@
removed++;
}
}
- if (removed)
- sysfs_notify(&mddev->kobj, NULL,
- "degraded");
-
+ if (removed && mddev->kobj.sd)
+ sysfs_notify(&mddev->kobj, NULL, "degraded");
rdev_for_each(rdev, mddev) {
if (rdev->raid_disk >= 0 &&
diff --git a/drivers/md/md.h b/drivers/md/md.h
index eca59c3..d90fb1a 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -506,7 +506,7 @@
static inline int sysfs_link_rdev(struct mddev *mddev, struct md_rdev *rdev)
{
char nm[20];
- if (!test_bit(Replacement, &rdev->flags)) {
+ if (!test_bit(Replacement, &rdev->flags) && mddev->kobj.sd) {
sprintf(nm, "rd%d", rdev->raid_disk);
return sysfs_create_link(&mddev->kobj, &rdev->kobj, nm);
} else
@@ -516,7 +516,7 @@
static inline void sysfs_unlink_rdev(struct mddev *mddev, struct md_rdev *rdev)
{
char nm[20];
- if (!test_bit(Replacement, &rdev->flags)) {
+ if (!test_bit(Replacement, &rdev->flags) && mddev->kobj.sd) {
sprintf(nm, "rd%d", rdev->raid_disk);
sysfs_remove_link(&mddev->kobj, nm);
}
diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
index c4f2813..b88757c 100644
--- a/drivers/md/persistent-data/dm-btree-remove.c
+++ b/drivers/md/persistent-data/dm-btree-remove.c
@@ -139,15 +139,8 @@
struct btree_node *n;
};
-static struct dm_btree_value_type le64_type = {
- .context = NULL,
- .size = sizeof(__le64),
- .inc = NULL,
- .dec = NULL,
- .equal = NULL
-};
-
-static int init_child(struct dm_btree_info *info, struct btree_node *parent,
+static int init_child(struct dm_btree_info *info, struct dm_btree_value_type *vt,
+ struct btree_node *parent,
unsigned index, struct child *result)
{
int r, inc;
@@ -164,7 +157,7 @@
result->n = dm_block_data(result->block);
if (inc)
- inc_children(info->tm, result->n, &le64_type);
+ inc_children(info->tm, result->n, vt);
*((__le64 *) value_ptr(parent, index)) =
cpu_to_le64(dm_block_location(result->block));
@@ -236,7 +229,7 @@
}
static int rebalance2(struct shadow_spine *s, struct dm_btree_info *info,
- unsigned left_index)
+ struct dm_btree_value_type *vt, unsigned left_index)
{
int r;
struct btree_node *parent;
@@ -244,11 +237,11 @@
parent = dm_block_data(shadow_current(s));
- r = init_child(info, parent, left_index, &left);
+ r = init_child(info, vt, parent, left_index, &left);
if (r)
return r;
- r = init_child(info, parent, left_index + 1, &right);
+ r = init_child(info, vt, parent, left_index + 1, &right);
if (r) {
exit_child(info, &left);
return r;
@@ -368,7 +361,7 @@
}
static int rebalance3(struct shadow_spine *s, struct dm_btree_info *info,
- unsigned left_index)
+ struct dm_btree_value_type *vt, unsigned left_index)
{
int r;
struct btree_node *parent = dm_block_data(shadow_current(s));
@@ -377,17 +370,17 @@
/*
* FIXME: fill out an array?
*/
- r = init_child(info, parent, left_index, &left);
+ r = init_child(info, vt, parent, left_index, &left);
if (r)
return r;
- r = init_child(info, parent, left_index + 1, &center);
+ r = init_child(info, vt, parent, left_index + 1, &center);
if (r) {
exit_child(info, &left);
return r;
}
- r = init_child(info, parent, left_index + 2, &right);
+ r = init_child(info, vt, parent, left_index + 2, &right);
if (r) {
exit_child(info, &left);
exit_child(info, &center);
@@ -434,7 +427,8 @@
}
static int rebalance_children(struct shadow_spine *s,
- struct dm_btree_info *info, uint64_t key)
+ struct dm_btree_info *info,
+ struct dm_btree_value_type *vt, uint64_t key)
{
int i, r, has_left_sibling, has_right_sibling;
uint32_t child_entries;
@@ -472,13 +466,13 @@
has_right_sibling = i < (le32_to_cpu(n->header.nr_entries) - 1);
if (!has_left_sibling)
- r = rebalance2(s, info, i);
+ r = rebalance2(s, info, vt, i);
else if (!has_right_sibling)
- r = rebalance2(s, info, i - 1);
+ r = rebalance2(s, info, vt, i - 1);
else
- r = rebalance3(s, info, i - 1);
+ r = rebalance3(s, info, vt, i - 1);
return r;
}
@@ -529,7 +523,7 @@
if (le32_to_cpu(n->header.flags) & LEAF_NODE)
return do_leaf(n, key, index);
- r = rebalance_children(s, info, key);
+ r = rebalance_children(s, info, vt, key);
if (r)
break;
@@ -550,6 +544,14 @@
return r;
}
+static struct dm_btree_value_type le64_type = {
+ .context = NULL,
+ .size = sizeof(__le64),
+ .inc = NULL,
+ .dec = NULL,
+ .equal = NULL
+};
+
int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, dm_block_t *new_root)
{
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 3ee2912..24909eb 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -671,9 +671,11 @@
bi->bi_next = NULL;
if (rrdev)
set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags);
- trace_block_bio_remap(bdev_get_queue(bi->bi_bdev),
- bi, disk_devt(conf->mddev->gendisk),
- sh->dev[i].sector);
+
+ if (conf->mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(bi->bi_bdev),
+ bi, disk_devt(conf->mddev->gendisk),
+ sh->dev[i].sector);
generic_make_request(bi);
}
if (rrdev) {
@@ -701,9 +703,10 @@
rbi->bi_io_vec[0].bv_offset = 0;
rbi->bi_size = STRIPE_SIZE;
rbi->bi_next = NULL;
- trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
- rbi, disk_devt(conf->mddev->gendisk),
- sh->dev[i].sector);
+ if (conf->mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
+ rbi, disk_devt(conf->mddev->gendisk),
+ sh->dev[i].sector);
generic_make_request(rbi);
}
if (!rdev && !rrdev) {
@@ -2280,17 +2283,6 @@
int level = conf->level;
if (rcw) {
- /* if we are not expanding this is a proper write request, and
- * there will be bios with new data to be drained into the
- * stripe cache
- */
- if (!expand) {
- sh->reconstruct_state = reconstruct_state_drain_run;
- set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
- } else
- sh->reconstruct_state = reconstruct_state_run;
-
- set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
@@ -2303,6 +2295,21 @@
s->locked++;
}
}
+ /* if we are not expanding this is a proper write request, and
+ * there will be bios with new data to be drained into the
+ * stripe cache
+ */
+ if (!expand) {
+ if (!s->locked)
+ /* False alarm, nothing to do */
+ return;
+ sh->reconstruct_state = reconstruct_state_drain_run;
+ set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+ } else
+ sh->reconstruct_state = reconstruct_state_run;
+
+ set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
+
if (s->locked + conf->max_degraded == disks)
if (!test_and_set_bit(STRIPE_FULL_WRITE, &sh->state))
atomic_inc(&conf->pending_full_writes);
@@ -2311,11 +2318,6 @@
BUG_ON(!(test_bit(R5_UPTODATE, &sh->dev[pd_idx].flags) ||
test_bit(R5_Wantcompute, &sh->dev[pd_idx].flags)));
- sh->reconstruct_state = reconstruct_state_prexor_drain_run;
- set_bit(STRIPE_OP_PREXOR, &s->ops_request);
- set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
- set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
-
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
if (i == pd_idx)
@@ -2330,6 +2332,13 @@
s->locked++;
}
}
+ if (!s->locked)
+ /* False alarm - nothing to do */
+ return;
+ sh->reconstruct_state = reconstruct_state_prexor_drain_run;
+ set_bit(STRIPE_OP_PREXOR, &s->ops_request);
+ set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+ set_bit(STRIPE_OP_RECONSTRUCT, &s->ops_request);
}
/* keep the parity disk(s) locked while asynchronous operations
@@ -2564,6 +2573,8 @@
int i;
clear_bit(STRIPE_SYNCING, &sh->state);
+ if (test_and_clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags))
+ wake_up(&conf->wait_for_overlap);
s->syncing = 0;
s->replacing = 0;
/* There is nothing more to do for sync/check/repair.
@@ -2737,6 +2748,7 @@
{
int i;
struct r5dev *dev;
+ int discard_pending = 0;
for (i = disks; i--; )
if (sh->dev[i].written) {
@@ -2765,9 +2777,23 @@
STRIPE_SECTORS,
!test_bit(STRIPE_DEGRADED, &sh->state),
0);
- }
- } else if (test_bit(R5_Discard, &sh->dev[i].flags))
- clear_bit(R5_Discard, &sh->dev[i].flags);
+ } else if (test_bit(R5_Discard, &dev->flags))
+ discard_pending = 1;
+ }
+ if (!discard_pending &&
+ test_bit(R5_Discard, &sh->dev[sh->pd_idx].flags)) {
+ clear_bit(R5_Discard, &sh->dev[sh->pd_idx].flags);
+ clear_bit(R5_UPTODATE, &sh->dev[sh->pd_idx].flags);
+ if (sh->qd_idx >= 0) {
+ clear_bit(R5_Discard, &sh->dev[sh->qd_idx].flags);
+ clear_bit(R5_UPTODATE, &sh->dev[sh->qd_idx].flags);
+ }
+ /* now that discard is done we can proceed with any sync */
+ clear_bit(STRIPE_DISCARD, &sh->state);
+ if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state))
+ set_bit(STRIPE_HANDLE, &sh->state);
+
+ }
if (test_and_clear_bit(STRIPE_FULL_WRITE, &sh->state))
if (atomic_dec_and_test(&conf->pending_full_writes))
@@ -2826,8 +2852,10 @@
set_bit(STRIPE_HANDLE, &sh->state);
if (rmw < rcw && rmw > 0) {
/* prefer read-modify-write, but need to get some data */
- blk_add_trace_msg(conf->mddev->queue, "raid5 rmw %llu %d",
- (unsigned long long)sh->sector, rmw);
+ if (conf->mddev->queue)
+ blk_add_trace_msg(conf->mddev->queue,
+ "raid5 rmw %llu %d",
+ (unsigned long long)sh->sector, rmw);
for (i = disks; i--; ) {
struct r5dev *dev = &sh->dev[i];
if ((dev->towrite || i == sh->pd_idx) &&
@@ -2877,7 +2905,7 @@
}
}
}
- if (rcw)
+ if (rcw && conf->mddev->queue)
blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
(unsigned long long)sh->sector,
rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
@@ -3417,9 +3445,15 @@
return;
}
- if (test_and_clear_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
- set_bit(STRIPE_SYNCING, &sh->state);
- clear_bit(STRIPE_INSYNC, &sh->state);
+ if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
+ spin_lock(&sh->stripe_lock);
+ /* Cannot process 'sync' concurrently with 'discard' */
+ if (!test_bit(STRIPE_DISCARD, &sh->state) &&
+ test_and_clear_bit(STRIPE_SYNC_REQUESTED, &sh->state)) {
+ set_bit(STRIPE_SYNCING, &sh->state);
+ clear_bit(STRIPE_INSYNC, &sh->state);
+ }
+ spin_unlock(&sh->stripe_lock);
}
clear_bit(STRIPE_DELAYED, &sh->state);
@@ -3579,6 +3613,8 @@
test_bit(STRIPE_INSYNC, &sh->state)) {
md_done_sync(conf->mddev, STRIPE_SECTORS, 1);
clear_bit(STRIPE_SYNCING, &sh->state);
+ if (test_and_clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags))
+ wake_up(&conf->wait_for_overlap);
}
/* If the failed drives are just a ReadError, then we might need
@@ -3982,9 +4018,10 @@
atomic_inc(&conf->active_aligned_reads);
spin_unlock_irq(&conf->device_lock);
- trace_block_bio_remap(bdev_get_queue(align_bi->bi_bdev),
- align_bi, disk_devt(mddev->gendisk),
- raid_bio->bi_sector);
+ if (mddev->gendisk)
+ trace_block_bio_remap(bdev_get_queue(align_bi->bi_bdev),
+ align_bi, disk_devt(mddev->gendisk),
+ raid_bio->bi_sector);
generic_make_request(align_bi);
return 1;
} else {
@@ -4078,7 +4115,8 @@
}
spin_unlock_irq(&conf->device_lock);
}
- trace_block_unplug(mddev->queue, cnt, !from_schedule);
+ if (mddev->queue)
+ trace_block_unplug(mddev->queue, cnt, !from_schedule);
kfree(cb);
}
@@ -4141,6 +4179,13 @@
sh = get_active_stripe(conf, logical_sector, 0, 0, 0);
prepare_to_wait(&conf->wait_for_overlap, &w,
TASK_UNINTERRUPTIBLE);
+ set_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
+ if (test_bit(STRIPE_SYNCING, &sh->state)) {
+ release_stripe(sh);
+ schedule();
+ goto again;
+ }
+ clear_bit(R5_Overlap, &sh->dev[sh->pd_idx].flags);
spin_lock_irq(&sh->stripe_lock);
for (d = 0; d < conf->raid_disks; d++) {
if (d == sh->pd_idx || d == sh->qd_idx)
@@ -4153,6 +4198,7 @@
goto again;
}
}
+ set_bit(STRIPE_DISCARD, &sh->state);
finish_wait(&conf->wait_for_overlap, &w);
for (d = 0; d < conf->raid_disks; d++) {
if (d == sh->pd_idx || d == sh->qd_idx)
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 18b2c4a..b0b663b 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -221,10 +221,6 @@
struct stripe_operations {
int target, target2;
enum sum_check_flags zero_sum_result;
- #ifdef CONFIG_MULTICORE_RAID456
- unsigned long request;
- wait_queue_head_t wait_for_ops;
- #endif
} ops;
struct r5dev {
/* rreq and rvec are used for the replacement device when
@@ -323,6 +319,7 @@
STRIPE_COMPUTE_RUN,
STRIPE_OPS_REQ_PENDING,
STRIPE_ON_UNPLUG_LIST,
+ STRIPE_DISCARD,
};
/*
diff --git a/drivers/mtd/bcm47xxpart.c b/drivers/mtd/bcm47xxpart.c
index 63feb75..9279a91 100644
--- a/drivers/mtd/bcm47xxpart.c
+++ b/drivers/mtd/bcm47xxpart.c
@@ -19,6 +19,12 @@
/* 10 parts were found on sflash on Netgear WNDR4500 */
#define BCM47XXPART_MAX_PARTS 12
+/*
+ * Number of bytes we read when analyzing each block of flash memory.
+ * Set it big enough to allow detecting a partition and reading its important data.
+ */
+#define BCM47XXPART_BYTES_TO_READ 0x404
+
/* Magics */
#define BOARD_DATA_MAGIC 0x5246504D /* MPFR */
#define POT_MAGIC1 0x54544f50 /* POTT */
@@ -57,17 +63,15 @@
struct trx_header *trx;
int trx_part = -1;
int last_trx_part = -1;
- int max_bytes_to_read = 0x8004;
+ int possible_nvram_sizes[] = { 0x8000, 0xF000, 0x10000, };
if (blocksize <= 0x10000)
blocksize = 0x10000;
- if (blocksize == 0x20000)
- max_bytes_to_read = 0x18004;
/* Alloc */
parts = kzalloc(sizeof(struct mtd_partition) * BCM47XXPART_MAX_PARTS,
GFP_KERNEL);
- buf = kzalloc(max_bytes_to_read, GFP_KERNEL);
+ buf = kzalloc(BCM47XXPART_BYTES_TO_READ, GFP_KERNEL);
/* Parse block by block looking for magics */
for (offset = 0; offset <= master->size - blocksize;
@@ -82,7 +86,7 @@
}
/* Read beginning of the block */
- if (mtd_read(master, offset, max_bytes_to_read,
+ if (mtd_read(master, offset, BCM47XXPART_BYTES_TO_READ,
&bytes_read, (uint8_t *)buf) < 0) {
pr_err("mtd_read error while parsing (offset: 0x%X)!\n",
offset);
@@ -96,20 +100,6 @@
continue;
}
- /* Standard NVRAM */
- if (buf[0x000 / 4] == NVRAM_HEADER ||
- buf[0x1000 / 4] == NVRAM_HEADER ||
- buf[0x8000 / 4] == NVRAM_HEADER ||
- (blocksize == 0x20000 && (
- buf[0x10000 / 4] == NVRAM_HEADER ||
- buf[0x11000 / 4] == NVRAM_HEADER ||
- buf[0x18000 / 4] == NVRAM_HEADER))) {
- bcm47xxpart_add_part(&parts[curr_part++], "nvram",
- offset, 0);
- offset = rounddown(offset, blocksize);
- continue;
- }
-
/*
* board_data starts with board_id which differs across boards,
* but we can use 'MPFR' (hopefully) magic at 0x100
@@ -178,6 +168,30 @@
continue;
}
}
+
+ /* Look for NVRAM at the end of the last block. */
+ for (i = 0; i < ARRAY_SIZE(possible_nvram_sizes); i++) {
+ if (curr_part > BCM47XXPART_MAX_PARTS) {
+ pr_warn("Reached maximum number of partitions, scanning stopped!\n");
+ break;
+ }
+
+ offset = master->size - possible_nvram_sizes[i];
+ if (mtd_read(master, offset, 0x4, &bytes_read,
+ (uint8_t *)buf) < 0) {
+ pr_err("mtd_read error while reading at offset 0x%X!\n",
+ offset);
+ continue;
+ }
+
+ /* Standard NVRAM */
+ if (buf[0] == NVRAM_HEADER) {
+ bcm47xxpart_add_part(&parts[curr_part++], "nvram",
+ master->size - blocksize, 0);
+ break;
+ }
+ }
+
kfree(buf);
/*
diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
index 4321415..42c6392 100644
--- a/drivers/mtd/nand/nand_base.c
+++ b/drivers/mtd/nand/nand_base.c
@@ -1523,6 +1523,14 @@
oobreadlen -= toread;
}
}
+
+ if (chip->options & NAND_NEED_READRDY) {
+ /* Apply delay or wait for ready/busy pin */
+ if (!chip->dev_ready)
+ udelay(chip->chip_delay);
+ else
+ nand_wait_ready(mtd);
+ }
} else {
memcpy(buf, chip->buffers->databuf + col, bytes);
buf += bytes;
@@ -1787,6 +1795,14 @@
len = min(len, readlen);
buf = nand_transfer_oob(chip, buf, ops, len);
+ if (chip->options & NAND_NEED_READRDY) {
+ /* Apply delay or wait for ready/busy pin */
+ if (!chip->dev_ready)
+ udelay(chip->chip_delay);
+ else
+ nand_wait_ready(mtd);
+ }
+
readlen -= len;
if (!readlen)
break;
diff --git a/drivers/mtd/nand/nand_ids.c b/drivers/mtd/nand/nand_ids.c
index e3aa274..9c61238 100644
--- a/drivers/mtd/nand/nand_ids.c
+++ b/drivers/mtd/nand/nand_ids.c
@@ -22,49 +22,51 @@
* 512 512 Byte page size
*/
struct nand_flash_dev nand_flash_ids[] = {
+#define SP_OPTIONS NAND_NEED_READRDY
+#define SP_OPTIONS16 (SP_OPTIONS | NAND_BUSWIDTH_16)
#ifdef CONFIG_MTD_NAND_MUSEUM_IDS
- {"NAND 1MiB 5V 8-bit", 0x6e, 256, 1, 0x1000, 0},
- {"NAND 2MiB 5V 8-bit", 0x64, 256, 2, 0x1000, 0},
- {"NAND 4MiB 5V 8-bit", 0x6b, 512, 4, 0x2000, 0},
- {"NAND 1MiB 3,3V 8-bit", 0xe8, 256, 1, 0x1000, 0},
- {"NAND 1MiB 3,3V 8-bit", 0xec, 256, 1, 0x1000, 0},
- {"NAND 2MiB 3,3V 8-bit", 0xea, 256, 2, 0x1000, 0},
- {"NAND 4MiB 3,3V 8-bit", 0xd5, 512, 4, 0x2000, 0},
- {"NAND 4MiB 3,3V 8-bit", 0xe3, 512, 4, 0x2000, 0},
- {"NAND 4MiB 3,3V 8-bit", 0xe5, 512, 4, 0x2000, 0},
- {"NAND 8MiB 3,3V 8-bit", 0xd6, 512, 8, 0x2000, 0},
+ {"NAND 1MiB 5V 8-bit", 0x6e, 256, 1, 0x1000, SP_OPTIONS},
+ {"NAND 2MiB 5V 8-bit", 0x64, 256, 2, 0x1000, SP_OPTIONS},
+ {"NAND 4MiB 5V 8-bit", 0x6b, 512, 4, 0x2000, SP_OPTIONS},
+ {"NAND 1MiB 3,3V 8-bit", 0xe8, 256, 1, 0x1000, SP_OPTIONS},
+ {"NAND 1MiB 3,3V 8-bit", 0xec, 256, 1, 0x1000, SP_OPTIONS},
+ {"NAND 2MiB 3,3V 8-bit", 0xea, 256, 2, 0x1000, SP_OPTIONS},
+ {"NAND 4MiB 3,3V 8-bit", 0xd5, 512, 4, 0x2000, SP_OPTIONS},
+ {"NAND 4MiB 3,3V 8-bit", 0xe3, 512, 4, 0x2000, SP_OPTIONS},
+ {"NAND 4MiB 3,3V 8-bit", 0xe5, 512, 4, 0x2000, SP_OPTIONS},
+ {"NAND 8MiB 3,3V 8-bit", 0xd6, 512, 8, 0x2000, SP_OPTIONS},
- {"NAND 8MiB 1,8V 8-bit", 0x39, 512, 8, 0x2000, 0},
- {"NAND 8MiB 3,3V 8-bit", 0xe6, 512, 8, 0x2000, 0},
- {"NAND 8MiB 1,8V 16-bit", 0x49, 512, 8, 0x2000, NAND_BUSWIDTH_16},
- {"NAND 8MiB 3,3V 16-bit", 0x59, 512, 8, 0x2000, NAND_BUSWIDTH_16},
+ {"NAND 8MiB 1,8V 8-bit", 0x39, 512, 8, 0x2000, SP_OPTIONS},
+ {"NAND 8MiB 3,3V 8-bit", 0xe6, 512, 8, 0x2000, SP_OPTIONS},
+ {"NAND 8MiB 1,8V 16-bit", 0x49, 512, 8, 0x2000, SP_OPTIONS16},
+ {"NAND 8MiB 3,3V 16-bit", 0x59, 512, 8, 0x2000, SP_OPTIONS16},
#endif
- {"NAND 16MiB 1,8V 8-bit", 0x33, 512, 16, 0x4000, 0},
- {"NAND 16MiB 3,3V 8-bit", 0x73, 512, 16, 0x4000, 0},
- {"NAND 16MiB 1,8V 16-bit", 0x43, 512, 16, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 16MiB 3,3V 16-bit", 0x53, 512, 16, 0x4000, NAND_BUSWIDTH_16},
+ {"NAND 16MiB 1,8V 8-bit", 0x33, 512, 16, 0x4000, SP_OPTIONS},
+ {"NAND 16MiB 3,3V 8-bit", 0x73, 512, 16, 0x4000, SP_OPTIONS},
+ {"NAND 16MiB 1,8V 16-bit", 0x43, 512, 16, 0x4000, SP_OPTIONS16},
+ {"NAND 16MiB 3,3V 16-bit", 0x53, 512, 16, 0x4000, SP_OPTIONS16},
- {"NAND 32MiB 1,8V 8-bit", 0x35, 512, 32, 0x4000, 0},
- {"NAND 32MiB 3,3V 8-bit", 0x75, 512, 32, 0x4000, 0},
- {"NAND 32MiB 1,8V 16-bit", 0x45, 512, 32, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 32MiB 3,3V 16-bit", 0x55, 512, 32, 0x4000, NAND_BUSWIDTH_16},
+ {"NAND 32MiB 1,8V 8-bit", 0x35, 512, 32, 0x4000, SP_OPTIONS},
+ {"NAND 32MiB 3,3V 8-bit", 0x75, 512, 32, 0x4000, SP_OPTIONS},
+ {"NAND 32MiB 1,8V 16-bit", 0x45, 512, 32, 0x4000, SP_OPTIONS16},
+ {"NAND 32MiB 3,3V 16-bit", 0x55, 512, 32, 0x4000, SP_OPTIONS16},
- {"NAND 64MiB 1,8V 8-bit", 0x36, 512, 64, 0x4000, 0},
- {"NAND 64MiB 3,3V 8-bit", 0x76, 512, 64, 0x4000, 0},
- {"NAND 64MiB 1,8V 16-bit", 0x46, 512, 64, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 64MiB 3,3V 16-bit", 0x56, 512, 64, 0x4000, NAND_BUSWIDTH_16},
+ {"NAND 64MiB 1,8V 8-bit", 0x36, 512, 64, 0x4000, SP_OPTIONS},
+ {"NAND 64MiB 3,3V 8-bit", 0x76, 512, 64, 0x4000, SP_OPTIONS},
+ {"NAND 64MiB 1,8V 16-bit", 0x46, 512, 64, 0x4000, SP_OPTIONS16},
+ {"NAND 64MiB 3,3V 16-bit", 0x56, 512, 64, 0x4000, SP_OPTIONS16},
- {"NAND 128MiB 1,8V 8-bit", 0x78, 512, 128, 0x4000, 0},
- {"NAND 128MiB 1,8V 8-bit", 0x39, 512, 128, 0x4000, 0},
- {"NAND 128MiB 3,3V 8-bit", 0x79, 512, 128, 0x4000, 0},
- {"NAND 128MiB 1,8V 16-bit", 0x72, 512, 128, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 128MiB 1,8V 16-bit", 0x49, 512, 128, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 128MiB 3,3V 16-bit", 0x74, 512, 128, 0x4000, NAND_BUSWIDTH_16},
- {"NAND 128MiB 3,3V 16-bit", 0x59, 512, 128, 0x4000, NAND_BUSWIDTH_16},
+ {"NAND 128MiB 1,8V 8-bit", 0x78, 512, 128, 0x4000, SP_OPTIONS},
+ {"NAND 128MiB 1,8V 8-bit", 0x39, 512, 128, 0x4000, SP_OPTIONS},
+ {"NAND 128MiB 3,3V 8-bit", 0x79, 512, 128, 0x4000, SP_OPTIONS},
+ {"NAND 128MiB 1,8V 16-bit", 0x72, 512, 128, 0x4000, SP_OPTIONS16},
+ {"NAND 128MiB 1,8V 16-bit", 0x49, 512, 128, 0x4000, SP_OPTIONS16},
+ {"NAND 128MiB 3,3V 16-bit", 0x74, 512, 128, 0x4000, SP_OPTIONS16},
+ {"NAND 128MiB 3,3V 16-bit", 0x59, 512, 128, 0x4000, SP_OPTIONS16},
- {"NAND 256MiB 3,3V 8-bit", 0x71, 512, 256, 0x4000, 0},
+ {"NAND 256MiB 3,3V 8-bit", 0x71, 512, 256, 0x4000, SP_OPTIONS},
/*
* These are the new chips with large page size. The pagesize and the
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 8b4e96e..6bbd90e 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1746,6 +1746,8 @@
bond_compute_features(bond);
+ bond_update_speed_duplex(new_slave);
+
read_lock(&bond->lock);
new_slave->last_arp_rx = jiffies -
@@ -1798,8 +1800,6 @@
new_slave->link == BOND_LINK_DOWN ? "DOWN" :
(new_slave->link == BOND_LINK_UP ? "UP" : "BACK"));
- bond_update_speed_duplex(new_slave);
-
if (USES_PRIMARY(bond->params.mode) && bond->params.primary[0]) {
/* if there is a primary slave, remember it */
if (strcmp(bond->params.primary, new_slave->dev->name) == 0) {
@@ -2374,8 +2374,6 @@
bond_set_backup_slave(slave);
}
- bond_update_speed_duplex(slave);
-
pr_info("%s: link status definitely up for interface %s, %u Mbps %s duplex.\n",
bond->dev->name, slave->dev->name,
slave->speed, slave->duplex ? "full" : "half");
diff --git a/drivers/net/bonding/bond_sysfs.c b/drivers/net/bonding/bond_sysfs.c
index 1c9e09f..db103e03 100644
--- a/drivers/net/bonding/bond_sysfs.c
+++ b/drivers/net/bonding/bond_sysfs.c
@@ -183,6 +183,11 @@
sprintf(linkname, "slave_%s", slave->name);
ret = sysfs_create_link(&(master->dev.kobj), &(slave->dev.kobj),
linkname);
+
+ /* free the master link created earlier in case of error */
+ if (ret)
+ sysfs_remove_link(&(slave->dev.kobj), "master");
+
return ret;
}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index a923bc4..4046f97 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -2760,6 +2760,7 @@
bp->port.pmf = 0;
load_error1:
bnx2x_napi_disable(bp);
+ bnx2x_del_all_napi(bp);
/* clear pf_load status, as it was already set */
if (IS_PF(bp))
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c
index 5682054..91ecd6a 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_dcb.c
@@ -2139,12 +2139,12 @@
break;
default:
BNX2X_ERR("Non valid capability ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
DP(BNX2X_MSG_DCB, "capid %d:%x\n", capid, *cap);
@@ -2170,12 +2170,12 @@
break;
default:
BNX2X_ERR("Non valid TC-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
@@ -2188,7 +2188,7 @@
return -EINVAL;
}
-static u8 bnx2x_dcbnl_get_pfc_state(struct net_device *netdev)
+static u8 bnx2x_dcbnl_get_pfc_state(struct net_device *netdev)
{
struct bnx2x *bp = netdev_priv(netdev);
DP(BNX2X_MSG_DCB, "state = %d\n", bp->dcbx_local_feat.pfc.enabled);
@@ -2390,12 +2390,12 @@
break;
default:
BNX2X_ERR("Non valid featrue-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "DCB disabled\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
@@ -2431,12 +2431,12 @@
break;
default:
BNX2X_ERR("Non valid featrue-ID\n");
- rval = -EINVAL;
+ rval = 1;
break;
}
} else {
DP(BNX2X_MSG_DCB, "dcbnl call not valid\n");
- rval = -EINVAL;
+ rval = 1;
}
return rval;
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
index 364e37e..198f6f1 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
@@ -459,8 +459,9 @@
#define UPDATE_QSTAT(s, t) \
do { \
- qstats->t##_hi = qstats_old->t##_hi + le32_to_cpu(s.hi); \
qstats->t##_lo = qstats_old->t##_lo + le32_to_cpu(s.lo); \
+ qstats->t##_hi = qstats_old->t##_hi + le32_to_cpu(s.hi) \
+ + ((qstats->t##_lo < qstats_old->t##_lo) ? 1 : 0); \
} while (0)
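
The rewritten macro adds the low 32-bit words first so that a wrap of the low accumulator can be detected (the new _lo being smaller than the old _lo) and carried into the high word; the previous ordering updated _hi first and silently dropped that carry. The same idea in a small stand-alone model, using hypothetical names:

        #include <stdint.h>

        struct split_ctr { uint32_t lo, hi; };   /* 64-bit counter as two halves */

        static void update_ctr(struct split_ctr *cur, const struct split_ctr *old,
                               uint32_t delta_lo, uint32_t delta_hi)
        {
                cur->lo = old->lo + delta_lo;
                cur->hi = old->hi + delta_hi +
                          ((cur->lo < old->lo) ? 1 : 0);   /* carry on wrap */
        }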
#define UPDATE_QSTAT_OLD(f) \
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 93729f9..67d2663 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -4130,6 +4130,14 @@
tp->link_config.active_speed = tp->link_config.speed;
tp->link_config.active_duplex = tp->link_config.duplex;
+ if (tg3_asic_rev(tp) == ASIC_REV_5714) {
+ /* With autoneg disabled, 5715 only links up when the
+ * advertisement register has the configured speed
+ * enabled.
+ */
+ tg3_writephy(tp, MII_ADVERTISE, ADVERTISE_ALL);
+ }
+
bmcr = 0;
switch (tp->link_config.speed) {
default:
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
index 4ce6203..8049268 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
@@ -497,8 +497,9 @@
}
#define EEPROM_STAT_ADDR 0x7bfc
-#define VPD_BASE 0
#define VPD_LEN 512
+#define VPD_BASE 0x400
+#define VPD_BASE_OLD 0
/**
* t4_seeprom_wp - enable/disable EEPROM write protection
@@ -524,7 +525,7 @@
int get_vpd_params(struct adapter *adapter, struct vpd_params *p)
{
u32 cclk_param, cclk_val;
- int i, ret;
+ int i, ret, addr;
int ec, sn;
u8 *vpd, csum;
unsigned int vpdr_len, kw_offset, id_len;
@@ -533,7 +534,12 @@
if (!vpd)
return -ENOMEM;
- ret = pci_read_vpd(adapter->pdev, VPD_BASE, VPD_LEN, vpd);
+ ret = pci_read_vpd(adapter->pdev, VPD_BASE, sizeof(u32), vpd);
+ if (ret < 0)
+ goto out;
+ addr = *vpd == 0x82 ? VPD_BASE : VPD_BASE_OLD;
+
+ ret = pci_read_vpd(adapter->pdev, addr, VPD_LEN, vpd);
if (ret < 0)
goto out;
diff --git a/drivers/net/ethernet/dec/tulip/Kconfig b/drivers/net/ethernet/dec/tulip/Kconfig
index 0c37fb2..1df33c7 100644
--- a/drivers/net/ethernet/dec/tulip/Kconfig
+++ b/drivers/net/ethernet/dec/tulip/Kconfig
@@ -108,6 +108,7 @@
config DE4X5
tristate "Generic DECchip & DIGITAL EtherWORKS PCI/EISA"
depends on (PCI || EISA)
+ depends on VIRT_TO_BUS || ALPHA || PPC || SPARC
select CRC32
---help---
This is support for the DIGITAL series of PCI/EISA Ethernet cards.
diff --git a/drivers/net/ethernet/freescale/fec.c b/drivers/net/ethernet/freescale/fec.c
index 069a155..911d025 100644
--- a/drivers/net/ethernet/freescale/fec.c
+++ b/drivers/net/ethernet/freescale/fec.c
@@ -934,24 +934,28 @@
goto spin_unlock;
}
- /* Duplex link change */
if (phy_dev->link) {
- if (fep->full_duplex != phy_dev->duplex) {
- fec_restart(ndev, phy_dev->duplex);
- /* prevent unnecessary second fec_restart() below */
+ if (!fep->link) {
fep->link = phy_dev->link;
status_change = 1;
}
- }
- /* Link on or off change */
- if (phy_dev->link != fep->link) {
- fep->link = phy_dev->link;
- if (phy_dev->link)
+ if (fep->full_duplex != phy_dev->duplex)
+ status_change = 1;
+
+ if (phy_dev->speed != fep->speed) {
+ fep->speed = phy_dev->speed;
+ status_change = 1;
+ }
+
+ /* if any of the above changed restart the FEC */
+ if (status_change)
fec_restart(ndev, phy_dev->duplex);
- else
+ } else {
+ if (fep->link) {
fec_stop(ndev);
- status_change = 1;
+ status_change = 1;
+ }
}
spin_unlock:
@@ -1328,7 +1332,7 @@
static void fec_enet_free_buffers(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- int i;
+ unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
@@ -1352,7 +1356,7 @@
static int fec_enet_alloc_buffers(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- int i;
+ unsigned int i;
struct sk_buff *skb;
struct bufdesc *bdp;
@@ -1437,6 +1441,7 @@
struct fec_enet_private *fep = netdev_priv(ndev);
/* Don't know what to do yet. */
+ napi_disable(&fep->napi);
fep->opened = 0;
netif_stop_queue(ndev);
fec_stop(ndev);
@@ -1593,7 +1598,7 @@
struct fec_enet_private *fep = netdev_priv(ndev);
struct bufdesc *cbd_base;
struct bufdesc *bdp;
- int i;
+ unsigned int i;
/* Allocate memory for buffer descriptors. */
cbd_base = dma_alloc_coherent(NULL, PAGE_SIZE, &fep->bd_dma,
diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
index f539007..eb43729 100644
--- a/drivers/net/ethernet/freescale/fec.h
+++ b/drivers/net/ethernet/freescale/fec.h
@@ -240,6 +240,7 @@
phy_interface_t phy_interface;
int link;
int full_duplex;
+ int speed;
struct completion mdio_done;
int irq[FEC_IRQ_NUM];
int bufdesc_ex;
diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
index 1f17ca0..0d8df40 100644
--- a/drivers/net/ethernet/freescale/fec_ptp.c
+++ b/drivers/net/ethernet/freescale/fec_ptp.c
@@ -128,6 +128,7 @@
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
}
+EXPORT_SYMBOL(fec_ptp_start_cyclecounter);
/**
* fec_ptp_adjfreq - adjust ptp cycle frequency
@@ -318,6 +319,7 @@
return copy_to_user(ifr->ifr_data, &config, sizeof(config)) ?
-EFAULT : 0;
}
+EXPORT_SYMBOL(fec_ptp_ioctl);
/**
* fec_time_keep - call timecounter_read every second to avoid timer overrun
@@ -383,3 +385,4 @@
pr_info("registered PHC device on %s\n", ndev->name);
}
}
+EXPORT_SYMBOL(fec_ptp_init);
diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c
index b64542a..12b1d84 100644
--- a/drivers/net/ethernet/intel/igb/e1000_82575.c
+++ b/drivers/net/ethernet/intel/igb/e1000_82575.c
@@ -1818,27 +1818,32 @@
**/
void igb_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf)
{
- u32 dtxswc;
+ u32 reg_val, reg_offset;
switch (hw->mac.type) {
case e1000_82576:
+ reg_offset = E1000_DTXSWC;
+ break;
case e1000_i350:
- dtxswc = rd32(E1000_DTXSWC);
- if (enable) {
- dtxswc |= (E1000_DTXSWC_MAC_SPOOF_MASK |
- E1000_DTXSWC_VLAN_SPOOF_MASK);
- /* The PF can spoof - it has to in order to
- * support emulation mode NICs */
- dtxswc ^= (1 << pf | 1 << (pf + MAX_NUM_VFS));
- } else {
- dtxswc &= ~(E1000_DTXSWC_MAC_SPOOF_MASK |
- E1000_DTXSWC_VLAN_SPOOF_MASK);
- }
- wr32(E1000_DTXSWC, dtxswc);
+ reg_offset = E1000_TXSWC;
break;
default:
- break;
+ return;
}
+
+ reg_val = rd32(reg_offset);
+ if (enable) {
+ reg_val |= (E1000_DTXSWC_MAC_SPOOF_MASK |
+ E1000_DTXSWC_VLAN_SPOOF_MASK);
+ /* The PF can spoof - it has to in order to
+ * support emulation mode NICs
+ */
+ reg_val ^= (1 << pf | 1 << (pf + MAX_NUM_VFS));
+ } else {
+ reg_val &= ~(E1000_DTXSWC_MAC_SPOOF_MASK |
+ E1000_DTXSWC_VLAN_SPOOF_MASK);
+ }
+ wr32(reg_offset, reg_val);
}
/**
diff --git a/drivers/net/ethernet/intel/igb/igb_hwmon.c b/drivers/net/ethernet/intel/igb/igb_hwmon.c
index 4623502..0478a1a 100644
--- a/drivers/net/ethernet/intel/igb/igb_hwmon.c
+++ b/drivers/net/ethernet/intel/igb/igb_hwmon.c
@@ -39,7 +39,7 @@
#include <linux/pci.h>
#ifdef CONFIG_IGB_HWMON
-struct i2c_board_info i350_sensor_info = {
+static struct i2c_board_info i350_sensor_info = {
I2C_BOARD_INFO("i350bb", (0Xf8 >> 1)),
};
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 4dbd629..8496adf 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -2542,8 +2542,8 @@
if ((hw->mac.type == e1000_i210) || (hw->mac.type == e1000_i211))
return;
- igb_enable_sriov(pdev, max_vfs);
pci_sriov_set_totalvfs(pdev, 7);
+ igb_enable_sriov(pdev, max_vfs);
#endif /* CONFIG_PCI_IOV */
}
@@ -2652,7 +2652,7 @@
if (max_vfs > 7) {
dev_warn(&pdev->dev,
"Maximum of 7 VFs per PF, using max\n");
- adapter->vfs_allocated_count = 7;
+ max_vfs = adapter->vfs_allocated_count = 7;
} else
adapter->vfs_allocated_count = max_vfs;
if (adapter->vfs_allocated_count)
diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c
index 0987822..0a23750 100644
--- a/drivers/net/ethernet/intel/igb/igb_ptp.c
+++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
@@ -740,7 +740,7 @@
case e1000_82576:
snprintf(adapter->ptp_caps.name, 16, "%pm", netdev->dev_addr);
adapter->ptp_caps.owner = THIS_MODULE;
- adapter->ptp_caps.max_adj = 1000000000;
+ adapter->ptp_caps.max_adj = 999999881;
adapter->ptp_caps.n_ext_ts = 0;
adapter->ptp_caps.pps = 0;
adapter->ptp_caps.adjfreq = igb_ptp_adjfreq_82576;
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index c3db6cd..2b6cb5c 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -944,9 +944,17 @@
free_irq(adapter->msix_entries[vector].vector,
adapter->q_vector[vector]);
}
- pci_disable_msix(adapter->pdev);
- kfree(adapter->msix_entries);
- adapter->msix_entries = NULL;
+ /* This failure is non-recoverable - it indicates the system is
+ * out of MSIX vector resources and the VF driver cannot run
+ * without them. Set the number of msix vectors to zero
+ * indicating that not enough can be allocated. The error
+ * will be returned to the user indicating device open failed.
+ * Any further attempts to force the driver to open will also
+ * fail. The only way to recover is to unload the driver and
+ * reload it again. If the system has recovered some MSIX
+ * vectors then it may succeed.
+ */
+ adapter->num_msix_vectors = 0;
return err;
}
@@ -2572,6 +2580,15 @@
struct ixgbe_hw *hw = &adapter->hw;
int err;
+ /* A previous failure to open the device because of a lack of
+ * available MSIX vector resources may have reset the number
+ * of msix vectors variable to zero. The only way to recover
+ * is to unload/reload the driver and hope that the system has
+ * been able to recover some MSIX vector resources.
+ */
+ if (!adapter->num_msix_vectors)
+ return -ENOMEM;
+
/* disallow open during test */
if (test_bit(__IXGBEVF_TESTING, &adapter->state))
return -EBUSY;
@@ -2628,7 +2645,6 @@
err_req_irq:
ixgbevf_down(adapter);
- ixgbevf_free_irq(adapter);
err_setup_rx:
ixgbevf_free_all_rx_resources(adapter);
err_setup_tx:
diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
index 6a21274..bfdb0686 100644
--- a/drivers/net/ethernet/lantiq_etop.c
+++ b/drivers/net/ethernet/lantiq_etop.c
@@ -769,7 +769,7 @@
return 0;
err_free:
- kfree(dev);
+ free_netdev(dev);
err_out:
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index 995d4b6..f278b10 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1637,6 +1637,17 @@
/* Flush multicast filter */
mlx4_SET_MCAST_FLTR(mdev->dev, priv->port, 0, 1, MLX4_MCAST_CONFIG);
+ /* Remove flow steering rules for the port*/
+ if (mdev->dev->caps.steering_mode ==
+ MLX4_STEERING_MODE_DEVICE_MANAGED) {
+ ASSERT_RTNL();
+ list_for_each_entry_safe(flow, tmp_flow,
+ &priv->ethtool_list, list) {
+ mlx4_flow_detach(mdev->dev, flow->id);
+ list_del(&flow->list);
+ }
+ }
+
mlx4_en_destroy_drop_qp(priv);
/* Free TX Rings */
@@ -1657,17 +1668,6 @@
if (!(mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAGS2_REASSIGN_MAC_EN))
mdev->mac_removed[priv->port] = 1;
- /* Remove flow steering rules for the port*/
- if (mdev->dev->caps.steering_mode ==
- MLX4_STEERING_MODE_DEVICE_MANAGED) {
- ASSERT_RTNL();
- list_for_each_entry_safe(flow, tmp_flow,
- &priv->ethtool_list, list) {
- mlx4_flow_detach(mdev->dev, flow->id);
- list_del(&flow->list);
- }
- }
-
/* Free RX Rings */
for (i = 0; i < priv->rx_ring_num; i++) {
mlx4_en_deactivate_rx_ring(priv, &priv->rx_ring[i]);
diff --git a/drivers/net/ethernet/mellanox/mlx4/eq.c b/drivers/net/ethernet/mellanox/mlx4/eq.c
index 251ae2f..8e3123a 100644
--- a/drivers/net/ethernet/mellanox/mlx4/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/eq.c
@@ -771,7 +771,7 @@
struct mlx4_slave_event_eq_info *event_eq =
priv->mfunc.master.slave_state[slave].event_eq;
u32 in_modifier = vhcr->in_modifier;
- u32 eqn = in_modifier & 0x1FF;
+ u32 eqn = in_modifier & 0x3FF;
u64 in_param = vhcr->in_param;
int err = 0;
int i;
diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
index 2995687..1391b52 100644
--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
@@ -99,6 +99,7 @@
struct list_head mcg_list;
spinlock_t mcg_spl;
int local_qpn;
+ atomic_t ref_count;
};
enum res_mtt_states {
@@ -197,6 +198,7 @@
struct res_fs_rule {
struct res_common com;
+ int qpn;
};
static void *res_tracker_lookup(struct rb_root *root, u64 res_id)
@@ -355,7 +357,7 @@
return dev->caps.num_mpts - 1;
}
-static void *find_res(struct mlx4_dev *dev, int res_id,
+static void *find_res(struct mlx4_dev *dev, u64 res_id,
enum mlx4_resource type)
{
struct mlx4_priv *priv = mlx4_priv(dev);
@@ -447,6 +449,7 @@
ret->local_qpn = id;
INIT_LIST_HEAD(&ret->mcg_list);
spin_lock_init(&ret->mcg_spl);
+ atomic_set(&ret->ref_count, 0);
return &ret->com;
}
@@ -554,7 +557,7 @@
return &ret->com;
}
-static struct res_common *alloc_fs_rule_tr(u64 id)
+static struct res_common *alloc_fs_rule_tr(u64 id, int qpn)
{
struct res_fs_rule *ret;
@@ -564,7 +567,7 @@
ret->com.res_id = id;
ret->com.state = RES_FS_RULE_ALLOCATED;
-
+ ret->qpn = qpn;
return &ret->com;
}
@@ -602,7 +605,7 @@
ret = alloc_xrcdn_tr(id);
break;
case RES_FS_RULE:
- ret = alloc_fs_rule_tr(id);
+ ret = alloc_fs_rule_tr(id, extra);
break;
default:
return NULL;
@@ -671,10 +674,14 @@
static int remove_qp_ok(struct res_qp *res)
{
- if (res->com.state == RES_QP_BUSY)
+ if (res->com.state == RES_QP_BUSY || atomic_read(&res->ref_count) ||
+ !list_empty(&res->mcg_list)) {
+ pr_err("resource tracker: fail to remove qp, state %d, ref_count %d\n",
+ res->com.state, atomic_read(&res->ref_count));
return -EBUSY;
- else if (res->com.state != RES_QP_RESERVED)
+ } else if (res->com.state != RES_QP_RESERVED) {
return -EPERM;
+ }
return 0;
}
@@ -3124,6 +3131,7 @@
struct list_head *rlist = &tracker->slave_list[slave].res_list[RES_MAC];
int err;
int qpn;
+ struct res_qp *rqp;
struct mlx4_net_trans_rule_hw_ctrl *ctrl;
struct _rule_hw *rule_header;
int header_id;
@@ -3134,7 +3142,7 @@
ctrl = (struct mlx4_net_trans_rule_hw_ctrl *)inbox->buf;
qpn = be32_to_cpu(ctrl->qpn) & 0xffffff;
- err = get_res(dev, slave, qpn, RES_QP, NULL);
+ err = get_res(dev, slave, qpn, RES_QP, &rqp);
if (err) {
pr_err("Steering rule with qpn 0x%x rejected.\n", qpn);
return err;
@@ -3175,14 +3183,16 @@
if (err)
goto err_put;
- err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, 0);
+ err = add_res_range(dev, slave, vhcr->out_param, 1, RES_FS_RULE, qpn);
if (err) {
mlx4_err(dev, "Fail to add flow steering resources.\n ");
/* detach rule*/
mlx4_cmd(dev, vhcr->out_param, 0, 0,
MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
+ goto err_put;
}
+ atomic_inc(&rqp->ref_count);
err_put:
put_res(dev, slave, qpn, RES_QP);
return err;
@@ -3195,20 +3205,35 @@
struct mlx4_cmd_info *cmd)
{
int err;
+ struct res_qp *rqp;
+ struct res_fs_rule *rrule;
if (dev->caps.steering_mode !=
MLX4_STEERING_MODE_DEVICE_MANAGED)
return -EOPNOTSUPP;
+ err = get_res(dev, slave, vhcr->in_param, RES_FS_RULE, &rrule);
+ if (err)
+ return err;
+ /* Release the rule from busy state before removal */
+ put_res(dev, slave, vhcr->in_param, RES_FS_RULE);
+ err = get_res(dev, slave, rrule->qpn, RES_QP, &rqp);
+ if (err)
+ return err;
+
err = rem_res_range(dev, slave, vhcr->in_param, 1, RES_FS_RULE, 0);
if (err) {
mlx4_err(dev, "Fail to remove flow steering resources.\n ");
- return err;
+ goto out;
}
err = mlx4_cmd(dev, vhcr->in_param, 0, 0,
MLX4_QP_FLOW_STEERING_DETACH, MLX4_CMD_TIME_CLASS_A,
MLX4_CMD_NATIVE);
+ if (!err)
+ atomic_dec(&rqp->ref_count);
+out:
+ put_res(dev, slave, rrule->qpn, RES_QP);
return err;
}
@@ -3806,6 +3831,7 @@
mutex_lock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex);
/*VLAN*/
rem_slave_macs(dev, slave);
+ rem_slave_fs_rule(dev, slave);
rem_slave_qps(dev, slave);
rem_slave_srqs(dev, slave);
rem_slave_cqs(dev, slave);
@@ -3814,6 +3840,5 @@
rem_slave_mtts(dev, slave);
rem_slave_counters(dev, slave);
rem_slave_xrcdns(dev, slave);
- rem_slave_fs_rule(dev, slave);
mutex_unlock(&priv->mfunc.master.res_tracker.slave_list[slave].mutex);
}
diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
index c4122c8..efa29b7 100644
--- a/drivers/net/ethernet/nxp/lpc_eth.c
+++ b/drivers/net/ethernet/nxp/lpc_eth.c
@@ -1472,7 +1472,8 @@
}
platform_set_drvdata(pdev, ndev);
- if (lpc_mii_init(pldat) != 0)
+ ret = lpc_mii_init(pldat);
+ if (ret)
goto err_out_unregister_netdev;
netdev_info(ndev, "LPC mac at 0x%08x irq %d\n",
diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
index 39ab4d0..73ce7dd 100644
--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
@@ -1726,9 +1726,9 @@
skb->protocol = eth_type_trans(skb, netdev);
if (tcp_ip_status & PCH_GBE_RXD_ACC_STAT_TCPIPOK)
- skb->ip_summed = CHECKSUM_NONE;
- else
skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
napi_gro_receive(&adapter->napi, skb);
(*work_done)++;
diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index 33e9617..bf5e3cf 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -2220,6 +2220,7 @@
/* MDIO bus release function */
static int sh_mdio_release(struct net_device *ndev)
{
+ struct sh_eth_private *mdp = netdev_priv(ndev);
struct mii_bus *bus = dev_get_drvdata(&ndev->dev);
/* unregister mdio bus */
@@ -2234,6 +2235,9 @@
/* free bitbang info */
free_mdio_bitbang(bus);
+ /* free bitbang memory */
+ kfree(mdp->bitbang);
+
return 0;
}
@@ -2262,6 +2266,7 @@
bitbang->ctrl.ops = &bb_ops;
/* MII controller setting */
+ mdp->bitbang = bitbang;
mdp->mii_bus = alloc_mdio_bitbang(&bitbang->ctrl);
if (!mdp->mii_bus) {
ret = -ENOMEM;
@@ -2441,6 +2446,11 @@
}
mdp->tsu_addr = ioremap(rtsu->start,
resource_size(rtsu));
+ if (mdp->tsu_addr == NULL) {
+ ret = -ENOMEM;
+ dev_err(&pdev->dev, "TSU ioremap failed.\n");
+ goto out_release;
+ }
mdp->port = devno % 2;
ndev->features = NETIF_F_HW_VLAN_FILTER;
}
diff --git a/drivers/net/ethernet/renesas/sh_eth.h b/drivers/net/ethernet/renesas/sh_eth.h
index bae84fd..e665567 100644
--- a/drivers/net/ethernet/renesas/sh_eth.h
+++ b/drivers/net/ethernet/renesas/sh_eth.h
@@ -705,6 +705,7 @@
const u16 *reg_offset;
void __iomem *addr;
void __iomem *tsu_addr;
+ struct bb_info *bitbang;
u32 num_rx_ring;
u32 num_tx_ring;
dma_addr_t rx_desc_dma;
diff --git a/drivers/net/ethernet/sfc/nic.c b/drivers/net/ethernet/sfc/nic.c
index 0ad790c..eaa8e87 100644
--- a/drivers/net/ethernet/sfc/nic.c
+++ b/drivers/net/ethernet/sfc/nic.c
@@ -376,7 +376,8 @@
return false;
tx_queue->empty_read_count = 0;
- return ((empty_read_count ^ write_count) & ~EFX_EMPTY_COUNT_VALID) == 0;
+ return ((empty_read_count ^ write_count) & ~EFX_EMPTY_COUNT_VALID) == 0
+ && tx_queue->write_count - write_count == 1;
}
/* For each entry inserted into the software descriptor ring, create a
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 01ffbc4..df32a09 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -905,7 +905,7 @@
/* If there is no more tx desc left free then we need to
* tell the kernel to stop sending us tx frames.
*/
- if (unlikely(cpdma_check_free_tx_desc(priv->txch)))
+ if (unlikely(!cpdma_check_free_tx_desc(priv->txch)))
netif_stop_queue(ndev);
return NETDEV_TX_OK;
@@ -1364,7 +1364,7 @@
struct platform_device *mdio;
parp = of_get_property(slave_node, "phy_id", &lenp);
- if ((parp == NULL) && (lenp != (sizeof(void *) * 2))) {
+ if ((parp == NULL) || (lenp != (sizeof(void *) * 2))) {
pr_err("Missing slave[%d] phy_id property\n", i);
ret = -EINVAL;
goto error_ret;
diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index 52c0536..ae1b77a 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -1102,7 +1102,7 @@
/* If there is no more tx desc left free then we need to
* tell the kernel to stop sending us tx frames.
*/
- if (unlikely(cpdma_check_free_tx_desc(priv->txchan)))
+ if (unlikely(!cpdma_check_free_tx_desc(priv->txchan)))
netif_stop_queue(ndev);
return NETDEV_TX_OK;
diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
index 37add21..59ac143 100644
--- a/drivers/net/netconsole.c
+++ b/drivers/net/netconsole.c
@@ -666,6 +666,7 @@
goto done;
spin_lock_irqsave(&target_list_lock, flags);
+restart:
list_for_each_entry(nt, &target_list, list) {
netconsole_target_get(nt);
if (nt->np.dev == dev) {
@@ -678,15 +679,17 @@
case NETDEV_UNREGISTER:
/*
* rtnl_lock already held
+ * we might sleep in __netpoll_cleanup()
*/
- if (nt->np.dev) {
- __netpoll_cleanup(&nt->np);
- dev_put(nt->np.dev);
- nt->np.dev = NULL;
- }
+ spin_unlock_irqrestore(&target_list_lock, flags);
+ __netpoll_cleanup(&nt->np);
+ spin_lock_irqsave(&target_list_lock, flags);
+ dev_put(nt->np.dev);
+ nt->np.dev = NULL;
nt->enabled = 0;
stopped = true;
- break;
+ netconsole_target_put(nt);
+ goto restart;
}
}
netconsole_target_put(nt);
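
The hunk drops target_list_lock around __netpoll_cleanup() because that call can sleep, then restarts the whole list walk since other targets may have been added or removed while the lock was not held; the reference taken on the entry keeps it alive across the unlocked window. Reduced to its bare shape, with hypothetical helper names, the pattern looks like this:

        /* Sketch only: drop a spinlock around a sleeping call, then restart
         * the walk because the list may have changed meanwhile.
         */
        spin_lock_irqsave(&list_lock, flags);
restart:
        list_for_each_entry(item, &item_list, list) {
                get_item(item);                          /* pin across unlock */
                if (item_needs_cleanup(item)) {
                        spin_unlock_irqrestore(&list_lock, flags);
                        sleeping_cleanup(item);          /* may sleep */
                        spin_lock_irqsave(&list_lock, flags);
                        put_item(item);
                        goto restart;
                }
                put_item(item);
        }
        spin_unlock_irqrestore(&list_lock, flags);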
diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
index 3b6e9b8..7c769d8 100644
--- a/drivers/net/usb/Kconfig
+++ b/drivers/net/usb/Kconfig
@@ -268,7 +268,7 @@
select CRC16
select CRC32
help
- This option adds support for SMSC LAN95XX based USB 2.0
+ This option adds support for SMSC LAN75XX based USB 2.0
Gigabit Ethernet adapters.
config USB_NET_SMSC95XX
diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
index 248d2dc..16c8429 100644
--- a/drivers/net/usb/cdc_mbim.c
+++ b/drivers/net/usb/cdc_mbim.c
@@ -68,18 +68,9 @@
struct cdc_ncm_ctx *ctx;
struct usb_driver *subdriver = ERR_PTR(-ENODEV);
int ret = -ENODEV;
- u8 data_altsetting = CDC_NCM_DATA_ALTSETTING_NCM;
+ u8 data_altsetting = cdc_ncm_select_altsetting(dev, intf);
struct cdc_mbim_state *info = (void *)&dev->data;
- /* see if interface supports MBIM alternate setting */
- if (intf->num_altsetting == 2) {
- if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
- usb_set_interface(dev->udev,
- intf->cur_altsetting->desc.bInterfaceNumber,
- CDC_NCM_COMM_ALTSETTING_MBIM);
- data_altsetting = CDC_NCM_DATA_ALTSETTING_MBIM;
- }
-
/* Probably NCM, defer for cdc_ncm_bind */
if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
goto err;
diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
index 61b74a2..4709fa3 100644
--- a/drivers/net/usb/cdc_ncm.c
+++ b/drivers/net/usb/cdc_ncm.c
@@ -55,6 +55,14 @@
#define DRIVER_VERSION "14-Mar-2012"
+#if IS_ENABLED(CONFIG_USB_NET_CDC_MBIM)
+static bool prefer_mbim = true;
+#else
+static bool prefer_mbim;
+#endif
+module_param(prefer_mbim, bool, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(prefer_mbim, "Prefer MBIM setting on dual NCM/MBIM functions");
+
static void cdc_ncm_txpath_bh(unsigned long param);
static void cdc_ncm_tx_timeout_start(struct cdc_ncm_ctx *ctx);
static enum hrtimer_restart cdc_ncm_tx_timer_cb(struct hrtimer *hr_timer);
@@ -550,9 +558,12 @@
}
EXPORT_SYMBOL_GPL(cdc_ncm_unbind);
-static int cdc_ncm_bind(struct usbnet *dev, struct usb_interface *intf)
+/* Select the MBIM altsetting iff it is preferred and available,
+ * returning the number of the corresponding data interface altsetting
+ */
+u8 cdc_ncm_select_altsetting(struct usbnet *dev, struct usb_interface *intf)
{
- int ret;
+ struct usb_host_interface *alt;
/* The MBIM spec defines a NCM compatible default altsetting,
* which we may have matched:
@@ -568,23 +579,27 @@
* endpoint descriptors, shall be constructed according to
* the rules given in section 6 (USB Device Model) of this
* specification."
- *
- * Do not bind to such interfaces, allowing cdc_mbim to handle
- * them
*/
-#if IS_ENABLED(CONFIG_USB_NET_CDC_MBIM)
- if ((intf->num_altsetting == 2) &&
- !usb_set_interface(dev->udev,
- intf->cur_altsetting->desc.bInterfaceNumber,
- CDC_NCM_COMM_ALTSETTING_MBIM)) {
- if (cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
- return -ENODEV;
- else
- usb_set_interface(dev->udev,
- intf->cur_altsetting->desc.bInterfaceNumber,
- CDC_NCM_COMM_ALTSETTING_NCM);
+ if (prefer_mbim && intf->num_altsetting == 2) {
+ alt = usb_altnum_to_altsetting(intf, CDC_NCM_COMM_ALTSETTING_MBIM);
+ if (alt && cdc_ncm_comm_intf_is_mbim(alt) &&
+ !usb_set_interface(dev->udev,
+ intf->cur_altsetting->desc.bInterfaceNumber,
+ CDC_NCM_COMM_ALTSETTING_MBIM))
+ return CDC_NCM_DATA_ALTSETTING_MBIM;
}
-#endif
+ return CDC_NCM_DATA_ALTSETTING_NCM;
+}
+EXPORT_SYMBOL_GPL(cdc_ncm_select_altsetting);
+
+static int cdc_ncm_bind(struct usbnet *dev, struct usb_interface *intf)
+{
+ int ret;
+
+ /* MBIM backwards compatible function? */
+ cdc_ncm_select_altsetting(dev, intf);
+ if (cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
+ return -ENODEV;
/* NCM data altsetting is always 1 */
ret = cdc_ncm_bind_common(dev, intf, 1);
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index efb5c7c..968d5d5 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -139,16 +139,9 @@
BUILD_BUG_ON((sizeof(((struct usbnet *)0)->data) < sizeof(struct qmi_wwan_state)));
- /* control and data is shared? */
- if (intf->cur_altsetting->desc.bNumEndpoints == 3) {
- info->control = intf;
- info->data = intf;
- goto shared;
- }
-
- /* else require a single interrupt status endpoint on control intf */
- if (intf->cur_altsetting->desc.bNumEndpoints != 1)
- goto err;
+ /* set up initial state */
+ info->control = intf;
+ info->data = intf;
/* and a number of CDC descriptors */
while (len > 3) {
@@ -207,25 +200,14 @@
buf += h->bLength;
}
- /* did we find all the required ones? */
- if (!(found & (1 << USB_CDC_HEADER_TYPE)) ||
- !(found & (1 << USB_CDC_UNION_TYPE))) {
- dev_err(&intf->dev, "CDC functional descriptors missing\n");
- goto err;
- }
-
- /* verify CDC Union */
- if (desc->bInterfaceNumber != cdc_union->bMasterInterface0) {
- dev_err(&intf->dev, "bogus CDC Union: master=%u\n", cdc_union->bMasterInterface0);
- goto err;
- }
-
- /* need to save these for unbind */
- info->control = intf;
- info->data = usb_ifnum_to_if(dev->udev, cdc_union->bSlaveInterface0);
- if (!info->data) {
- dev_err(&intf->dev, "bogus CDC Union: slave=%u\n", cdc_union->bSlaveInterface0);
- goto err;
+ /* Use separate control and data interfaces if we found a CDC Union */
+ if (cdc_union) {
+ info->data = usb_ifnum_to_if(dev->udev, cdc_union->bSlaveInterface0);
+ if (desc->bInterfaceNumber != cdc_union->bMasterInterface0 || !info->data) {
+ dev_err(&intf->dev, "bogus CDC Union: master=%u, slave=%u\n",
+ cdc_union->bMasterInterface0, cdc_union->bSlaveInterface0);
+ goto err;
+ }
}
/* errors aren't fatal - we can live with the dynamic address */
@@ -235,11 +217,12 @@
}
/* claim data interface and set it up */
- status = usb_driver_claim_interface(driver, info->data, dev);
- if (status < 0)
- goto err;
+ if (info->control != info->data) {
+ status = usb_driver_claim_interface(driver, info->data, dev);
+ if (status < 0)
+ goto err;
+ }
-shared:
status = qmi_wwan_register_subdriver(dev);
if (status < 0 && info->control != info->data) {
usb_set_intfdata(info->data, NULL);
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_calib.c b/drivers/net/wireless/ath/ath9k/ar9003_calib.c
index 4cc1394..f76c3ca 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_calib.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_calib.c
@@ -1023,6 +1023,7 @@
AR_PHY_AGC_CONTROL_FLTR_CAL |
AR_PHY_AGC_CONTROL_PKDET_CAL;
+ /* Use chip chainmask only for calibration */
ar9003_hw_set_chain_masks(ah, ah->caps.rx_chainmask, ah->caps.tx_chainmask);
if (rtt) {
@@ -1150,6 +1151,9 @@
ar9003_hw_rtt_disable(ah);
}
+ /* Revert chainmask to runtime parameters */
+ ar9003_hw_set_chain_masks(ah, ah->rxchainmask, ah->txchainmask);
+
/* Initialize list pointers */
ah->cal_list = ah->cal_list_last = ah->cal_list_curr = NULL;
diff --git a/drivers/net/wireless/ath/ath9k/link.c b/drivers/net/wireless/ath/ath9k/link.c
index ade3afb..39c84ec 100644
--- a/drivers/net/wireless/ath/ath9k/link.c
+++ b/drivers/net/wireless/ath/ath9k/link.c
@@ -28,21 +28,21 @@
int i;
bool needreset = false;
- for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++)
- if (ATH_TXQ_SETUP(sc, i)) {
- txq = &sc->tx.txq[i];
- ath_txq_lock(sc, txq);
- if (txq->axq_depth) {
- if (txq->axq_tx_inprogress) {
- needreset = true;
- ath_txq_unlock(sc, txq);
- break;
- } else {
- txq->axq_tx_inprogress = true;
- }
+ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+ txq = sc->tx.txq_map[i];
+
+ ath_txq_lock(sc, txq);
+ if (txq->axq_depth) {
+ if (txq->axq_tx_inprogress) {
+ needreset = true;
+ ath_txq_unlock(sc, txq);
+ break;
+ } else {
+ txq->axq_tx_inprogress = true;
}
- ath_txq_unlock_complete(sc, txq);
}
+ ath_txq_unlock_complete(sc, txq);
+ }
if (needreset) {
ath_dbg(ath9k_hw_common(sc->sc_ah), RESET,
diff --git a/drivers/net/wireless/iwlegacy/3945-mac.c b/drivers/net/wireless/iwlegacy/3945-mac.c
index 3630a41..c353b5f 100644
--- a/drivers/net/wireless/iwlegacy/3945-mac.c
+++ b/drivers/net/wireless/iwlegacy/3945-mac.c
@@ -475,6 +475,7 @@
dma_addr_t txcmd_phys;
int txq_id = skb_get_queue_mapping(skb);
u16 len, idx, hdr_len;
+ u16 firstlen, secondlen;
u8 id;
u8 unicast;
u8 sta_id;
@@ -589,21 +590,22 @@
len =
sizeof(struct il3945_tx_cmd) + sizeof(struct il_cmd_header) +
hdr_len;
- len = (len + 3) & ~3;
+ firstlen = (len + 3) & ~3;
/* Physical address of this Tx command's header (not MAC header!),
* within command buffer array. */
txcmd_phys =
- pci_map_single(il->pci_dev, &out_cmd->hdr, len, PCI_DMA_TODEVICE);
+ pci_map_single(il->pci_dev, &out_cmd->hdr, firstlen,
+ PCI_DMA_TODEVICE);
if (unlikely(pci_dma_mapping_error(il->pci_dev, txcmd_phys)))
goto drop_unlock;
/* Set up TFD's 2nd entry to point directly to remainder of skb,
* if any (802.11 null frames have no payload). */
- len = skb->len - hdr_len;
- if (len) {
+ secondlen = skb->len - hdr_len;
+ if (secondlen > 0) {
phys_addr =
- pci_map_single(il->pci_dev, skb->data + hdr_len, len,
+ pci_map_single(il->pci_dev, skb->data + hdr_len, secondlen,
PCI_DMA_TODEVICE);
if (unlikely(pci_dma_mapping_error(il->pci_dev, phys_addr)))
goto drop_unlock;
@@ -611,12 +613,12 @@
/* Add buffer containing Tx command and MAC(!) header to TFD's
* first entry */
- il->ops->txq_attach_buf_to_tfd(il, txq, txcmd_phys, len, 1, 0);
+ il->ops->txq_attach_buf_to_tfd(il, txq, txcmd_phys, firstlen, 1, 0);
dma_unmap_addr_set(out_meta, mapping, txcmd_phys);
- dma_unmap_len_set(out_meta, len, len);
- if (len)
- il->ops->txq_attach_buf_to_tfd(il, txq, phys_addr, len, 0,
- U32_PAD(len));
+ dma_unmap_len_set(out_meta, len, firstlen);
+ if (secondlen > 0)
+ il->ops->txq_attach_buf_to_tfd(il, txq, phys_addr, secondlen, 0,
+ U32_PAD(secondlen));
if (!ieee80211_has_morefrags(hdr->frame_control)) {
txq->need_update = 1;
diff --git a/drivers/net/wireless/mwifiex/cmdevt.c b/drivers/net/wireless/mwifiex/cmdevt.c
index 20a6c55..b5c8b96 100644
--- a/drivers/net/wireless/mwifiex/cmdevt.c
+++ b/drivers/net/wireless/mwifiex/cmdevt.c
@@ -157,6 +157,20 @@
return -1;
}
+ cmd_code = le16_to_cpu(host_cmd->command);
+ cmd_size = le16_to_cpu(host_cmd->size);
+
+ if (adapter->hw_status == MWIFIEX_HW_STATUS_RESET &&
+ cmd_code != HostCmd_CMD_FUNC_SHUTDOWN &&
+ cmd_code != HostCmd_CMD_FUNC_INIT) {
+ dev_err(adapter->dev,
+ "DNLD_CMD: FW in reset state, ignore cmd %#x\n",
+ cmd_code);
+ mwifiex_complete_cmd(adapter, cmd_node);
+ mwifiex_insert_cmd_to_free_q(adapter, cmd_node);
+ return -1;
+ }
+
/* Set command sequence number */
adapter->seq_num++;
host_cmd->seq_num = cpu_to_le16(HostCmd_SET_SEQ_NO_BSS_INFO
@@ -168,9 +182,6 @@
adapter->curr_cmd = cmd_node;
spin_unlock_irqrestore(&adapter->mwifiex_cmd_lock, flags);
- cmd_code = le16_to_cpu(host_cmd->command);
- cmd_size = le16_to_cpu(host_cmd->size);
-
/* Adjust skb length */
if (cmd_node->cmd_skb->len > cmd_size)
/*
@@ -484,8 +495,6 @@
ret = mwifiex_send_cmd_async(priv, cmd_no, cmd_action, cmd_oid,
data_buf);
- if (!ret)
- ret = mwifiex_wait_queue_complete(adapter);
return ret;
}
@@ -588,9 +597,10 @@
if (cmd_no == HostCmd_CMD_802_11_SCAN) {
mwifiex_queue_scan_cmd(priv, cmd_node);
} else {
- adapter->cmd_queued = cmd_node;
mwifiex_insert_cmd_to_pending_q(adapter, cmd_node, true);
queue_work(adapter->workqueue, &adapter->main_work);
+ if (cmd_node->wait_q_enabled)
+ ret = mwifiex_wait_queue_complete(adapter, cmd_node);
}
return ret;
diff --git a/drivers/net/wireless/mwifiex/init.c b/drivers/net/wireless/mwifiex/init.c
index e38aa9b..0ff4c37 100644
--- a/drivers/net/wireless/mwifiex/init.c
+++ b/drivers/net/wireless/mwifiex/init.c
@@ -709,6 +709,14 @@
return ret;
}
+ /* cancel current command */
+ if (adapter->curr_cmd) {
+ dev_warn(adapter->dev, "curr_cmd is still in processing\n");
+ del_timer(&adapter->cmd_timer);
+ mwifiex_insert_cmd_to_free_q(adapter, adapter->curr_cmd);
+ adapter->curr_cmd = NULL;
+ }
+
/* shut down mwifiex */
dev_dbg(adapter->dev, "info: shutdown mwifiex...\n");
diff --git a/drivers/net/wireless/mwifiex/join.c b/drivers/net/wireless/mwifiex/join.c
index 246aa62..2fe0ceb 100644
--- a/drivers/net/wireless/mwifiex/join.c
+++ b/drivers/net/wireless/mwifiex/join.c
@@ -1117,10 +1117,9 @@
adhoc_join->bss_descriptor.bssid,
adhoc_join->bss_descriptor.ssid);
- for (i = 0; bss_desc->supported_rates[i] &&
- i < MWIFIEX_SUPPORTED_RATES;
- i++)
- ;
+ for (i = 0; i < MWIFIEX_SUPPORTED_RATES &&
+ bss_desc->supported_rates[i]; i++)
+ ;
rates_size = i;
/* Copy Data Rates from the Rates recorded in scan response */
diff --git a/drivers/net/wireless/mwifiex/main.h b/drivers/net/wireless/mwifiex/main.h
index 553adfb..7035ade 100644
--- a/drivers/net/wireless/mwifiex/main.h
+++ b/drivers/net/wireless/mwifiex/main.h
@@ -723,7 +723,6 @@
u16 cmd_wait_q_required;
struct mwifiex_wait_queue cmd_wait_q;
u8 scan_wait_q_woken;
- struct cmd_ctrl_node *cmd_queued;
spinlock_t queue_lock; /* lock for tx queues */
struct completion fw_load;
u8 country_code[IEEE80211_COUNTRY_STRING_LEN];
@@ -1018,7 +1017,8 @@
struct mwifiex_multicast_list *mcast_list);
int mwifiex_copy_mcast_addr(struct mwifiex_multicast_list *mlist,
struct net_device *dev);
-int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter);
+int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter,
+ struct cmd_ctrl_node *cmd_queued);
int mwifiex_bss_start(struct mwifiex_private *priv, struct cfg80211_bss *bss,
struct cfg80211_ssid *req_ssid);
int mwifiex_cancel_hs(struct mwifiex_private *priv, int cmd_type);
diff --git a/drivers/net/wireless/mwifiex/scan.c b/drivers/net/wireless/mwifiex/scan.c
index bb60c27..d215b4d 100644
--- a/drivers/net/wireless/mwifiex/scan.c
+++ b/drivers/net/wireless/mwifiex/scan.c
@@ -1388,10 +1388,13 @@
list_del(&cmd_node->list);
spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
flags);
- adapter->cmd_queued = cmd_node;
mwifiex_insert_cmd_to_pending_q(adapter, cmd_node,
true);
queue_work(adapter->workqueue, &adapter->main_work);
+
+ /* Perform internal scan synchronously */
+ if (!priv->scan_request)
+ mwifiex_wait_queue_complete(adapter, cmd_node);
} else {
spin_unlock_irqrestore(&adapter->scan_pending_q_lock,
flags);
@@ -1946,9 +1949,6 @@
/* Normal scan */
ret = mwifiex_scan_networks(priv, NULL);
- if (!ret)
- ret = mwifiex_wait_queue_complete(priv->adapter);
-
up(&priv->async_sem);
return ret;
diff --git a/drivers/net/wireless/mwifiex/sta_ioctl.c b/drivers/net/wireless/mwifiex/sta_ioctl.c
index 9f33c92..13100f8 100644
--- a/drivers/net/wireless/mwifiex/sta_ioctl.c
+++ b/drivers/net/wireless/mwifiex/sta_ioctl.c
@@ -54,16 +54,10 @@
* This function waits on a cmd wait queue. It also cancels the pending
* request after waking up, in case of errors.
*/
-int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter)
+int mwifiex_wait_queue_complete(struct mwifiex_adapter *adapter,
+ struct cmd_ctrl_node *cmd_queued)
{
int status;
- struct cmd_ctrl_node *cmd_queued;
-
- if (!adapter->cmd_queued)
- return 0;
-
- cmd_queued = adapter->cmd_queued;
- adapter->cmd_queued = NULL;
dev_dbg(adapter->dev, "cmd pending\n");
atomic_inc(&adapter->cmd_pending);
diff --git a/drivers/net/wireless/rt2x00/Kconfig b/drivers/net/wireless/rt2x00/Kconfig
index 44d6ead..2bf4efa 100644
--- a/drivers/net/wireless/rt2x00/Kconfig
+++ b/drivers/net/wireless/rt2x00/Kconfig
@@ -55,10 +55,10 @@
config RT2800PCI
tristate "Ralink rt27xx/rt28xx/rt30xx (PCI/PCIe/PCMCIA) support"
- depends on PCI || RALINK_RT288X || RALINK_RT305X
+ depends on PCI || SOC_RT288X || SOC_RT305X
select RT2800_LIB
select RT2X00_LIB_PCI if PCI
- select RT2X00_LIB_SOC if RALINK_RT288X || RALINK_RT305X
+ select RT2X00_LIB_SOC if SOC_RT288X || SOC_RT305X
select RT2X00_LIB_FIRMWARE
select RT2X00_LIB_CRYPTO
select CRC_CCITT
diff --git a/drivers/net/wireless/rt2x00/rt2800pci.c b/drivers/net/wireless/rt2x00/rt2800pci.c
index 48a01aa..ded73da 100644
--- a/drivers/net/wireless/rt2x00/rt2800pci.c
+++ b/drivers/net/wireless/rt2x00/rt2800pci.c
@@ -89,7 +89,7 @@
rt2x00pci_register_write(rt2x00dev, H2M_MAILBOX_CID, ~0);
}
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
static int rt2800pci_read_eeprom_soc(struct rt2x00_dev *rt2x00dev)
{
void __iomem *base_addr = ioremap(0x1F040000, EEPROM_SIZE);
@@ -107,7 +107,7 @@
{
return -ENOMEM;
}
-#endif /* CONFIG_RALINK_RT288X || CONFIG_RALINK_RT305X */
+#endif /* CONFIG_SOC_RT288X || CONFIG_SOC_RT305X */
#ifdef CONFIG_PCI
static void rt2800pci_eepromregister_read(struct eeprom_93cx6 *eeprom)
@@ -1177,7 +1177,7 @@
#endif /* CONFIG_PCI */
MODULE_LICENSE("GPL");
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
static int rt2800soc_probe(struct platform_device *pdev)
{
return rt2x00soc_probe(pdev, &rt2800pci_ops);
@@ -1194,7 +1194,7 @@
.suspend = rt2x00soc_suspend,
.resume = rt2x00soc_resume,
};
-#endif /* CONFIG_RALINK_RT288X || CONFIG_RALINK_RT305X */
+#endif /* CONFIG_SOC_RT288X || CONFIG_SOC_RT305X */
#ifdef CONFIG_PCI
static int rt2800pci_probe(struct pci_dev *pci_dev,
@@ -1217,7 +1217,7 @@
{
int ret = 0;
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
ret = platform_driver_register(&rt2800soc_driver);
if (ret)
return ret;
@@ -1225,7 +1225,7 @@
#ifdef CONFIG_PCI
ret = pci_register_driver(&rt2800pci_driver);
if (ret) {
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
platform_driver_unregister(&rt2800soc_driver);
#endif
return ret;
@@ -1240,7 +1240,7 @@
#ifdef CONFIG_PCI
pci_unregister_driver(&rt2800pci_driver);
#endif
-#if defined(CONFIG_RALINK_RT288X) || defined(CONFIG_RALINK_RT305X)
+#if defined(CONFIG_SOC_RT288X) || defined(CONFIG_SOC_RT305X)
platform_driver_unregister(&rt2800soc_driver);
#endif
}
diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c
index b1ccff4..c08d0f4 100644
--- a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c
+++ b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c
@@ -1377,74 +1377,57 @@
void rtl92cu_set_check_bssid(struct ieee80211_hw *hw, bool check_bssid)
{
- /* dummy routine needed for callback from rtl_op_configure_filter() */
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+ struct rtl_hal *rtlhal = rtl_hal(rtlpriv);
+ u32 reg_rcr = rtl_read_dword(rtlpriv, REG_RCR);
+
+ if (rtlpriv->psc.rfpwr_state != ERFON)
+ return;
+
+ if (check_bssid) {
+ u8 tmp;
+ if (IS_NORMAL_CHIP(rtlhal->version)) {
+ reg_rcr |= (RCR_CBSSID_DATA | RCR_CBSSID_BCN);
+ tmp = BIT(4);
+ } else {
+ reg_rcr |= RCR_CBSSID;
+ tmp = BIT(4) | BIT(5);
+ }
+ rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR,
+ (u8 *) (&reg_rcr));
+ _rtl92cu_set_bcn_ctrl_reg(hw, 0, tmp);
+ } else {
+ u8 tmp;
+ if (IS_NORMAL_CHIP(rtlhal->version)) {
+ reg_rcr &= ~(RCR_CBSSID_DATA | RCR_CBSSID_BCN);
+ tmp = BIT(4);
+ } else {
+ reg_rcr &= ~RCR_CBSSID;
+ tmp = BIT(4) | BIT(5);
+ }
+ reg_rcr &= (~(RCR_CBSSID_DATA | RCR_CBSSID_BCN));
+ rtlpriv->cfg->ops->set_hw_reg(hw,
+ HW_VAR_RCR, (u8 *) (&reg_rcr));
+ _rtl92cu_set_bcn_ctrl_reg(hw, tmp, 0);
+ }
}
/*========================================================================== */
-static void _rtl92cu_set_check_bssid(struct ieee80211_hw *hw,
- enum nl80211_iftype type)
-{
- struct rtl_priv *rtlpriv = rtl_priv(hw);
- u32 reg_rcr = rtl_read_dword(rtlpriv, REG_RCR);
- struct rtl_hal *rtlhal = rtl_hal(rtlpriv);
- struct rtl_phy *rtlphy = &(rtlpriv->phy);
- u8 filterout_non_associated_bssid = false;
-
- switch (type) {
- case NL80211_IFTYPE_ADHOC:
- case NL80211_IFTYPE_STATION:
- filterout_non_associated_bssid = true;
- break;
- case NL80211_IFTYPE_UNSPECIFIED:
- case NL80211_IFTYPE_AP:
- default:
- break;
- }
- if (filterout_non_associated_bssid) {
- if (IS_NORMAL_CHIP(rtlhal->version)) {
- switch (rtlphy->current_io_type) {
- case IO_CMD_RESUME_DM_BY_SCAN:
- reg_rcr |= (RCR_CBSSID_DATA | RCR_CBSSID_BCN);
- rtlpriv->cfg->ops->set_hw_reg(hw,
- HW_VAR_RCR, (u8 *)(&reg_rcr));
- /* enable update TSF */
- _rtl92cu_set_bcn_ctrl_reg(hw, 0, BIT(4));
- break;
- case IO_CMD_PAUSE_DM_BY_SCAN:
- reg_rcr &= ~(RCR_CBSSID_DATA | RCR_CBSSID_BCN);
- rtlpriv->cfg->ops->set_hw_reg(hw,
- HW_VAR_RCR, (u8 *)(&reg_rcr));
- /* disable update TSF */
- _rtl92cu_set_bcn_ctrl_reg(hw, BIT(4), 0);
- break;
- }
- } else {
- reg_rcr |= (RCR_CBSSID);
- rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR,
- (u8 *)(&reg_rcr));
- _rtl92cu_set_bcn_ctrl_reg(hw, 0, (BIT(4)|BIT(5)));
- }
- } else if (filterout_non_associated_bssid == false) {
- if (IS_NORMAL_CHIP(rtlhal->version)) {
- reg_rcr &= (~(RCR_CBSSID_DATA | RCR_CBSSID_BCN));
- rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR,
- (u8 *)(&reg_rcr));
- _rtl92cu_set_bcn_ctrl_reg(hw, BIT(4), 0);
- } else {
- reg_rcr &= (~RCR_CBSSID);
- rtlpriv->cfg->ops->set_hw_reg(hw, HW_VAR_RCR,
- (u8 *)(&reg_rcr));
- _rtl92cu_set_bcn_ctrl_reg(hw, (BIT(4)|BIT(5)), 0);
- }
- }
-}
-
int rtl92cu_set_network_type(struct ieee80211_hw *hw, enum nl80211_iftype type)
{
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+
if (_rtl92cu_set_media_status(hw, type))
return -EOPNOTSUPP;
- _rtl92cu_set_check_bssid(hw, type);
+
+ if (rtlpriv->mac80211.link_state == MAC80211_LINKED) {
+ if (type != NL80211_IFTYPE_AP)
+ rtl92cu_set_check_bssid(hw, true);
+ } else {
+ rtl92cu_set_check_bssid(hw, false);
+ }
+
return 0;
}
@@ -2058,8 +2041,6 @@
(shortgi_rate << 4) | (shortgi_rate);
}
rtl_write_dword(rtlpriv, REG_ARFR0 + ratr_index * 4, ratr_value);
- RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, "%x\n",
- rtl_read_dword(rtlpriv, REG_ARFR0));
}
void rtl92cu_update_hal_rate_mask(struct ieee80211_hw *hw, u8 rssi_level)
diff --git a/drivers/net/wireless/rtlwifi/usb.c b/drivers/net/wireless/rtlwifi/usb.c
index 156b527..5847d6d 100644
--- a/drivers/net/wireless/rtlwifi/usb.c
+++ b/drivers/net/wireless/rtlwifi/usb.c
@@ -851,6 +851,7 @@
if (unlikely(!_urb)) {
RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG,
"Can't allocate urb. Drop skb!\n");
+ kfree_skb(skb);
return;
}
_rtl_submit_tx_urb(hw, _urb);
diff --git a/drivers/pci/rom.c b/drivers/pci/rom.c
index ab886b7..b41ac77 100644
--- a/drivers/pci/rom.c
+++ b/drivers/pci/rom.c
@@ -100,6 +100,27 @@
return min((size_t)(image - rom), size);
}
+static loff_t pci_find_rom(struct pci_dev *pdev, size_t *size)
+{
+ struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
+ loff_t start;
+
+ /* assign the ROM an address if it doesn't have one */
+ if (res->parent == NULL && pci_assign_resource(pdev, PCI_ROM_RESOURCE))
+ return 0;
+ start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
+ *size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
+
+ if (*size == 0)
+ return 0;
+
+ /* Enable ROM space decodes */
+ if (pci_enable_rom(pdev))
+ return 0;
+
+ return start;
+}
+
/**
* pci_map_rom - map a PCI ROM to kernel space
* @pdev: pointer to pci device struct
@@ -114,21 +135,15 @@
void __iomem *pci_map_rom(struct pci_dev *pdev, size_t *size)
{
struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];
- loff_t start;
+ loff_t start = 0;
void __iomem *rom;
/*
- * Some devices may provide ROMs via a source other than the BAR
- */
- if (pdev->rom && pdev->romlen) {
- *size = pdev->romlen;
- return phys_to_virt(pdev->rom);
- /*
* IORESOURCE_ROM_SHADOW set on x86, x86_64 and IA64 supports legacy
* memory map if the VGA enable bit of the Bridge Control register is
* set for embedded VGA.
*/
- } else if (res->flags & IORESOURCE_ROM_SHADOW) {
+ if (res->flags & IORESOURCE_ROM_SHADOW) {
/* primary video rom always starts here */
start = (loff_t)0xC0000;
*size = 0x20000; /* cover C000:0 through E000:0 */
@@ -139,21 +154,21 @@
return (void __iomem *)(unsigned long)
pci_resource_start(pdev, PCI_ROM_RESOURCE);
} else {
- /* assign the ROM an address if it doesn't have one */
- if (res->parent == NULL &&
- pci_assign_resource(pdev,PCI_ROM_RESOURCE))
- return NULL;
- start = pci_resource_start(pdev, PCI_ROM_RESOURCE);
- *size = pci_resource_len(pdev, PCI_ROM_RESOURCE);
- if (*size == 0)
- return NULL;
-
- /* Enable ROM space decodes */
- if (pci_enable_rom(pdev))
- return NULL;
+ start = pci_find_rom(pdev, size);
}
}
+ /*
+ * Some devices may provide ROMs via a source other than the BAR
+ */
+ if (!start && pdev->rom && pdev->romlen) {
+ *size = pdev->romlen;
+ return phys_to_virt(pdev->rom);
+ }
+
+ if (!start)
+ return NULL;
+
rom = ioremap(start, *size);
if (!rom) {
/* restore enable if ioremap fails */
diff --git a/drivers/pinctrl/mvebu/pinctrl-mvebu.c b/drivers/pinctrl/mvebu/pinctrl-mvebu.c
index c689c04..2d2f0a4 100644
--- a/drivers/pinctrl/mvebu/pinctrl-mvebu.c
+++ b/drivers/pinctrl/mvebu/pinctrl-mvebu.c
@@ -620,7 +620,7 @@
/* special soc specific control */
if (ctrl->mpp_get || ctrl->mpp_set) {
- if (!ctrl->name || !ctrl->mpp_set || !ctrl->mpp_set) {
+ if (!ctrl->name || !ctrl->mpp_get || !ctrl->mpp_set) {
dev_err(&pdev->dev, "wrong soc control info\n");
return -EINVAL;
}
diff --git a/drivers/pinctrl/pinconf.c b/drivers/pinctrl/pinconf.c
index ac8d382..d611ecf 100644
--- a/drivers/pinctrl/pinconf.c
+++ b/drivers/pinctrl/pinconf.c
@@ -622,7 +622,7 @@
static int pinconf_dbg_state_print(struct seq_file *s, void *d)
{
if (strlen(dbg_state_name))
- seq_printf(s, "%s\n", dbg_pinname);
+ seq_printf(s, "%s\n", dbg_state_name);
else
seq_printf(s, "No pin state set\n");
return 0;
diff --git a/drivers/pinctrl/pinconf.h b/drivers/pinctrl/pinconf.h
index e3ed8cb..bfda73d 100644
--- a/drivers/pinctrl/pinconf.h
+++ b/drivers/pinctrl/pinconf.h
@@ -90,7 +90,7 @@
* pin config.
*/
-#ifdef CONFIG_GENERIC_PINCONF
+#if defined(CONFIG_GENERIC_PINCONF) && defined(CONFIG_DEBUG_FS)
void pinconf_generic_dump_pin(struct pinctrl_dev *pctldev,
struct seq_file *s, unsigned pin);
diff --git a/drivers/pinctrl/pinctrl-abx500.c b/drivers/pinctrl/pinctrl-abx500.c
index caecdd3..c542a97 100644
--- a/drivers/pinctrl/pinctrl-abx500.c
+++ b/drivers/pinctrl/pinctrl-abx500.c
@@ -422,7 +422,7 @@
}
/* check if pin use AlternateFunction register */
- if ((af.alt_bit1 == UNUSED) && (af.alt_bit1 == UNUSED))
+ if ((af.alt_bit1 == UNUSED) && (af.alt_bit2 == UNUSED))
return mode;
/*
* if pin GPIOSEL bit is set and pin supports alternate function,
diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
index 75933a6..efb7f10 100644
--- a/drivers/pinctrl/pinctrl-at91.c
+++ b/drivers/pinctrl/pinctrl-at91.c
@@ -1277,21 +1277,80 @@
}
#ifdef CONFIG_PM
+
+static u32 wakeups[MAX_GPIO_BANKS];
+static u32 backups[MAX_GPIO_BANKS];
+
static int gpio_irq_set_wake(struct irq_data *d, unsigned state)
{
struct at91_gpio_chip *at91_gpio = irq_data_get_irq_chip_data(d);
unsigned bank = at91_gpio->pioc_idx;
+ unsigned mask = 1 << d->hwirq;
if (unlikely(bank >= MAX_GPIO_BANKS))
return -EINVAL;
+ if (state)
+ wakeups[bank] |= mask;
+ else
+ wakeups[bank] &= ~mask;
+
irq_set_irq_wake(at91_gpio->pioc_virq, state);
return 0;
}
+
+void at91_pinctrl_gpio_suspend(void)
+{
+ int i;
+
+ for (i = 0; i < gpio_banks; i++) {
+ void __iomem *pio;
+
+ if (!gpio_chips[i])
+ continue;
+
+ pio = gpio_chips[i]->regbase;
+
+ backups[i] = __raw_readl(pio + PIO_IMR);
+ __raw_writel(backups[i], pio + PIO_IDR);
+ __raw_writel(wakeups[i], pio + PIO_IER);
+
+ if (!wakeups[i]) {
+ clk_unprepare(gpio_chips[i]->clock);
+ clk_disable(gpio_chips[i]->clock);
+ } else {
+ printk(KERN_DEBUG "GPIO-%c may wake for %08x\n",
+ 'A'+i, wakeups[i]);
+ }
+ }
+}
+
+void at91_pinctrl_gpio_resume(void)
+{
+ int i;
+
+ for (i = 0; i < gpio_banks; i++) {
+ void __iomem *pio;
+
+ if (!gpio_chips[i])
+ continue;
+
+ pio = gpio_chips[i]->regbase;
+
+ if (!wakeups[i]) {
+ if (clk_prepare(gpio_chips[i]->clock) == 0)
+ clk_enable(gpio_chips[i]->clock);
+ }
+
+ __raw_writel(wakeups[i], pio + PIO_IDR);
+ __raw_writel(backups[i], pio + PIO_IER);
+ }
+}
+
#else
#define gpio_irq_set_wake NULL
-#endif
+#endif /* CONFIG_PM */
static struct irq_chip gpio_irqchip = {
.name = "GPIO",
diff --git a/drivers/pinctrl/pinmux.c b/drivers/pinctrl/pinmux.c
index 1a00658..bd83c8b 100644
--- a/drivers/pinctrl/pinmux.c
+++ b/drivers/pinctrl/pinmux.c
@@ -194,6 +194,11 @@
}
if (!gpio_range) {
+ /*
+ * A pin should not be freed more times than allocated.
+ */
+ if (WARN_ON(!desc->mux_usecount))
+ return NULL;
desc->mux_usecount--;
if (desc->mux_usecount)
return NULL;
diff --git a/drivers/rtc/rtc-at91rm9200.c b/drivers/rtc/rtc-at91rm9200.c
index 434ebc3..0a9f27e 100644
--- a/drivers/rtc/rtc-at91rm9200.c
+++ b/drivers/rtc/rtc-at91rm9200.c
@@ -44,6 +44,7 @@
static unsigned int at91_alarm_year = AT91_RTC_EPOCH;
static void __iomem *at91_rtc_regs;
static int irq;
+static u32 at91_rtc_imr;
/*
* Decode time/date into rtc_time structure
@@ -108,9 +109,11 @@
cr = at91_rtc_read(AT91_RTC_CR);
at91_rtc_write(AT91_RTC_CR, cr | AT91_RTC_UPDCAL | AT91_RTC_UPDTIM);
+ at91_rtc_imr |= AT91_RTC_ACKUPD;
at91_rtc_write(AT91_RTC_IER, AT91_RTC_ACKUPD);
wait_for_completion(&at91_rtc_updated); /* wait for ACKUPD interrupt */
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD);
+ at91_rtc_imr &= ~AT91_RTC_ACKUPD;
at91_rtc_write(AT91_RTC_TIMR,
bin2bcd(tm->tm_sec) << 0
@@ -142,7 +145,7 @@
tm->tm_yday = rtc_year_days(tm->tm_mday, tm->tm_mon, tm->tm_year);
tm->tm_year = at91_alarm_year - 1900;
- alrm->enabled = (at91_rtc_read(AT91_RTC_IMR) & AT91_RTC_ALARM)
+ alrm->enabled = (at91_rtc_imr & AT91_RTC_ALARM)
? 1 : 0;
dev_dbg(dev, "%s(): %4d-%02d-%02d %02d:%02d:%02d\n", __func__,
@@ -168,6 +171,7 @@
tm.tm_sec = alrm->time.tm_sec;
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ALARM);
+ at91_rtc_imr &= ~AT91_RTC_ALARM;
at91_rtc_write(AT91_RTC_TIMALR,
bin2bcd(tm.tm_sec) << 0
| bin2bcd(tm.tm_min) << 8
@@ -180,6 +184,7 @@
if (alrm->enabled) {
at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_ALARM);
+ at91_rtc_imr |= AT91_RTC_ALARM;
at91_rtc_write(AT91_RTC_IER, AT91_RTC_ALARM);
}
@@ -196,9 +201,12 @@
if (enabled) {
at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_ALARM);
+ at91_rtc_imr |= AT91_RTC_ALARM;
at91_rtc_write(AT91_RTC_IER, AT91_RTC_ALARM);
- } else
+ } else {
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ALARM);
+ at91_rtc_imr &= ~AT91_RTC_ALARM;
+ }
return 0;
}
@@ -207,12 +215,10 @@
*/
static int at91_rtc_proc(struct device *dev, struct seq_file *seq)
{
- unsigned long imr = at91_rtc_read(AT91_RTC_IMR);
-
seq_printf(seq, "update_IRQ\t: %s\n",
- (imr & AT91_RTC_ACKUPD) ? "yes" : "no");
+ (at91_rtc_imr & AT91_RTC_ACKUPD) ? "yes" : "no");
seq_printf(seq, "periodic_IRQ\t: %s\n",
- (imr & AT91_RTC_SECEV) ? "yes" : "no");
+ (at91_rtc_imr & AT91_RTC_SECEV) ? "yes" : "no");
return 0;
}
@@ -227,7 +233,7 @@
unsigned int rtsr;
unsigned long events = 0;
- rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read(AT91_RTC_IMR);
+ rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_imr;
if (rtsr) { /* this interrupt is shared! Is it ours? */
if (rtsr & AT91_RTC_ALARM)
events |= (RTC_AF | RTC_IRQF);
@@ -291,6 +297,7 @@
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD | AT91_RTC_ALARM |
AT91_RTC_SECEV | AT91_RTC_TIMEV |
AT91_RTC_CALEV);
+ at91_rtc_imr = 0;
ret = request_irq(irq, at91_rtc_interrupt,
IRQF_SHARED,
@@ -329,6 +336,7 @@
at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD | AT91_RTC_ALARM |
AT91_RTC_SECEV | AT91_RTC_TIMEV |
AT91_RTC_CALEV);
+ at91_rtc_imr = 0;
free_irq(irq, pdev);
rtc_device_unregister(rtc);
@@ -341,31 +349,35 @@
/* AT91RM9200 RTC Power management control */
-static u32 at91_rtc_imr;
+static u32 at91_rtc_bkpimr;
+
static int at91_rtc_suspend(struct device *dev)
{
/* this IRQ is shared with DBGU and other hardware which isn't
* necessarily doing PM like we are...
*/
- at91_rtc_imr = at91_rtc_read(AT91_RTC_IMR)
- & (AT91_RTC_ALARM|AT91_RTC_SECEV);
- if (at91_rtc_imr) {
- if (device_may_wakeup(dev))
+ at91_rtc_bkpimr = at91_rtc_imr & (AT91_RTC_ALARM|AT91_RTC_SECEV);
+ if (at91_rtc_bkpimr) {
+ if (device_may_wakeup(dev)) {
enable_irq_wake(irq);
- else
- at91_rtc_write(AT91_RTC_IDR, at91_rtc_imr);
- }
+ } else {
+ at91_rtc_write(AT91_RTC_IDR, at91_rtc_bkpimr);
+ at91_rtc_imr &= ~at91_rtc_bkpimr;
+ }
+}
return 0;
}
static int at91_rtc_resume(struct device *dev)
{
- if (at91_rtc_imr) {
- if (device_may_wakeup(dev))
+ if (at91_rtc_bkpimr) {
+ if (device_may_wakeup(dev)) {
disable_irq_wake(irq);
- else
- at91_rtc_write(AT91_RTC_IER, at91_rtc_imr);
+ } else {
+ at91_rtc_imr |= at91_rtc_bkpimr;
+ at91_rtc_write(AT91_RTC_IER, at91_rtc_bkpimr);
+ }
}
return 0;
}
diff --git a/drivers/rtc/rtc-at91rm9200.h b/drivers/rtc/rtc-at91rm9200.h
index da1945e..5f940b6 100644
--- a/drivers/rtc/rtc-at91rm9200.h
+++ b/drivers/rtc/rtc-at91rm9200.h
@@ -64,7 +64,6 @@
#define AT91_RTC_SCCR 0x1c /* Status Clear Command Register */
#define AT91_RTC_IER 0x20 /* Interrupt Enable Register */
#define AT91_RTC_IDR 0x24 /* Interrupt Disable Register */
-#define AT91_RTC_IMR 0x28 /* Interrupt Mask Register */
#define AT91_RTC_VER 0x2c /* Valid Entry Register */
#define AT91_RTC_NVTIM (1 << 0) /* Non valid Time */
diff --git a/drivers/rtc/rtc-da9052.c b/drivers/rtc/rtc-da9052.c
index 0dde688..969abba 100644
--- a/drivers/rtc/rtc-da9052.c
+++ b/drivers/rtc/rtc-da9052.c
@@ -239,11 +239,9 @@
rtc->da9052 = dev_get_drvdata(pdev->dev.parent);
platform_set_drvdata(pdev, rtc);
- rtc->irq = platform_get_irq_byname(pdev, "ALM");
- ret = devm_request_threaded_irq(&pdev->dev, rtc->irq, NULL,
- da9052_rtc_irq,
- IRQF_TRIGGER_LOW | IRQF_ONESHOT,
- "ALM", rtc);
+ rtc->irq = DA9052_IRQ_ALARM;
+ ret = da9052_request_irq(rtc->da9052, rtc->irq, "ALM",
+ da9052_rtc_irq, rtc);
if (ret != 0) {
rtc_err(rtc->da9052, "irq registration failed: %d\n", ret);
return ret;
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index 9978ad4..5ac9c93 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -135,6 +135,11 @@
.release = scm_release,
};
+static bool scm_permit_request(struct scm_blk_dev *bdev, struct request *req)
+{
+ return rq_data_dir(req) != WRITE || bdev->state != SCM_WR_PROHIBIT;
+}
+
static void scm_request_prepare(struct scm_request *scmrq)
{
struct scm_blk_dev *bdev = scmrq->bdev;
@@ -195,14 +200,18 @@
scm_release_cluster(scmrq);
blk_requeue_request(bdev->rq, scmrq->request);
+ atomic_dec(&bdev->queued_reqs);
scm_request_done(scmrq);
scm_ensure_queue_restart(bdev);
}
void scm_request_finish(struct scm_request *scmrq)
{
+ struct scm_blk_dev *bdev = scmrq->bdev;
+
scm_release_cluster(scmrq);
blk_end_request_all(scmrq->request, scmrq->error);
+ atomic_dec(&bdev->queued_reqs);
scm_request_done(scmrq);
}
@@ -218,6 +227,10 @@
if (req->cmd_type != REQ_TYPE_FS)
continue;
+ if (!scm_permit_request(bdev, req)) {
+ scm_ensure_queue_restart(bdev);
+ return;
+ }
scmrq = scm_request_fetch();
if (!scmrq) {
SCM_LOG(5, "no request");
@@ -231,11 +244,13 @@
return;
}
if (scm_need_cluster_request(scmrq)) {
+ atomic_inc(&bdev->queued_reqs);
blk_start_request(req);
scm_initiate_cluster_request(scmrq);
return;
}
scm_request_prepare(scmrq);
+ atomic_inc(&bdev->queued_reqs);
blk_start_request(req);
ret = scm_start_aob(scmrq->aob);
@@ -244,7 +259,6 @@
scm_request_requeue(scmrq);
return;
}
- atomic_inc(&bdev->queued_reqs);
}
}
@@ -280,6 +294,38 @@
tasklet_hi_schedule(&bdev->tasklet);
}
+static void scm_blk_handle_error(struct scm_request *scmrq)
+{
+ struct scm_blk_dev *bdev = scmrq->bdev;
+ unsigned long flags;
+
+ if (scmrq->error != -EIO)
+ goto restart;
+
+ /* For -EIO the response block is valid. */
+ switch (scmrq->aob->response.eqc) {
+ case EQC_WR_PROHIBIT:
+ spin_lock_irqsave(&bdev->lock, flags);
+ if (bdev->state != SCM_WR_PROHIBIT)
+ pr_info("%lu: Write access to the SCM increment is suspended\n",
+ (unsigned long) bdev->scmdev->address);
+ bdev->state = SCM_WR_PROHIBIT;
+ spin_unlock_irqrestore(&bdev->lock, flags);
+ goto requeue;
+ default:
+ break;
+ }
+
+restart:
+ if (!scm_start_aob(scmrq->aob))
+ return;
+
+requeue:
+ spin_lock_irqsave(&bdev->rq_lock, flags);
+ scm_request_requeue(scmrq);
+ spin_unlock_irqrestore(&bdev->rq_lock, flags);
+}
+
static void scm_blk_tasklet(struct scm_blk_dev *bdev)
{
struct scm_request *scmrq;
@@ -293,11 +339,8 @@
spin_unlock_irqrestore(&bdev->lock, flags);
if (scmrq->error && scmrq->retries-- > 0) {
- if (scm_start_aob(scmrq->aob)) {
- spin_lock_irqsave(&bdev->rq_lock, flags);
- scm_request_requeue(scmrq);
- spin_unlock_irqrestore(&bdev->rq_lock, flags);
- }
+ scm_blk_handle_error(scmrq);
+
/* Request restarted or requeued, handle next. */
spin_lock_irqsave(&bdev->lock, flags);
continue;
@@ -310,7 +353,6 @@
}
scm_request_finish(scmrq);
- atomic_dec(&bdev->queued_reqs);
spin_lock_irqsave(&bdev->lock, flags);
}
spin_unlock_irqrestore(&bdev->lock, flags);
@@ -332,6 +374,7 @@
}
bdev->scmdev = scmdev;
+ bdev->state = SCM_OPER;
spin_lock_init(&bdev->rq_lock);
spin_lock_init(&bdev->lock);
INIT_LIST_HEAD(&bdev->finished_requests);
@@ -396,6 +439,18 @@
put_disk(bdev->gendisk);
}
+void scm_blk_set_available(struct scm_blk_dev *bdev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&bdev->lock, flags);
+ if (bdev->state == SCM_WR_PROHIBIT)
+ pr_info("%lu: Write access to the SCM increment is restored\n",
+ (unsigned long) bdev->scmdev->address);
+ bdev->state = SCM_OPER;
+ spin_unlock_irqrestore(&bdev->lock, flags);
+}
+
static int __init scm_blk_init(void)
{
int ret = -EINVAL;
diff --git a/drivers/s390/block/scm_blk.h b/drivers/s390/block/scm_blk.h
index 3c1ccf4..8b387b3 100644
--- a/drivers/s390/block/scm_blk.h
+++ b/drivers/s390/block/scm_blk.h
@@ -21,6 +21,7 @@
spinlock_t rq_lock; /* guard the request queue */
spinlock_t lock; /* guard the rest of the blockdev */
atomic_t queued_reqs;
+ enum {SCM_OPER, SCM_WR_PROHIBIT} state;
struct list_head finished_requests;
#ifdef CONFIG_SCM_BLOCK_CLUSTER_WRITE
struct list_head cluster_list;
@@ -48,6 +49,7 @@
int scm_blk_dev_setup(struct scm_blk_dev *, struct scm_device *);
void scm_blk_dev_cleanup(struct scm_blk_dev *);
+void scm_blk_set_available(struct scm_blk_dev *);
void scm_blk_irq(struct scm_device *, void *, int);
void scm_request_finish(struct scm_request *);
diff --git a/drivers/s390/block/scm_drv.c b/drivers/s390/block/scm_drv.c
index 9fa0a90..5f6180d 100644
--- a/drivers/s390/block/scm_drv.c
+++ b/drivers/s390/block/scm_drv.c
@@ -13,12 +13,23 @@
#include <asm/eadm.h>
#include "scm_blk.h"
-static void notify(struct scm_device *scmdev)
+static void scm_notify(struct scm_device *scmdev, enum scm_event event)
{
- pr_info("%lu: The capabilities of the SCM increment changed\n",
- (unsigned long) scmdev->address);
- SCM_LOG(2, "State changed");
- SCM_LOG_STATE(2, scmdev);
+ struct scm_blk_dev *bdev = dev_get_drvdata(&scmdev->dev);
+
+ switch (event) {
+ case SCM_CHANGE:
+ pr_info("%lu: The capabilities of the SCM increment changed\n",
+ (unsigned long) scmdev->address);
+ SCM_LOG(2, "State changed");
+ SCM_LOG_STATE(2, scmdev);
+ break;
+ case SCM_AVAIL:
+ SCM_LOG(2, "Increment available");
+ SCM_LOG_STATE(2, scmdev);
+ scm_blk_set_available(bdev);
+ break;
+ }
}
static int scm_probe(struct scm_device *scmdev)
@@ -64,7 +75,7 @@
.name = "scm_block",
.owner = THIS_MODULE,
},
- .notify = notify,
+ .notify = scm_notify,
.probe = scm_probe,
.remove = scm_remove,
.handler = scm_blk_irq,
diff --git a/drivers/s390/char/sclp_cmd.c b/drivers/s390/char/sclp_cmd.c
index 30a2255..cd79838 100644
--- a/drivers/s390/char/sclp_cmd.c
+++ b/drivers/s390/char/sclp_cmd.c
@@ -627,6 +627,8 @@
struct read_storage_sccb *sccb;
int i, id, assigned, rc;
+ if (OLDMEM_BASE) /* No standby memory in kdump mode */
+ return 0;
if (!early_read_info_sccb_valid)
return 0;
if ((sclp_facilities & 0xe00000000000ULL) != 0xe00000000000ULL)
diff --git a/drivers/s390/cio/chsc.c b/drivers/s390/cio/chsc.c
index 31ceef1..e16c553 100644
--- a/drivers/s390/cio/chsc.c
+++ b/drivers/s390/cio/chsc.c
@@ -433,6 +433,20 @@
" failed (rc=%d).\n", ret);
}
+static void chsc_process_sei_scm_avail(struct chsc_sei_nt0_area *sei_area)
+{
+ int ret;
+
+ CIO_CRW_EVENT(4, "chsc: scm available information\n");
+ if (sei_area->rs != 7)
+ return;
+
+ ret = scm_process_availability_information();
+ if (ret)
+ CIO_CRW_EVENT(0, "chsc: process availability information"
+ " failed (rc=%d).\n", ret);
+}
+
static void chsc_process_sei_nt2(struct chsc_sei_nt2_area *sei_area)
{
switch (sei_area->cc) {
@@ -468,6 +482,9 @@
case 12: /* scm change notification */
chsc_process_sei_scm_change(sei_area);
break;
+ case 14: /* scm available notification */
+ chsc_process_sei_scm_avail(sei_area);
+ break;
default: /* other stuff */
CIO_CRW_EVENT(2, "chsc: sei nt0 unhandled cc=%d\n",
sei_area->cc);
diff --git a/drivers/s390/cio/chsc.h b/drivers/s390/cio/chsc.h
index 227e05f..349d5fc 100644
--- a/drivers/s390/cio/chsc.h
+++ b/drivers/s390/cio/chsc.h
@@ -156,8 +156,10 @@
#ifdef CONFIG_SCM_BUS
int scm_update_information(void);
+int scm_process_availability_information(void);
#else /* CONFIG_SCM_BUS */
static inline int scm_update_information(void) { return 0; }
+static inline int scm_process_availability_information(void) { return 0; }
#endif /* CONFIG_SCM_BUS */
diff --git a/drivers/s390/cio/scm.c b/drivers/s390/cio/scm.c
index bcf20f3..46ec256 100644
--- a/drivers/s390/cio/scm.c
+++ b/drivers/s390/cio/scm.c
@@ -211,7 +211,7 @@
goto out;
scmdrv = to_scm_drv(scmdev->dev.driver);
if (changed && scmdrv->notify)
- scmdrv->notify(scmdev);
+ scmdrv->notify(scmdev, SCM_CHANGE);
out:
device_unlock(&scmdev->dev);
if (changed)
@@ -297,6 +297,22 @@
return ret;
}
+static int scm_dev_avail(struct device *dev, void *unused)
+{
+ struct scm_driver *scmdrv = to_scm_drv(dev->driver);
+ struct scm_device *scmdev = to_scm_dev(dev);
+
+ if (dev->driver && scmdrv->notify)
+ scmdrv->notify(scmdev, SCM_AVAIL);
+
+ return 0;
+}
+
+int scm_process_availability_information(void)
+{
+ return bus_for_each_dev(&scm_bus_type, NULL, NULL, scm_dev_avail);
+}
+
static int __init scm_init(void)
{
int ret;
diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
index d87961d..8c06223 100644
--- a/drivers/s390/net/qeth_core.h
+++ b/drivers/s390/net/qeth_core.h
@@ -916,6 +916,7 @@
void *reply_param);
int qeth_get_priority_queue(struct qeth_card *, struct sk_buff *, int, int);
int qeth_get_elements_no(struct qeth_card *, void *, struct sk_buff *, int);
+int qeth_get_elements_for_frags(struct sk_buff *);
int qeth_do_send_packet_fast(struct qeth_card *, struct qeth_qdio_out_q *,
struct sk_buff *, struct qeth_hdr *, int, int, int);
int qeth_do_send_packet(struct qeth_card *, struct qeth_qdio_out_q *,
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index 0d8cdff..0d73a99 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -3679,6 +3679,25 @@
}
EXPORT_SYMBOL_GPL(qeth_get_priority_queue);
+int qeth_get_elements_for_frags(struct sk_buff *skb)
+{
+ int cnt, length, e, elements = 0;
+ struct skb_frag_struct *frag;
+ char *data;
+
+ for (cnt = 0; cnt < skb_shinfo(skb)->nr_frags; cnt++) {
+ frag = &skb_shinfo(skb)->frags[cnt];
+ data = (char *)page_to_phys(skb_frag_page(frag)) +
+ frag->page_offset;
+ length = frag->size;
+ e = PFN_UP((unsigned long)data + length - 1) -
+ PFN_DOWN((unsigned long)data);
+ elements += e;
+ }
+ return elements;
+}
+EXPORT_SYMBOL_GPL(qeth_get_elements_for_frags);
+
int qeth_get_elements_no(struct qeth_card *card, void *hdr,
struct sk_buff *skb, int elems)
{
@@ -3686,7 +3705,8 @@
int elements_needed = PFN_UP((unsigned long)skb->data + dlen - 1) -
PFN_DOWN((unsigned long)skb->data);
- elements_needed += skb_shinfo(skb)->nr_frags;
+ elements_needed += qeth_get_elements_for_frags(skb);
+
if ((elements_needed + elems) > QETH_MAX_BUFFER_ELEMENTS(card)) {
QETH_DBF_MESSAGE(2, "Invalid size of IP packet "
"(Number=%d / Length=%d). Discarded.\n",
@@ -3771,12 +3791,23 @@
for (cnt = 0; cnt < skb_shinfo(skb)->nr_frags; cnt++) {
frag = &skb_shinfo(skb)->frags[cnt];
- buffer->element[element].addr = (char *)
- page_to_phys(skb_frag_page(frag))
- + frag->page_offset;
- buffer->element[element].length = frag->size;
- buffer->element[element].eflags = SBAL_EFLAGS_MIDDLE_FRAG;
- element++;
+ data = (char *)page_to_phys(skb_frag_page(frag)) +
+ frag->page_offset;
+ length = frag->size;
+ while (length > 0) {
+ length_here = PAGE_SIZE -
+ ((unsigned long) data % PAGE_SIZE);
+ if (length < length_here)
+ length_here = length;
+
+ buffer->element[element].addr = data;
+ buffer->element[element].length = length_here;
+ buffer->element[element].eflags =
+ SBAL_EFLAGS_MIDDLE_FRAG;
+ length -= length_here;
+ data += length_here;
+ element++;
+ }
}
if (buffer->element[element - 1].eflags)
diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
index 091ca0efa..8710337 100644
--- a/drivers/s390/net/qeth_l3_main.c
+++ b/drivers/s390/net/qeth_l3_main.c
@@ -623,7 +623,7 @@
return rc;
}
-static void qeth_l3_correct_routing_type(struct qeth_card *card,
+static int qeth_l3_correct_routing_type(struct qeth_card *card,
enum qeth_routing_types *type, enum qeth_prot_versions prot)
{
if (card->info.type == QETH_CARD_TYPE_IQD) {
@@ -632,7 +632,7 @@
case PRIMARY_CONNECTOR:
case SECONDARY_CONNECTOR:
case MULTICAST_ROUTER:
- return;
+ return 0;
default:
goto out_inval;
}
@@ -641,17 +641,18 @@
case NO_ROUTER:
case PRIMARY_ROUTER:
case SECONDARY_ROUTER:
- return;
+ return 0;
case MULTICAST_ROUTER:
if (qeth_is_ipafunc_supported(card, prot,
IPA_OSA_MC_ROUTER))
- return;
+ return 0;
default:
goto out_inval;
}
}
out_inval:
*type = NO_ROUTER;
+ return -EINVAL;
}
int qeth_l3_setrouting_v4(struct qeth_card *card)
@@ -660,8 +661,10 @@
QETH_CARD_TEXT(card, 3, "setrtg4");
- qeth_l3_correct_routing_type(card, &card->options.route4.type,
+ rc = qeth_l3_correct_routing_type(card, &card->options.route4.type,
QETH_PROT_IPV4);
+ if (rc)
+ return rc;
rc = qeth_l3_send_setrouting(card, card->options.route4.type,
QETH_PROT_IPV4);
@@ -683,8 +686,10 @@
if (!qeth_is_supported(card, IPA_IPV6))
return 0;
- qeth_l3_correct_routing_type(card, &card->options.route6.type,
+ rc = qeth_l3_correct_routing_type(card, &card->options.route6.type,
QETH_PROT_IPV6);
+ if (rc)
+ return rc;
rc = qeth_l3_send_setrouting(card, card->options.route6.type,
QETH_PROT_IPV6);
@@ -2898,7 +2903,9 @@
tcp_hdr(skb)->doff * 4;
int tcpd_len = skb->len - (tcpd - (unsigned long)skb->data);
int elements = PFN_UP(tcpd + tcpd_len - 1) - PFN_DOWN(tcpd);
- elements += skb_shinfo(skb)->nr_frags;
+
+ elements += qeth_get_elements_for_frags(skb);
+
return elements;
}
@@ -3348,7 +3355,6 @@
rc = -ENODEV;
goto out_remove;
}
- qeth_trace_features(card);
if (!card->dev && qeth_l3_setup_netdev(card)) {
rc = -ENODEV;
@@ -3425,6 +3431,7 @@
qeth_l3_set_multicast_list(card->dev);
rtnl_unlock();
}
+ qeth_trace_features(card);
/* let user_space know that device is online */
kobject_uevent(&gdev->dev.kobj, KOBJ_CHANGE);
mutex_unlock(&card->conf_mutex);
diff --git a/drivers/s390/net/qeth_l3_sys.c b/drivers/s390/net/qeth_l3_sys.c
index ebc3794..e70af24 100644
--- a/drivers/s390/net/qeth_l3_sys.c
+++ b/drivers/s390/net/qeth_l3_sys.c
@@ -87,6 +87,8 @@
rc = qeth_l3_setrouting_v6(card);
}
out:
+ if (rc)
+ route->type = old_route_type;
mutex_unlock(&card->conf_mutex);
return rc ? rc : count;
}
diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
index db0cf7c..a0fc7b9 100644
--- a/drivers/target/iscsi/iscsi_target_auth.c
+++ b/drivers/target/iscsi/iscsi_target_auth.c
@@ -166,6 +166,7 @@
{
char *endptr;
unsigned long id;
+ unsigned char id_as_uchar;
unsigned char digest[MD5_SIGNATURE_SIZE];
unsigned char type, response[MD5_SIGNATURE_SIZE * 2 + 2];
unsigned char identifier[10], *challenge = NULL;
@@ -355,7 +356,9 @@
goto out;
}
- sg_init_one(&sg, &id, 1);
+ /* To handle both endiannesses */
+ id_as_uchar = id;
+ sg_init_one(&sg, &id_as_uchar, 1);
ret = crypto_hash_update(&desc, &sg, 1);
if (ret < 0) {
pr_err("crypto_hash_update() failed for id\n");
diff --git a/drivers/target/target_core_file.h b/drivers/target/target_core_file.h
index bc02b01..37ffc5b 100644
--- a/drivers/target/target_core_file.h
+++ b/drivers/target/target_core_file.h
@@ -7,7 +7,7 @@
#define FD_DEVICE_QUEUE_DEPTH 32
#define FD_MAX_DEVICE_QUEUE_DEPTH 128
#define FD_BLOCKSIZE 512
-#define FD_MAX_SECTORS 1024
+#define FD_MAX_SECTORS 2048
#define RRF_EMULATE_CDB 0x01
#define RRF_GOT_LBA 0x02
diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
index 82e78d7..e992b27 100644
--- a/drivers/target/target_core_pscsi.c
+++ b/drivers/target/target_core_pscsi.c
@@ -883,7 +883,14 @@
pr_debug("PSCSI: i: %d page: %p len: %d off: %d\n", i,
page, len, off);
- while (len > 0 && data_len > 0) {
+ /*
+ * We only have one page of data in each sg element,
+ * we can not cross a page boundary.
+ */
+ if (off + len > PAGE_SIZE)
+ goto fail;
+
+ if (len > 0 && data_len > 0) {
bytes = min_t(unsigned int, len, PAGE_SIZE - off);
bytes = min(bytes, data_len);
@@ -940,9 +947,7 @@
bio = NULL;
}
- len -= bytes;
data_len -= bytes;
- off = 0;
}
}
diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index 290230d..60d4b51 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -464,8 +464,11 @@
break;
case SYNCHRONIZE_CACHE:
case SYNCHRONIZE_CACHE_16:
- if (!ops->execute_sync_cache)
- return TCM_UNSUPPORTED_SCSI_OPCODE;
+ if (!ops->execute_sync_cache) {
+ size = 0;
+ cmd->execute_cmd = sbc_emulate_noop;
+ break;
+ }
/*
* Extract LBA and range to be flushed for emulated SYNCHRONIZE_CACHE
diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c
index 9169d6a..aac9d27 100644
--- a/drivers/target/target_core_tpg.c
+++ b/drivers/target/target_core_tpg.c
@@ -711,7 +711,8 @@
if (se_tpg->se_tpg_type == TRANSPORT_TPG_TYPE_NORMAL) {
if (core_tpg_setup_virtual_lun0(se_tpg) < 0) {
- kfree(se_tpg);
+ array_free(se_tpg->tpg_lun_list,
+ TRANSPORT_MAX_LUNS_PER_TPG);
return -ENOMEM;
}
}
diff --git a/drivers/thermal/dove_thermal.c b/drivers/thermal/dove_thermal.c
index 7b0bfa0..3078c40 100644
--- a/drivers/thermal/dove_thermal.c
+++ b/drivers/thermal/dove_thermal.c
@@ -143,22 +143,18 @@
if (!priv)
return -ENOMEM;
- priv->sensor = devm_request_and_ioremap(&pdev->dev, res);
- if (!priv->sensor) {
- dev_err(&pdev->dev, "Failed to request_ioremap memory\n");
- return -EADDRNOTAVAIL;
- }
+ priv->sensor = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(priv->sensor))
+ return PTR_ERR(priv->sensor);
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (!res) {
dev_err(&pdev->dev, "Failed to get platform resource\n");
return -ENODEV;
}
- priv->control = devm_request_and_ioremap(&pdev->dev, res);
- if (!priv->control) {
- dev_err(&pdev->dev, "Failed to request_ioremap memory\n");
- return -EADDRNOTAVAIL;
- }
+ priv->control = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(priv->control))
+ return PTR_ERR(priv->control);
ret = dove_init_sensor(priv);
if (ret) {
diff --git a/drivers/thermal/exynos_thermal.c b/drivers/thermal/exynos_thermal.c
index e04ebd8..46568c0 100644
--- a/drivers/thermal/exynos_thermal.c
+++ b/drivers/thermal/exynos_thermal.c
@@ -476,7 +476,7 @@
if (IS_ERR(th_zone->therm_dev)) {
pr_err("Failed to register thermal zone device\n");
- ret = -EINVAL;
+ ret = PTR_ERR(th_zone->therm_dev);
goto err_unregister;
}
th_zone->mode = THERMAL_DEVICE_ENABLED;
diff --git a/drivers/thermal/kirkwood_thermal.c b/drivers/thermal/kirkwood_thermal.c
index 65cb4f0..e5500ed 100644
--- a/drivers/thermal/kirkwood_thermal.c
+++ b/drivers/thermal/kirkwood_thermal.c
@@ -85,11 +85,9 @@
if (!priv)
return -ENOMEM;
- priv->sensor = devm_request_and_ioremap(&pdev->dev, res);
- if (!priv->sensor) {
- dev_err(&pdev->dev, "Failed to request_ioremap memory\n");
- return -EADDRNOTAVAIL;
- }
+ priv->sensor = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(priv->sensor))
+ return PTR_ERR(priv->sensor);
thermal = thermal_zone_device_register("kirkwood_thermal", 0, 0,
priv, &ops, NULL, 0, 0);
diff --git a/drivers/thermal/rcar_thermal.c b/drivers/thermal/rcar_thermal.c
index 28f0919..2cc5b61 100644
--- a/drivers/thermal/rcar_thermal.c
+++ b/drivers/thermal/rcar_thermal.c
@@ -145,6 +145,7 @@
struct device *dev = rcar_priv_to_dev(priv);
int i;
int ctemp, old, new;
+ int ret = -EINVAL;
mutex_lock(&priv->lock);
@@ -174,7 +175,7 @@
if (!ctemp) {
dev_err(dev, "thermal sensor was broken\n");
- return -EINVAL;
+ goto err_out_unlock;
}
/*
@@ -192,10 +193,10 @@
dev_dbg(dev, "thermal%d %d -> %d\n", priv->id, priv->ctemp, ctemp);
priv->ctemp = ctemp;
-
+ ret = 0;
+err_out_unlock:
mutex_unlock(&priv->lock);
-
- return 0;
+ return ret;
}
static int rcar_thermal_get_temp(struct thermal_zone_device *zone,
@@ -363,6 +364,7 @@
struct resource *res, *irq;
int mres = 0;
int i;
+ int ret = -ENODEV;
int idle = IDLE_INTERVAL;
common = devm_kzalloc(dev, sizeof(*common), GFP_KERNEL);
@@ -399,11 +401,9 @@
/*
* rcar_has_irq_support() will be enabled
*/
- common->base = devm_request_and_ioremap(dev, res);
- if (!common->base) {
- dev_err(dev, "Unable to ioremap thermal register\n");
- return -ENOMEM;
- }
+ common->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(common->base))
+ return PTR_ERR(common->base);
/* enable temperature comparation */
rcar_thermal_common_write(common, ENR, 0x00030303);
@@ -422,11 +422,9 @@
return -ENOMEM;
}
- priv->base = devm_request_and_ioremap(dev, res);
- if (!priv->base) {
- dev_err(dev, "Unable to ioremap priv register\n");
- return -ENOMEM;
- }
+ priv->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(priv->base))
+ return PTR_ERR(priv->base);
priv->common = common;
priv->id = i;
@@ -441,6 +439,7 @@
idle);
if (IS_ERR(priv->zone)) {
dev_err(dev, "can't register thermal zone\n");
+ ret = PTR_ERR(priv->zone);
goto error_unregister;
}
@@ -460,7 +459,7 @@
rcar_thermal_for_each_priv(priv, common)
thermal_zone_device_unregister(priv->zone);
- return -ENODEV;
+ return ret;
}
static int rcar_thermal_remove(struct platform_device *pdev)
diff --git a/drivers/tty/serial/sunsu.c b/drivers/tty/serial/sunsu.c
index e343d66..451687c 100644
--- a/drivers/tty/serial/sunsu.c
+++ b/drivers/tty/serial/sunsu.c
@@ -968,6 +968,7 @@
#define UART_NR 4
static struct uart_sunsu_port sunsu_ports[UART_NR];
+static int nr_inst; /* Number of already registered ports */
#ifdef CONFIG_SERIO
@@ -1337,13 +1338,8 @@
printk("Console: ttyS%d (SU)\n",
(sunsu_reg.minor - 64) + co->index);
- /*
- * Check whether an invalid uart number has been specified, and
- * if so, search for the first available port that does have
- * console support.
- */
- if (co->index >= UART_NR)
- co->index = 0;
+ if (co->index > nr_inst)
+ return -ENODEV;
port = &sunsu_ports[co->index].port;
/*
@@ -1408,7 +1404,6 @@
static int su_probe(struct platform_device *op)
{
- static int inst;
struct device_node *dp = op->dev.of_node;
struct uart_sunsu_port *up;
struct resource *rp;
@@ -1418,16 +1413,16 @@
type = su_get_type(dp);
if (type == SU_PORT_PORT) {
- if (inst >= UART_NR)
+ if (nr_inst >= UART_NR)
return -EINVAL;
- up = &sunsu_ports[inst];
+ up = &sunsu_ports[nr_inst];
} else {
up = kzalloc(sizeof(*up), GFP_KERNEL);
if (!up)
return -ENOMEM;
}
- up->port.line = inst;
+ up->port.line = nr_inst;
spin_lock_init(&up->port.lock);
@@ -1461,6 +1456,8 @@
}
dev_set_drvdata(&op->dev, up);
+ nr_inst++;
+
return 0;
}
@@ -1488,7 +1485,7 @@
dev_set_drvdata(&op->dev, up);
- inst++;
+ nr_inst++;
return 0;
diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c
index e4ca345..d7799de 100644
--- a/drivers/tty/vt/vc_screen.c
+++ b/drivers/tty/vt/vc_screen.c
@@ -93,7 +93,7 @@
static struct vcs_poll_data *
vcs_poll_data_get(struct file *file)
{
- struct vcs_poll_data *poll = file->private_data;
+ struct vcs_poll_data *poll = file->private_data, *kill = NULL;
if (poll)
return poll;
@@ -122,10 +122,12 @@
file->private_data = poll;
} else {
/* someone else raced ahead of us */
- vcs_poll_data_free(poll);
+ kill = poll;
poll = file->private_data;
}
spin_unlock(&file->f_lock);
+ if (kill)
+ vcs_poll_data_free(kill);
return poll;
}
diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
index 8ac25ad..387dc6c 100644
--- a/drivers/usb/class/cdc-acm.c
+++ b/drivers/usb/class/cdc-acm.c
@@ -593,7 +593,6 @@
dev_dbg(&acm->control->dev, "%s\n", __func__);
- tty_unregister_device(acm_tty_driver, acm->minor);
acm_release_minor(acm);
usb_put_intf(acm->control);
kfree(acm->country_codes);
@@ -977,6 +976,8 @@
int num_rx_buf;
int i;
int combined_interfaces = 0;
+ struct device *tty_dev;
+ int rv = -ENOMEM;
/* normal quirks */
quirks = (unsigned long)id->driver_info;
@@ -1339,11 +1340,24 @@
usb_set_intfdata(data_interface, acm);
usb_get_intf(control_interface);
- tty_port_register_device(&acm->port, acm_tty_driver, minor,
+ tty_dev = tty_port_register_device(&acm->port, acm_tty_driver, minor,
&control_interface->dev);
+ if (IS_ERR(tty_dev)) {
+ rv = PTR_ERR(tty_dev);
+ goto alloc_fail8;
+ }
return 0;
+alloc_fail8:
+ if (acm->country_codes) {
+ device_remove_file(&acm->control->dev,
+ &dev_attr_wCountryCodes);
+ device_remove_file(&acm->control->dev,
+ &dev_attr_iCountryCodeRelDate);
+ }
+ device_remove_file(&acm->control->dev, &dev_attr_bmCapabilities);
alloc_fail7:
+ usb_set_intfdata(intf, NULL);
for (i = 0; i < ACM_NW; i++)
usb_free_urb(acm->wb[i].urb);
alloc_fail6:
@@ -1359,7 +1373,7 @@
acm_release_minor(acm);
kfree(acm);
alloc_fail:
- return -ENOMEM;
+ return rv;
}
static void stop_data_traffic(struct acm *acm)
@@ -1411,6 +1425,8 @@
stop_data_traffic(acm);
+ tty_unregister_device(acm_tty_driver, acm->minor);
+
usb_free_urb(acm->ctrlurb);
for (i = 0; i < ACM_NW; i++)
usb_free_urb(acm->wb[i].urb);
diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c
index 622b4a4..2b487d4 100644
--- a/drivers/usb/core/hcd-pci.c
+++ b/drivers/usb/core/hcd-pci.c
@@ -173,6 +173,7 @@
struct hc_driver *driver;
struct usb_hcd *hcd;
int retval;
+ int hcd_irq = 0;
if (usb_disabled())
return -ENODEV;
@@ -187,15 +188,19 @@
return -ENODEV;
dev->current_state = PCI_D0;
- /* The xHCI driver supports MSI and MSI-X,
- * so don't fail if the BIOS doesn't provide a legacy IRQ.
+ /*
+ * The xHCI driver has its own irq management
+ * make sure irq setup is not touched for xhci in generic hcd code
*/
- if (!dev->irq && (driver->flags & HCD_MASK) != HCD_USB3) {
- dev_err(&dev->dev,
- "Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
- pci_name(dev));
- retval = -ENODEV;
- goto disable_pci;
+ if ((driver->flags & HCD_MASK) != HCD_USB3) {
+ if (!dev->irq) {
+ dev_err(&dev->dev,
+ "Found HC with no IRQ. Check BIOS/PCI %s setup!\n",
+ pci_name(dev));
+ retval = -ENODEV;
+ goto disable_pci;
+ }
+ hcd_irq = dev->irq;
}
hcd = usb_create_hcd(driver, &dev->dev, pci_name(dev));
@@ -245,7 +250,7 @@
pci_set_master(dev);
- retval = usb_add_hcd(hcd, dev->irq, IRQF_SHARED);
+ retval = usb_add_hcd(hcd, hcd_irq, IRQF_SHARED);
if (retval != 0)
goto unmap_registers;
set_hs_companion(dev, hcd);
diff --git a/drivers/usb/gadget/f_rndis.c b/drivers/usb/gadget/f_rndis.c
index 71beeb8..cc9c49c 100644
--- a/drivers/usb/gadget/f_rndis.c
+++ b/drivers/usb/gadget/f_rndis.c
@@ -447,14 +447,13 @@
static void rndis_command_complete(struct usb_ep *ep, struct usb_request *req)
{
struct f_rndis *rndis = req->context;
- struct usb_composite_dev *cdev = rndis->port.func.config->cdev;
int status;
/* received RNDIS command from USB_CDC_SEND_ENCAPSULATED_COMMAND */
// spin_lock(&dev->lock);
status = rndis_msg_parser(rndis->config, (u8 *) req->buf);
if (status < 0)
- ERROR(cdev, "RNDIS command error %d, %d/%d\n",
+ pr_err("RNDIS command error %d, %d/%d\n",
status, req->actual, req->length);
// spin_unlock(&dev->lock);
}
diff --git a/drivers/usb/gadget/g_ffs.c b/drivers/usb/gadget/g_ffs.c
index 3953dd4..3b343b2 100644
--- a/drivers/usb/gadget/g_ffs.c
+++ b/drivers/usb/gadget/g_ffs.c
@@ -357,7 +357,7 @@
goto error;
gfs_dev_desc.iProduct = gfs_strings[USB_GADGET_PRODUCT_IDX].id;
- for (i = func_num; --i; ) {
+ for (i = func_num; i--; ) {
ret = functionfs_bind(ffs_tab[i].ffs_data, cdev);
if (unlikely(ret < 0)) {
while (++i < func_num)
@@ -413,7 +413,7 @@
gether_cleanup();
gfs_ether_setup = false;
- for (i = func_num; --i; )
+ for (i = func_num; i--; )
if (ffs_tab[i].ffs_data)
functionfs_unbind(ffs_tab[i].ffs_data);
diff --git a/drivers/usb/gadget/net2272.c b/drivers/usb/gadget/net2272.c
index d226058..32524b63 100644
--- a/drivers/usb/gadget/net2272.c
+++ b/drivers/usb/gadget/net2272.c
@@ -59,7 +59,7 @@
};
#define DMA_ADDR_INVALID (~(dma_addr_t)0)
-#ifdef CONFIG_USB_GADGET_NET2272_DMA
+#ifdef CONFIG_USB_NET2272_DMA
/*
* use_dma: the NET2272 can use an external DMA controller.
* Note that since there is no generic DMA api, some functions,
@@ -1495,6 +1495,13 @@
for (i = 0; i < 4; ++i)
net2272_dequeue_all(&dev->ep[i]);
+ /* report disconnect; the driver is already quiesced */
+ if (driver) {
+ spin_unlock(&dev->lock);
+ driver->disconnect(&dev->gadget);
+ spin_lock(&dev->lock);
+ }
+
net2272_usb_reinit(dev);
}
diff --git a/drivers/usb/gadget/net2280.c b/drivers/usb/gadget/net2280.c
index a1b650e..3bd0f99 100644
--- a/drivers/usb/gadget/net2280.c
+++ b/drivers/usb/gadget/net2280.c
@@ -1924,7 +1924,6 @@
err_func:
device_remove_file (&dev->pdev->dev, &dev_attr_function);
err_unbind:
- driver->unbind (&dev->gadget);
dev->gadget.dev.driver = NULL;
dev->driver = NULL;
return retval;
@@ -1946,6 +1945,13 @@
for (i = 0; i < 7; i++)
nuke (&dev->ep [i]);
+ /* report disconnect; the driver is already quiesced */
+ if (driver) {
+ spin_unlock(&dev->lock);
+ driver->disconnect(&dev->gadget);
+ spin_lock(&dev->lock);
+ }
+
usb_reinit (dev);
}
diff --git a/drivers/usb/gadget/u_serial.c b/drivers/usb/gadget/u_serial.c
index c5034d9..b369292 100644
--- a/drivers/usb/gadget/u_serial.c
+++ b/drivers/usb/gadget/u_serial.c
@@ -136,7 +136,7 @@
pr_debug(fmt, ##arg)
#endif /* pr_vdebug */
#else
-#ifndef pr_vdebig
+#ifndef pr_vdebug
#define pr_vdebug(fmt, arg...) \
({ if (0) pr_debug(fmt, ##arg); })
#endif /* pr_vdebug */
diff --git a/drivers/usb/gadget/udc-core.c b/drivers/usb/gadget/udc-core.c
index 2a9cd36..f8f62c3 100644
--- a/drivers/usb/gadget/udc-core.c
+++ b/drivers/usb/gadget/udc-core.c
@@ -216,7 +216,7 @@
usb_gadget_disconnect(udc->gadget);
udc->driver->disconnect(udc->gadget);
udc->driver->unbind(udc->gadget);
- usb_gadget_udc_stop(udc->gadget, udc->driver);
+ usb_gadget_udc_stop(udc->gadget, NULL);
udc->driver = NULL;
udc->dev.driver = NULL;
diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
index 5726cb1..416a6dc 100644
--- a/drivers/usb/host/ehci-hcd.c
+++ b/drivers/usb/host/ehci-hcd.c
@@ -302,6 +302,7 @@
static void end_unlink_async(struct ehci_hcd *ehci);
static void unlink_empty_async(struct ehci_hcd *ehci);
+static void unlink_empty_async_suspended(struct ehci_hcd *ehci);
static void ehci_work(struct ehci_hcd *ehci);
static void start_unlink_intr(struct ehci_hcd *ehci, struct ehci_qh *qh);
static void end_unlink_intr(struct ehci_hcd *ehci, struct ehci_qh *qh);
diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
index 4d3b294..7d06e77 100644
--- a/drivers/usb/host/ehci-hub.c
+++ b/drivers/usb/host/ehci-hub.c
@@ -328,7 +328,7 @@
ehci->rh_state = EHCI_RH_SUSPENDED;
end_unlink_async(ehci);
- unlink_empty_async(ehci);
+ unlink_empty_async_suspended(ehci);
ehci_handle_intr_unlinks(ehci);
end_free_itds(ehci);
diff --git a/drivers/usb/host/ehci-q.c b/drivers/usb/host/ehci-q.c
index 5464665..23d1369 100644
--- a/drivers/usb/host/ehci-q.c
+++ b/drivers/usb/host/ehci-q.c
@@ -1316,6 +1316,19 @@
}
}
+/* The root hub is suspended; unlink all the async QHs */
+static void unlink_empty_async_suspended(struct ehci_hcd *ehci)
+{
+ struct ehci_qh *qh;
+
+ while (ehci->async->qh_next.qh) {
+ qh = ehci->async->qh_next.qh;
+ WARN_ON(!list_empty(&qh->qtd_list));
+ single_unlink_async(ehci, qh);
+ }
+ start_iaa_cycle(ehci, false);
+}
+
/* makes sure the async qh will become idle */
/* caller must own ehci->lock */
diff --git a/drivers/usb/host/ehci-timer.c b/drivers/usb/host/ehci-timer.c
index 20dbdcb..c3fa130 100644
--- a/drivers/usb/host/ehci-timer.c
+++ b/drivers/usb/host/ehci-timer.c
@@ -304,7 +304,7 @@
* (a) SMP races against real IAA firing and retriggering, and
* (b) clean HC shutdown, when IAA watchdog was pending.
*/
- if (ehci->async_iaa) {
+ if (1) {
u32 cmd, status;
/* If we get here, IAA is *REALLY* late. It's barely
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index f1f01a8..849470b 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -350,7 +350,7 @@
* generate interrupts. Don't even try to enable MSI.
*/
if (xhci->quirks & XHCI_BROKEN_MSI)
- return 0;
+ goto legacy_irq;
/* unregister the legacy interrupt */
if (hcd->irq)
@@ -371,6 +371,7 @@
return -EINVAL;
}
+ legacy_irq:
/* fall back to legacy interrupt*/
ret = request_irq(pdev->irq, &usb_hcd_irq, IRQF_SHARED,
hcd->irq_descr, hcd);
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index f791bd0..2c510e4 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -206,8 +206,8 @@
/* bits 12:31 are reserved (and should be preserved on writes). */
/* IMAN - Interrupt Management Register */
-#define IMAN_IP (1 << 1)
-#define IMAN_IE (1 << 0)
+#define IMAN_IE (1 << 1)
+#define IMAN_IP (1 << 0)
/* USBSTS - USB status - status bitmasks */
/* HC not running - set to 1 when run/stop bit is cleared. */
diff --git a/drivers/usb/musb/da8xx.c b/drivers/usb/musb/da8xx.c
index 7c71769d..41613a2 100644
--- a/drivers/usb/musb/da8xx.c
+++ b/drivers/usb/musb/da8xx.c
@@ -327,7 +327,7 @@
u8 devctl = musb_readb(mregs, MUSB_DEVCTL);
int err;
- err = musb->int_usb & USB_INTR_VBUSERROR;
+ err = musb->int_usb & MUSB_INTR_VBUSERROR;
if (err) {
/*
* The Mentor core doesn't debounce VBUS as needed
diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
index be18537..83edded 100644
--- a/drivers/usb/musb/musb_gadget.c
+++ b/drivers/usb/musb/musb_gadget.c
@@ -141,7 +141,9 @@
static inline void unmap_dma_buffer(struct musb_request *request,
struct musb *musb)
{
- if (!is_buffer_mapped(request))
+ struct musb_ep *musb_ep = request->ep;
+
+ if (!is_buffer_mapped(request) || !musb_ep->dma)
return;
if (request->request.dma == DMA_ADDR_INVALID) {
@@ -195,7 +197,10 @@
ep->busy = 1;
spin_unlock(&musb->lock);
- unmap_dma_buffer(req, musb);
+
+ if (!dma_mapping_error(&musb->g.dev, request->dma))
+ unmap_dma_buffer(req, musb);
+
if (request->status == 0)
dev_dbg(musb->controller, "%s done request %p, %d/%d\n",
ep->end_point.name, request,
diff --git a/drivers/usb/serial/ark3116.c b/drivers/usb/serial/ark3116.c
index cbd904b..4775f82 100644
--- a/drivers/usb/serial/ark3116.c
+++ b/drivers/usb/serial/ark3116.c
@@ -62,7 +62,6 @@
}
struct ark3116_private {
- wait_queue_head_t delta_msr_wait;
struct async_icount icount;
int irda; /* 1 for irda device */
@@ -146,7 +145,6 @@
if (!priv)
return -ENOMEM;
- init_waitqueue_head(&priv->delta_msr_wait);
mutex_init(&priv->hw_lock);
spin_lock_init(&priv->status_lock);
@@ -456,10 +454,14 @@
case TIOCMIWAIT:
for (;;) {
struct async_icount prev = priv->icount;
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
if ((prev.rng == priv->icount.rng) &&
(prev.dsr == priv->icount.dsr) &&
(prev.dcd == priv->icount.dcd) &&
@@ -580,7 +582,7 @@
priv->icount.dcd++;
if (msr & UART_MSR_TERI)
priv->icount.rng++;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
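
The ark3116 hunks above, and the ch341/cypress_m8/f81232/ftdi_sio/io_edgeport/io_ti/mct_u232/mos7840/oti6858/pl2303/quatech2/spcp8x5/ssu100/ti_usb changes that follow, all apply the same pattern: drop the driver-private modem-status wait queue in favour of the core's port->delta_msr_wait, and return -EIO from TIOCMIWAIT once the device is disconnected so the ioctl can no longer sleep forever after an unplug. A condensed sketch of the resulting loop (illustrative only; my_priv and its icount field are placeholder names, and the exact wait primitive varies by driver):

#include <linux/sched.h>
#include <linux/serial.h>
#include <linux/tty.h>
#include <linux/usb/serial.h>
#include <linux/wait.h>

struct my_priv {
	struct async_icount icount;	/* updated from the interrupt/status URB */
};

static int my_tiocmiwait(struct usb_serial_port *port, unsigned long arg)
{
	struct my_priv *priv = usb_get_serial_port_data(port);
	struct async_icount prev = priv->icount, now;

	for (;;) {
		/* woken by the status handler, or when the device goes away */
		interruptible_sleep_on(&port->delta_msr_wait);
		if (signal_pending(current))
			return -ERESTARTSYS;
		if (port->serial->disconnected)
			return -EIO;

		now = priv->icount;
		if (((arg & TIOCM_RNG) && (now.rng != prev.rng)) ||
		    ((arg & TIOCM_DSR) && (now.dsr != prev.dsr)) ||
		    ((arg & TIOCM_CD)  && (now.dcd != prev.dcd)) ||
		    ((arg & TIOCM_CTS) && (now.cts != prev.cts)))
			return 0;
		prev = now;
	}
}

The wake-up side correspondingly calls wake_up_interruptible(&port->delta_msr_wait) from the modem-status handler, and the same queue is woken when the port is torn down, as in the ftdi_sio hunk below.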
diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
index d255f66..07d4650 100644
--- a/drivers/usb/serial/ch341.c
+++ b/drivers/usb/serial/ch341.c
@@ -80,7 +80,6 @@
struct ch341_private {
spinlock_t lock; /* access lock */
- wait_queue_head_t delta_msr_wait; /* wait queue for modem status */
unsigned baud_rate; /* set baud rate */
u8 line_control; /* set line control value RTS/DTR */
u8 line_status; /* active status of modem control inputs */
@@ -252,7 +251,6 @@
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->baud_rate = DEFAULT_BAUD_RATE;
priv->line_control = CH341_BIT_RTS | CH341_BIT_DTR;
@@ -298,7 +296,7 @@
priv->line_control &= ~(CH341_BIT_RTS | CH341_BIT_DTR);
spin_unlock_irqrestore(&priv->lock, flags);
ch341_set_handshake(port->serial->dev, priv->line_control);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
static void ch341_close(struct usb_serial_port *port)
@@ -491,7 +489,7 @@
tty_kref_put(tty);
}
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
exit:
@@ -517,11 +515,14 @@
spin_unlock_irqrestore(&priv->lock, flags);
while (!multi_change) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
multi_change = priv->multi_status_change;
diff --git a/drivers/usb/serial/cypress_m8.c b/drivers/usb/serial/cypress_m8.c
index 8efa19d..ba7352e 100644
--- a/drivers/usb/serial/cypress_m8.c
+++ b/drivers/usb/serial/cypress_m8.c
@@ -111,7 +111,6 @@
int baud_rate; /* stores current baud rate in
integer form */
int isthrottled; /* if throttled, discard reads */
- wait_queue_head_t delta_msr_wait; /* used for TIOCMIWAIT */
char prev_status, diff_status; /* used for TIOCMIWAIT */
/* we pass a pointer to this as the argument sent to
cypress_set_termios old_termios */
@@ -449,7 +448,6 @@
kfree(priv);
return -ENOMEM;
}
- init_waitqueue_head(&priv->delta_msr_wait);
usb_reset_configuration(serial->dev);
@@ -868,12 +866,16 @@
switch (cmd) {
/* This code comes from drivers/char/serial.c and ftdi_sio.c */
case TIOCMIWAIT:
- while (priv != NULL) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ for (;;) {
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
- else {
+
+ if (port->serial->disconnected)
+ return -EIO;
+
+ {
char diff = priv->diff_status;
if (diff == 0)
return -EIO; /* no change => error */
@@ -1187,7 +1189,7 @@
if (priv->current_status != priv->prev_status) {
priv->diff_status |= priv->current_status ^
priv->prev_status;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
priv->prev_status = priv->current_status;
}
spin_unlock_irqrestore(&priv->lock, flags);
diff --git a/drivers/usb/serial/f81232.c b/drivers/usb/serial/f81232.c
index b1b2dc6..a172ad5 100644
--- a/drivers/usb/serial/f81232.c
+++ b/drivers/usb/serial/f81232.c
@@ -47,7 +47,6 @@
struct f81232_private {
spinlock_t lock;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
@@ -111,7 +110,7 @@
line_status = priv->line_status;
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
@@ -256,11 +255,14 @@
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
@@ -322,7 +324,6 @@
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
index edd162d..d4809d5 100644
--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -69,9 +69,7 @@
int flags; /* some ASYNC_xxxx flags are supported */
unsigned long last_dtr_rts; /* saved modem control outputs */
struct async_icount icount;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
char prev_status; /* Used for TIOCMIWAIT */
- bool dev_gone; /* Used to abort TIOCMIWAIT */
char transmit_empty; /* If transmitter is empty or not */
__u16 interface; /* FT2232C, FT2232H or FT4232H port interface
(0 for FT232/245) */
@@ -1691,10 +1689,8 @@
kref_init(&priv->kref);
mutex_init(&priv->cfg_lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->flags = ASYNC_LOW_LATENCY;
- priv->dev_gone = false;
if (quirk && quirk->port_probe)
quirk->port_probe(priv);
@@ -1840,8 +1836,7 @@
{
struct ftdi_private *priv = usb_get_serial_port_data(port);
- priv->dev_gone = true;
- wake_up_interruptible_all(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
remove_sysfs_attrs(port);
@@ -1989,7 +1984,7 @@
if (diff_status & FTDI_RS0_RLSD)
priv->icount.dcd++;
- wake_up_interruptible_all(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
priv->prev_status = status;
}
@@ -2440,11 +2435,15 @@
*/
case TIOCMIWAIT:
cprev = priv->icount;
- while (!priv->dev_gone) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ for (;;) {
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = priv->icount;
if (((arg & TIOCM_RNG) && (cnow.rng != cprev.rng)) ||
((arg & TIOCM_DSR) && (cnow.dsr != cprev.dsr)) ||
@@ -2454,8 +2453,6 @@
}
cprev = cnow;
}
- return -EIO;
- break;
case TIOCSERGETLSR:
return get_lsr_info(port, (struct serial_struct __user *)arg);
break;
diff --git a/drivers/usb/serial/garmin_gps.c b/drivers/usb/serial/garmin_gps.c
index 1a07b12..81caf56 100644
--- a/drivers/usb/serial/garmin_gps.c
+++ b/drivers/usb/serial/garmin_gps.c
@@ -956,10 +956,7 @@
if (!serial)
return;
- mutex_lock(&port->serial->disc_mutex);
-
- if (!port->serial->disconnected)
- garmin_clear(garmin_data_p);
+ garmin_clear(garmin_data_p);
/* shutdown our urbs */
usb_kill_urb(port->read_urb);
@@ -968,8 +965,6 @@
/* keep reset state so we know that we must start a new session */
if (garmin_data_p->state != STATE_RESET)
garmin_data_p->state = STATE_DISCONNECTED;
-
- mutex_unlock(&port->serial->disc_mutex);
}
diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
index b00e5cb..efd8b97 100644
--- a/drivers/usb/serial/io_edgeport.c
+++ b/drivers/usb/serial/io_edgeport.c
@@ -110,7 +110,6 @@
wait_queue_head_t wait_chase; /* for handling sleeping while waiting for chase to finish */
wait_queue_head_t wait_open; /* for handling sleeping while waiting for open to finish */
wait_queue_head_t wait_command; /* for handling sleeping while waiting for command to finish */
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while waiting for msr change to happen */
struct async_icount icount;
struct usb_serial_port *port; /* loop back to the owner of this object */
@@ -884,7 +883,6 @@
/* initialize our wait queues */
init_waitqueue_head(&edge_port->wait_open);
init_waitqueue_head(&edge_port->wait_chase);
- init_waitqueue_head(&edge_port->delta_msr_wait);
init_waitqueue_head(&edge_port->wait_command);
/* initialize our icount structure */
@@ -1669,13 +1667,17 @@
dev_dbg(&port->dev, "%s (%d) TIOCMIWAIT\n", __func__, port->number);
cprev = edge_port->icount;
while (1) {
- prepare_to_wait(&edge_port->delta_msr_wait,
+ prepare_to_wait(&port->delta_msr_wait,
&wait, TASK_INTERRUPTIBLE);
schedule();
- finish_wait(&edge_port->delta_msr_wait, &wait);
+ finish_wait(&port->delta_msr_wait, &wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = edge_port->icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
@@ -2051,7 +2053,7 @@
icount->dcd++;
if (newMsr & EDGEPORT_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&edge_port->delta_msr_wait);
+ wake_up_interruptible(&edge_port->port->delta_msr_wait);
}
/* Save the new modem status */
diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c
index c237766..7777172 100644
--- a/drivers/usb/serial/io_ti.c
+++ b/drivers/usb/serial/io_ti.c
@@ -87,9 +87,6 @@
int close_pending;
int lsr_event;
struct async_icount icount;
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while
- waiting for msr change to
- happen */
struct edgeport_serial *edge_serial;
struct usb_serial_port *port;
__u8 bUartMode; /* Port type, 0: RS232, etc. */
@@ -1459,7 +1456,7 @@
icount->dcd++;
if (msr & EDGEPORT_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&edge_port->delta_msr_wait);
+ wake_up_interruptible(&edge_port->port->delta_msr_wait);
}
/* Save the new modem status */
@@ -1754,7 +1751,6 @@
dev = port->serial->dev;
memset(&(edge_port->icount), 0x00, sizeof(edge_port->icount));
- init_waitqueue_head(&edge_port->delta_msr_wait);
/* turn off loopback */
status = ti_do_config(edge_port, UMPC_SET_CLR_LOOPBACK, 0);
@@ -2434,10 +2430,14 @@
dev_dbg(&port->dev, "%s - TIOCMIWAIT\n", __func__);
cprev = edge_port->icount;
while (1) {
- interruptible_sleep_on(&edge_port->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = edge_port->icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
@@ -2649,6 +2649,7 @@
.set_termios = edge_set_termios,
.tiocmget = edge_tiocmget,
.tiocmset = edge_tiocmset,
+ .get_icount = edge_get_icount,
.write = edge_write,
.write_room = edge_write_room,
.chars_in_buffer = edge_chars_in_buffer,
diff --git a/drivers/usb/serial/mct_u232.c b/drivers/usb/serial/mct_u232.c
index a64d420..06d5a60 100644
--- a/drivers/usb/serial/mct_u232.c
+++ b/drivers/usb/serial/mct_u232.c
@@ -114,8 +114,6 @@
unsigned char last_msr; /* Modem Status Register */
unsigned int rx_flags; /* Throttling flags */
struct async_icount icount;
- wait_queue_head_t msr_wait; /* for handling sleeping while waiting
- for msr change to happen */
};
#define THROTTLED 0x01
@@ -409,7 +407,6 @@
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->msr_wait);
usb_set_serial_port_data(port, priv);
@@ -601,7 +598,7 @@
tty_kref_put(tty);
}
#endif
- wake_up_interruptible(&priv->msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
spin_unlock_irqrestore(&priv->lock, flags);
exit:
retval = usb_submit_urb(urb, GFP_ATOMIC);
@@ -810,13 +807,17 @@
cprev = mct_u232_port->icount;
spin_unlock_irqrestore(&mct_u232_port->lock, flags);
for ( ; ; ) {
- prepare_to_wait(&mct_u232_port->msr_wait,
+ prepare_to_wait(&port->delta_msr_wait,
&wait, TASK_INTERRUPTIBLE);
schedule();
- finish_wait(&mct_u232_port->msr_wait, &wait);
+ finish_wait(&port->delta_msr_wait, &wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&mct_u232_port->lock, flags);
cnow = mct_u232_port->icount;
spin_unlock_irqrestore(&mct_u232_port->lock, flags);
diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
index 809fb32..b8051fa 100644
--- a/drivers/usb/serial/mos7840.c
+++ b/drivers/usb/serial/mos7840.c
@@ -219,7 +219,6 @@
char open;
char open_ports;
wait_queue_head_t wait_chase; /* for handling sleeping while waiting for chase to finish */
- wait_queue_head_t delta_msr_wait; /* for handling sleeping while waiting for msr change to happen */
int delta_msr_cond;
struct async_icount icount;
struct usb_serial_port *port; /* loop back to the owner of this object */
@@ -423,6 +422,9 @@
icount->rng++;
smp_wmb();
}
+
+ mos7840_port->delta_msr_cond = 1;
+ wake_up_interruptible(&port->port->delta_msr_wait);
}
}
@@ -1127,7 +1129,6 @@
/* initialize our wait queues */
init_waitqueue_head(&mos7840_port->wait_chase);
- init_waitqueue_head(&mos7840_port->delta_msr_wait);
/* initialize our icount structure */
memset(&(mos7840_port->icount), 0x00, sizeof(mos7840_port->icount));
@@ -2017,8 +2018,6 @@
mos7840_port->read_urb_busy = false;
}
}
- wake_up(&mos7840_port->delta_msr_wait);
- mos7840_port->delta_msr_cond = 1;
dev_dbg(&port->dev, "%s - mos7840_port->shadowLCR is End %x\n", __func__,
mos7840_port->shadowLCR);
}
@@ -2219,13 +2218,18 @@
while (1) {
/* interruptible_sleep_on(&mos7840_port->delta_msr_wait); */
mos7840_port->delta_msr_cond = 0;
- wait_event_interruptible(mos7840_port->delta_msr_wait,
- (mos7840_port->
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ mos7840_port->
delta_msr_cond == 1));
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = mos7840_port->icount;
smp_rmb();
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
diff --git a/drivers/usb/serial/oti6858.c b/drivers/usb/serial/oti6858.c
index a958fd4..87c71cc 100644
--- a/drivers/usb/serial/oti6858.c
+++ b/drivers/usb/serial/oti6858.c
@@ -188,7 +188,6 @@
u8 setup_done;
struct delayed_work delayed_setup_work;
- wait_queue_head_t intr_wait;
struct usb_serial_port *port; /* USB port with which associated */
};
@@ -339,7 +338,6 @@
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->intr_wait);
priv->port = port;
INIT_DELAYED_WORK(&priv->delayed_setup_work, setup_line);
INIT_DELAYED_WORK(&priv->delayed_write_work, send_data);
@@ -664,11 +662,15 @@
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- wait_event_interruptible(priv->intr_wait,
+ wait_event_interruptible(port->delta_msr_wait,
+ port->serial->disconnected ||
priv->status.pin_state != prev);
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->status.pin_state & PIN_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
@@ -763,7 +765,7 @@
if (!priv->transient) {
if (xs->pin_state != priv->status.pin_state)
- wake_up_interruptible(&priv->intr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
memcpy(&priv->status, xs, OTI6858_CTRL_PKT_SIZE);
}
diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
index 54adc91..3b10018 100644
--- a/drivers/usb/serial/pl2303.c
+++ b/drivers/usb/serial/pl2303.c
@@ -139,7 +139,6 @@
struct pl2303_private {
spinlock_t lock;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
@@ -233,7 +232,6 @@
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
@@ -607,11 +605,14 @@
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
@@ -719,7 +720,7 @@
spin_unlock_irqrestore(&priv->lock, flags);
if (priv->line_status & UART_BREAK_ERROR)
usb_serial_handle_break(port);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
tty = tty_port_tty_get(&port->port);
if (!tty)
@@ -783,7 +784,7 @@
line_status = priv->line_status;
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
diff --git a/drivers/usb/serial/quatech2.c b/drivers/usb/serial/quatech2.c
index d643a4d..75f125d 100644
--- a/drivers/usb/serial/quatech2.c
+++ b/drivers/usb/serial/quatech2.c
@@ -128,7 +128,6 @@
u8 shadowLSR;
u8 shadowMSR;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
struct async_icount icount;
struct usb_serial_port *port;
@@ -506,8 +505,9 @@
spin_unlock_irqrestore(&priv->lock, flags);
while (1) {
- wait_event_interruptible(priv->delta_msr_wait,
- ((priv->icount.rng != prev.rng) ||
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ (priv->icount.rng != prev.rng) ||
(priv->icount.dsr != prev.dsr) ||
(priv->icount.dcd != prev.dcd) ||
(priv->icount.cts != prev.cts)));
@@ -515,6 +515,9 @@
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
cur = priv->icount;
spin_unlock_irqrestore(&priv->lock, flags);
@@ -827,7 +830,6 @@
spin_lock_init(&port_priv->lock);
spin_lock_init(&port_priv->urb_lock);
- init_waitqueue_head(&port_priv->delta_msr_wait);
port_priv->port = port;
port_priv->write_urb = usb_alloc_urb(0, GFP_KERNEL);
@@ -970,7 +972,7 @@
if (newMSR & UART_MSR_TERI)
port_priv->icount.rng++;
- wake_up_interruptible(&port_priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
diff --git a/drivers/usb/serial/spcp8x5.c b/drivers/usb/serial/spcp8x5.c
index 91ff8e3..549ef68 100644
--- a/drivers/usb/serial/spcp8x5.c
+++ b/drivers/usb/serial/spcp8x5.c
@@ -149,7 +149,6 @@
struct spcp8x5_private {
spinlock_t lock;
enum spcp8x5_type type;
- wait_queue_head_t delta_msr_wait;
u8 line_control;
u8 line_status;
};
@@ -179,7 +178,6 @@
return -ENOMEM;
spin_lock_init(&priv->lock);
- init_waitqueue_head(&priv->delta_msr_wait);
priv->type = type;
usb_set_serial_port_data(port , priv);
@@ -475,7 +473,7 @@
priv->line_status &= ~UART_STATE_TRANSIENT_MASK;
spin_unlock_irqrestore(&priv->lock, flags);
/* wake up the wait for termios */
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
if (!urb->actual_length)
return;
@@ -526,12 +524,15 @@
while (1) {
/* wake up in bulk read */
- interruptible_sleep_on(&priv->delta_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
/* see if a signal did it */
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->lock, flags);
status = priv->line_status;
spin_unlock_irqrestore(&priv->lock, flags);
diff --git a/drivers/usb/serial/ssu100.c b/drivers/usb/serial/ssu100.c
index b57cf84..4b2a197 100644
--- a/drivers/usb/serial/ssu100.c
+++ b/drivers/usb/serial/ssu100.c
@@ -61,7 +61,6 @@
spinlock_t status_lock;
u8 shadowLSR;
u8 shadowMSR;
- wait_queue_head_t delta_msr_wait; /* Used for TIOCMIWAIT */
struct async_icount icount;
};
@@ -355,8 +354,9 @@
spin_unlock_irqrestore(&priv->status_lock, flags);
while (1) {
- wait_event_interruptible(priv->delta_msr_wait,
- ((priv->icount.rng != prev.rng) ||
+ wait_event_interruptible(port->delta_msr_wait,
+ (port->serial->disconnected ||
+ (priv->icount.rng != prev.rng) ||
(priv->icount.dsr != prev.dsr) ||
(priv->icount.dcd != prev.dcd) ||
(priv->icount.cts != prev.cts)));
@@ -364,6 +364,9 @@
if (signal_pending(current))
return -ERESTARTSYS;
+ if (port->serial->disconnected)
+ return -EIO;
+
spin_lock_irqsave(&priv->status_lock, flags);
cur = priv->icount;
spin_unlock_irqrestore(&priv->status_lock, flags);
@@ -445,7 +448,6 @@
return -ENOMEM;
spin_lock_init(&priv->status_lock);
- init_waitqueue_head(&priv->delta_msr_wait);
usb_set_serial_port_data(port, priv);
@@ -537,7 +539,7 @@
priv->icount.dcd++;
if (msr & UART_MSR_TERI)
priv->icount.rng++;
- wake_up_interruptible(&priv->delta_msr_wait);
+ wake_up_interruptible(&port->delta_msr_wait);
}
}
diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
index 39cb9b8..73deb029f 100644
--- a/drivers/usb/serial/ti_usb_3410_5052.c
+++ b/drivers/usb/serial/ti_usb_3410_5052.c
@@ -74,7 +74,6 @@
int tp_flags;
int tp_closing_wait;/* in .01 secs */
struct async_icount tp_icount;
- wait_queue_head_t tp_msr_wait; /* wait for msr change */
wait_queue_head_t tp_write_wait;
struct ti_device *tp_tdev;
struct usb_serial_port *tp_port;
@@ -432,7 +431,6 @@
else
tport->tp_uart_base_addr = TI_UART2_BASE_ADDR;
tport->tp_closing_wait = closing_wait;
- init_waitqueue_head(&tport->tp_msr_wait);
init_waitqueue_head(&tport->tp_write_wait);
if (kfifo_alloc(&tport->write_fifo, TI_WRITE_BUF_SIZE, GFP_KERNEL)) {
kfree(tport);
@@ -784,9 +782,13 @@
dev_dbg(&port->dev, "%s - TIOCMIWAIT\n", __func__);
cprev = tport->tp_icount;
while (1) {
- interruptible_sleep_on(&tport->tp_msr_wait);
+ interruptible_sleep_on(&port->delta_msr_wait);
if (signal_pending(current))
return -ERESTARTSYS;
+
+ if (port->serial->disconnected)
+ return -EIO;
+
cnow = tport->tp_icount;
if (cnow.rng == cprev.rng && cnow.dsr == cprev.dsr &&
cnow.dcd == cprev.dcd && cnow.cts == cprev.cts)
@@ -1392,7 +1394,7 @@
icount->dcd++;
if (msr & TI_MSR_DELTA_RI)
icount->rng++;
- wake_up_interruptible(&tport->tp_msr_wait);
+ wake_up_interruptible(&tport->tp_port->delta_msr_wait);
spin_unlock_irqrestore(&tport->tp_lock, flags);
}
diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c
index a19ed74..2e70efa 100644
--- a/drivers/usb/serial/usb-serial.c
+++ b/drivers/usb/serial/usb-serial.c
@@ -151,6 +151,7 @@
}
}
+ usb_put_intf(serial->interface);
usb_put_dev(serial->dev);
kfree(serial);
}
@@ -620,7 +621,7 @@
}
serial->dev = usb_get_dev(dev);
serial->type = driver;
- serial->interface = interface;
+ serial->interface = usb_get_intf(interface);
kref_init(&serial->kref);
mutex_init(&serial->disc_mutex);
serial->minor = SERIAL_TTY_NO_MINOR;
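
The usb-serial core now holds its own reference on the interface it caches in serial->interface and drops it when the serial structure is freed, so the cached pointer cannot outlive the interface. The get/put pairing in isolation (a sketch with made-up names, not the driver code itself):

#include <linux/slab.h>
#include <linux/usb.h>

struct my_serial {
	struct usb_device *dev;
	struct usb_interface *interface;
};

static void my_serial_init(struct my_serial *s, struct usb_device *dev,
			   struct usb_interface *intf)
{
	s->dev = usb_get_dev(dev);		/* take a device reference ... */
	s->interface = usb_get_intf(intf);	/* ... and an interface reference */
}

static void my_serial_destroy(struct my_serial *s)
{
	usb_put_intf(s->interface);		/* drop both in the destructor */
	usb_put_dev(s->dev);
	kfree(s);
}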
diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
index da04a07..179933528 100644
--- a/drivers/usb/storage/unusual_devs.h
+++ b/drivers/usb/storage/unusual_devs.h
@@ -496,6 +496,13 @@
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_MAX_SECTORS_64 | US_FL_BULK_IGNORE_TAG),
+/* Added by Dmitry Artamonow <mad_soft@inbox.ru> */
+UNUSUAL_DEV( 0x04e8, 0x5136, 0x0000, 0x9999,
+ "Samsung",
+ "YP-Z3",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_MAX_SECTORS_64),
+
/* Entry and supporting patch by Theodore Kilgore <kilgota@auburn.edu>.
* Device uses standards-violating 32-byte Bulk Command Block Wrappers and
* reports itself as "Proprietary SCSI Bulk." Cf. device entry 0x084d:0x0011.
diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
index 964ff22..aeb00fc 100644
--- a/drivers/vfio/pci/vfio_pci_config.c
+++ b/drivers/vfio/pci/vfio_pci_config.c
@@ -27,6 +27,7 @@
#include <linux/pci.h>
#include <linux/uaccess.h>
#include <linux/vfio.h>
+#include <linux/slab.h>
#include "vfio_pci_private.h"
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 3639371..a965091 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -22,6 +22,7 @@
#include <linux/vfio.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
+#include <linux/slab.h>
#include "vfio_pci_private.h"
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 959b1cd..ec6fb3f 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -339,7 +339,8 @@
msg.msg_controllen = 0;
ubufs = NULL;
} else {
- struct ubuf_info *ubuf = &vq->ubuf_info[head];
+ struct ubuf_info *ubuf;
+ ubuf = vq->ubuf_info + vq->upend_idx;
vq->heads[vq->upend_idx].len =
VHOST_DMA_IN_PROGRESS;
diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
index 9951297..43fb11e 100644
--- a/drivers/vhost/tcm_vhost.c
+++ b/drivers/vhost/tcm_vhost.c
@@ -850,7 +850,7 @@
for (index = 0; index < vs->dev.nvqs; ++index) {
if (!vhost_vq_access_ok(&vs->vqs[index])) {
ret = -EFAULT;
- goto err;
+ goto err_dev;
}
}
for (i = 0; i < VHOST_SCSI_MAX_TARGET; i++) {
@@ -860,10 +860,11 @@
if (!tv_tpg)
continue;
+ mutex_lock(&tv_tpg->tv_tpg_mutex);
tv_tport = tv_tpg->tport;
if (!tv_tport) {
ret = -ENODEV;
- goto err;
+ goto err_tpg;
}
if (strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
@@ -872,16 +873,19 @@
tv_tport->tport_name, tv_tpg->tport_tpgt,
t->vhost_wwpn, t->vhost_tpgt);
ret = -EINVAL;
- goto err;
+ goto err_tpg;
}
tv_tpg->tv_tpg_vhost_count--;
vs->vs_tpg[target] = NULL;
vs->vs_endpoint = false;
+ mutex_unlock(&tv_tpg->tv_tpg_mutex);
}
mutex_unlock(&vs->dev.mutex);
return 0;
-err:
+err_tpg:
+ mutex_unlock(&tv_tpg->tv_tpg_mutex);
+err_dev:
mutex_unlock(&vs->dev.mutex);
return ret;
}
@@ -937,6 +941,7 @@
for (i = 0; i < VHOST_SCSI_MAX_VQ; i++)
vhost_scsi_flush_vq(vs, i);
+ vhost_work_flush(&vs->dev, &vs->vs_completion_work);
}
static int vhost_scsi_set_features(struct vhost_scsi *vs, u64 features)
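
The endpoint-clear path above now takes tv_tpg->tv_tpg_mutex per target and therefore needs two distinct unwind points, so a failure only unlocks what is actually held. The shape of that error handling, reduced to its essentials (a sketch; my_clear_endpoint and the boolean checks are placeholders, not the driver's real logic):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/types.h>

static int my_clear_endpoint(struct mutex *dev_lock, struct mutex *tpg_lock,
			     bool dev_ok, bool tpg_ok)
{
	int ret;

	mutex_lock(dev_lock);
	if (!dev_ok) {
		ret = -EFAULT;
		goto err_dev;		/* only the device lock is held here */
	}

	mutex_lock(tpg_lock);
	if (!tpg_ok) {
		ret = -ENODEV;
		goto err_tpg;		/* both locks are held here */
	}
	/* ... do the real work ... */
	mutex_unlock(tpg_lock);
	mutex_unlock(dev_lock);
	return 0;

err_tpg:
	mutex_unlock(tpg_lock);
err_dev:
	mutex_unlock(dev_lock);
	return ret;
}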
diff --git a/drivers/video/atmel_lcdfb.c b/drivers/video/atmel_lcdfb.c
index 12cf5f3..025428e 100644
--- a/drivers/video/atmel_lcdfb.c
+++ b/drivers/video/atmel_lcdfb.c
@@ -422,17 +422,22 @@
= var->bits_per_pixel;
break;
case 16:
+ /* Older SOCs use IBGR:555 rather than BGR:565. */
+ if (sinfo->have_intensity_bit)
+ var->green.length = 5;
+ else
+ var->green.length = 6;
+
if (sinfo->lcd_wiring_mode == ATMEL_LCDC_WIRING_RGB) {
- /* RGB:565 mode */
- var->red.offset = 11;
+ /* RGB:5X5 mode */
+ var->red.offset = var->green.length + 5;
var->blue.offset = 0;
} else {
- /* BGR:565 mode */
+ /* BGR:5X5 mode */
var->red.offset = 0;
- var->blue.offset = 11;
+ var->blue.offset = var->green.length + 5;
}
var->green.offset = 5;
- var->green.length = 6;
var->red.length = var->blue.length = 5;
break;
case 32:
@@ -679,8 +684,7 @@
case FB_VISUAL_PSEUDOCOLOR:
if (regno < 256) {
- if (cpu_is_at91sam9261() || cpu_is_at91sam9263()
- || cpu_is_at91sam9rl()) {
+ if (sinfo->have_intensity_bit) {
/* old style I+BGR:555 */
val = ((red >> 11) & 0x001f);
val |= ((green >> 6) & 0x03e0);
@@ -870,6 +874,10 @@
}
sinfo->info = info;
sinfo->pdev = pdev;
+ if (cpu_is_at91sam9261() || cpu_is_at91sam9263() ||
+ cpu_is_at91sam9rl()) {
+ sinfo->have_intensity_bit = true;
+ }
strcpy(info->fix.id, sinfo->pdev->name);
info->flags = ATMEL_LCDFB_FBINFO_DEFAULT;
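
On the SoCs that have the intensity bit (9261/9263/9rl), 16 bpp is I+BGR:1555 with a 5-bit green, so the "high" colour component no longer sits at a fixed offset 11 but at green.offset (5) + green.length. A small standalone sketch of the two layouts the code above produces for the default (BGR) wiring:

#include <stdio.h>

int main(void)
{
	int green_len, high_off;

	green_len = 6;			/* newer SoCs: BGR:565 */
	high_off = green_len + 5;
	printf("BGR:565   red[0..4]  green[5..10] blue[%d..%d]\n",
	       high_off, high_off + 4);

	green_len = 5;			/* 9261/9263/9rl: I+BGR:1555 */
	high_off = green_len + 5;
	printf("IBGR:1555 red[0..4]  green[5..9]  blue[%d..%d] intensity[15]\n",
	       high_off, high_off + 4);

	return 0;
}

For the ATMEL_LCDC_WIRING_RGB case the red and blue offsets are simply swapped.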
diff --git a/drivers/video/ep93xx-fb.c b/drivers/video/ep93xx-fb.c
index 3f2519d..e06cd5d 100644
--- a/drivers/video/ep93xx-fb.c
+++ b/drivers/video/ep93xx-fb.c
@@ -23,6 +23,7 @@
#include <linux/slab.h>
#include <linux/clk.h>
#include <linux/fb.h>
+#include <linux/io.h>
#include <linux/platform_data/video-ep93xx.h>
diff --git a/drivers/video/mxsfb.c b/drivers/video/mxsfb.c
index 755556c..45169cb 100644
--- a/drivers/video/mxsfb.c
+++ b/drivers/video/mxsfb.c
@@ -169,6 +169,7 @@
unsigned dotclk_delay;
const struct mxsfb_devdata *devdata;
int mapped;
+ u32 sync;
};
#define mxsfb_is_v3(host) (host->devdata->ipversion == 3)
@@ -456,9 +457,9 @@
vdctrl0 |= VDCTRL0_HSYNC_ACT_HIGH;
if (fb_info->var.sync & FB_SYNC_VERT_HIGH_ACT)
vdctrl0 |= VDCTRL0_VSYNC_ACT_HIGH;
- if (fb_info->var.sync & FB_SYNC_DATA_ENABLE_HIGH_ACT)
+ if (host->sync & MXSFB_SYNC_DATA_ENABLE_HIGH_ACT)
vdctrl0 |= VDCTRL0_ENABLE_ACT_HIGH;
- if (fb_info->var.sync & FB_SYNC_DOTCLK_FAILING_ACT)
+ if (host->sync & MXSFB_SYNC_DOTCLK_FAILING_ACT)
vdctrl0 |= VDCTRL0_DOTCLK_ACT_FAILING;
writel(vdctrl0, host->base + LCDC_VDCTRL0);
@@ -861,6 +862,8 @@
INIT_LIST_HEAD(&fb_info->modelist);
+ host->sync = pdata->sync;
+
ret = mxsfb_init_fbinfo(host);
if (ret != 0)
goto error_init_fb;
diff --git a/drivers/watchdog/sp5100_tco.c b/drivers/watchdog/sp5100_tco.c
index e3b8f75..0e9d8c4 100644
--- a/drivers/watchdog/sp5100_tco.c
+++ b/drivers/watchdog/sp5100_tco.c
@@ -40,13 +40,12 @@
#include "sp5100_tco.h"
/* Module and version information */
-#define TCO_VERSION "0.03"
+#define TCO_VERSION "0.05"
#define TCO_MODULE_NAME "SP5100 TCO timer"
#define TCO_DRIVER_NAME TCO_MODULE_NAME ", v" TCO_VERSION
/* internal variables */
static u32 tcobase_phys;
-static u32 resbase_phys;
static u32 tco_wdt_fired;
static void __iomem *tcobase;
static unsigned int pm_iobase;
@@ -54,10 +53,6 @@
static unsigned long timer_alive;
static char tco_expect_close;
static struct pci_dev *sp5100_tco_pci;
-static struct resource wdt_res = {
- .name = "Watchdog Timer",
- .flags = IORESOURCE_MEM,
-};
/* the watchdog platform device */
static struct platform_device *sp5100_tco_platform_device;
@@ -75,12 +70,6 @@
MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started."
" (default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");
-static unsigned int force_addr;
-module_param(force_addr, uint, 0);
-MODULE_PARM_DESC(force_addr, "Force the use of specified MMIO address."
- " ONLY USE THIS PARAMETER IF YOU REALLY KNOW"
- " WHAT YOU ARE DOING (default=none)");
-
/*
* Some TCO specific functions
*/
@@ -176,39 +165,6 @@
}
}
-static void tco_timer_disable(void)
-{
- int val;
-
- if (sp5100_tco_pci->revision >= 0x40) {
- /* For SB800 or later */
- /* Enable watchdog decode bit and Disable watchdog timer */
- outb(SB800_PM_WATCHDOG_CONTROL, SB800_IO_PM_INDEX_REG);
- val = inb(SB800_IO_PM_DATA_REG);
- val |= SB800_PCI_WATCHDOG_DECODE_EN;
- val |= SB800_PM_WATCHDOG_DISABLE;
- outb(val, SB800_IO_PM_DATA_REG);
- } else {
- /* For SP5100 or SB7x0 */
- /* Enable watchdog decode bit */
- pci_read_config_dword(sp5100_tco_pci,
- SP5100_PCI_WATCHDOG_MISC_REG,
- &val);
-
- val |= SP5100_PCI_WATCHDOG_DECODE_EN;
-
- pci_write_config_dword(sp5100_tco_pci,
- SP5100_PCI_WATCHDOG_MISC_REG,
- val);
-
- /* Disable Watchdog timer */
- outb(SP5100_PM_WATCHDOG_CONTROL, SP5100_IO_PM_INDEX_REG);
- val = inb(SP5100_IO_PM_DATA_REG);
- val |= SP5100_PM_WATCHDOG_DISABLE;
- outb(val, SP5100_IO_PM_DATA_REG);
- }
-}
-
/*
* /dev/watchdog handling
*/
@@ -361,7 +317,7 @@
{
struct pci_dev *dev = NULL;
const char *dev_name = NULL;
- u32 val, tmp_val;
+ u32 val;
u32 index_reg, data_reg, base_addr;
/* Match the PCI device */
@@ -459,63 +415,8 @@
} else
pr_debug("SBResource_MMIO is disabled(0x%04x)\n", val);
- /*
- * Lastly re-programming the watchdog timer MMIO address,
- * This method is a last resort...
- *
- * Before re-programming, to ensure that the watchdog timer
- * is disabled, disable the watchdog timer.
- */
- tco_timer_disable();
-
- if (force_addr) {
- /*
- * Force the use of watchdog timer MMIO address, and aligned to
- * 8byte boundary.
- */
- force_addr &= ~0x7;
- val = force_addr;
-
- pr_info("Force the use of 0x%04x as MMIO address\n", val);
- } else {
- /*
- * Get empty slot into the resource tree for watchdog timer.
- */
- if (allocate_resource(&iomem_resource,
- &wdt_res,
- SP5100_WDT_MEM_MAP_SIZE,
- 0xf0000000,
- 0xfffffff8,
- 0x8,
- NULL,
- NULL)) {
- pr_err("MMIO allocation failed\n");
- goto unreg_region;
- }
-
- val = resbase_phys = wdt_res.start;
- pr_debug("Got 0x%04x from resource tree\n", val);
- }
-
- /* Restore to the low three bits */
- outb(base_addr+0, index_reg);
- tmp_val = val | (inb(data_reg) & 0x7);
-
- /* Re-programming the watchdog timer base address */
- outb(base_addr+0, index_reg);
- outb((tmp_val >> 0) & 0xff, data_reg);
- outb(base_addr+1, index_reg);
- outb((tmp_val >> 8) & 0xff, data_reg);
- outb(base_addr+2, index_reg);
- outb((tmp_val >> 16) & 0xff, data_reg);
- outb(base_addr+3, index_reg);
- outb((tmp_val >> 24) & 0xff, data_reg);
-
- if (!request_mem_region_exclusive(val, SP5100_WDT_MEM_MAP_SIZE,
- dev_name)) {
- pr_err("MMIO address 0x%04x already in use\n", val);
- goto unreg_resource;
- }
+ pr_notice("failed to find MMIO address, giving up.\n");
+ goto unreg_region;
setup_wdt:
tcobase_phys = val;
@@ -555,9 +456,6 @@
unreg_mem_region:
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
-unreg_resource:
- if (resbase_phys)
- release_resource(&wdt_res);
unreg_region:
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
exit:
@@ -567,7 +465,6 @@
static int sp5100_tco_init(struct platform_device *dev)
{
int ret;
- char addr_str[16];
/*
* Check whether or not the hardware watchdog is there. If found, then
@@ -599,23 +496,14 @@
clear_bit(0, &timer_alive);
/* Show module parameters */
- if (force_addr == tcobase_phys)
- /* The force_addr is vaild */
- sprintf(addr_str, "0x%04x", force_addr);
- else
- strcpy(addr_str, "none");
-
- pr_info("initialized (0x%p). heartbeat=%d sec (nowayout=%d, "
- "force_addr=%s)\n",
- tcobase, heartbeat, nowayout, addr_str);
+ pr_info("initialized (0x%p). heartbeat=%d sec (nowayout=%d)\n",
+ tcobase, heartbeat, nowayout);
return 0;
exit:
iounmap(tcobase);
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
- if (resbase_phys)
- release_resource(&wdt_res);
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
return ret;
}
@@ -630,8 +518,6 @@
misc_deregister(&sp5100_tco_miscdev);
iounmap(tcobase);
release_mem_region(tcobase_phys, SP5100_WDT_MEM_MAP_SIZE);
- if (resbase_phys)
- release_resource(&wdt_res);
release_region(pm_iobase, SP5100_PM_IOPORTS_SIZE);
}
diff --git a/drivers/watchdog/sp5100_tco.h b/drivers/watchdog/sp5100_tco.h
index 71594a0..2b28c00 100644
--- a/drivers/watchdog/sp5100_tco.h
+++ b/drivers/watchdog/sp5100_tco.h
@@ -57,7 +57,7 @@
#define SB800_PM_WATCHDOG_DISABLE (1 << 2)
#define SB800_PM_WATCHDOG_SECOND_RES (3 << 0)
#define SB800_ACPI_MMIO_DECODE_EN (1 << 0)
-#define SB800_ACPI_MMIO_SEL (1 << 2)
+#define SB800_ACPI_MMIO_SEL (1 << 1)
#define SB800_PM_WDT_MMIO_OFFSET 0xB00
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 5a32232..67af155 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -182,7 +182,7 @@
config XEN_STUB
bool "Xen stub drivers"
- depends on XEN && X86_64
+ depends on XEN && X86_64 && BROKEN
default n
help
Allow kernel to install stub drivers, to reserve space for Xen drivers,
diff --git a/drivers/xen/events.c b/drivers/xen/events.c
index d17aa41..aa85881 100644
--- a/drivers/xen/events.c
+++ b/drivers/xen/events.c
@@ -403,11 +403,23 @@
if (unlikely((cpu != cpu_from_evtchn(port))))
do_hypercall = 1;
- else
+ else {
+ /*
+ * Need to clear the mask before checking pending to
+ * avoid a race with an event becoming pending.
+ *
+ * EVTCHNOP_unmask will only trigger an upcall if the
+ * mask bit was set, so if a hypercall is needed
+ * remask the event.
+ */
+ sync_clear_bit(port, BM(&s->evtchn_mask[0]));
evtchn_pending = sync_test_bit(port, BM(&s->evtchn_pending[0]));
- if (unlikely(evtchn_pending && xen_hvm_domain()))
- do_hypercall = 1;
+ if (unlikely(evtchn_pending && xen_hvm_domain())) {
+ sync_set_bit(port, BM(&s->evtchn_mask[0]));
+ do_hypercall = 1;
+ }
+ }
/* Slow path (hypercall) if this is a non-local port or if this is
* an hvm domain and an event is pending (hvm domains don't have
@@ -418,8 +430,6 @@
} else {
struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
- sync_clear_bit(port, BM(&s->evtchn_mask[0]));
-
/*
* The following is basically the equivalent of
* 'hw_resend_irq'. Just like a real IO-APIC we 'lose
diff --git a/drivers/xen/fallback.c b/drivers/xen/fallback.c
index 0ef7c4d..b04fb64 100644
--- a/drivers/xen/fallback.c
+++ b/drivers/xen/fallback.c
@@ -44,7 +44,7 @@
}
EXPORT_SYMBOL_GPL(xen_event_channel_op_compat);
-int HYPERVISOR_physdev_op_compat(int cmd, void *arg)
+int xen_physdev_op_compat(int cmd, void *arg)
{
struct physdev_op op;
int rc;
@@ -78,3 +78,4 @@
return rc;
}
+EXPORT_SYMBOL_GPL(xen_physdev_op_compat);
diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
index f3278a6..90e34ac 100644
--- a/drivers/xen/xen-acpi-processor.c
+++ b/drivers/xen/xen-acpi-processor.c
@@ -505,6 +505,9 @@
pr = per_cpu(processors, i);
perf = per_cpu_ptr(acpi_perf_data, i);
+ if (!pr)
+ continue;
+
pr->performance = perf;
rc = acpi_processor_get_performance_info(pr);
if (rc)
diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
index 9204126..a2278ba 100644
--- a/drivers/xen/xen-pciback/pci_stub.c
+++ b/drivers/xen/xen-pciback/pci_stub.c
@@ -17,6 +17,7 @@
#include <xen/events.h>
#include <asm/xen/pci.h>
#include <asm/xen/hypervisor.h>
+#include <xen/interface/physdev.h>
#include "pciback.h"
#include "conf_space.h"
#include "conf_space_quirks.h"
@@ -85,37 +86,52 @@
static void pcistub_device_release(struct kref *kref)
{
struct pcistub_device *psdev;
+ struct pci_dev *dev;
struct xen_pcibk_dev_data *dev_data;
psdev = container_of(kref, struct pcistub_device, kref);
- dev_data = pci_get_drvdata(psdev->dev);
+ dev = psdev->dev;
+ dev_data = pci_get_drvdata(dev);
- dev_dbg(&psdev->dev->dev, "pcistub_device_release\n");
+ dev_dbg(&dev->dev, "pcistub_device_release\n");
- xen_unregister_device_domain_owner(psdev->dev);
+ xen_unregister_device_domain_owner(dev);
/* Call the reset function which does not take lock as this
* is called from "unbind" which takes a device_lock mutex.
*/
- __pci_reset_function_locked(psdev->dev);
- if (pci_load_and_free_saved_state(psdev->dev,
- &dev_data->pci_saved_state)) {
- dev_dbg(&psdev->dev->dev, "Could not reload PCI state\n");
- } else
- pci_restore_state(psdev->dev);
+ __pci_reset_function_locked(dev);
+ if (pci_load_and_free_saved_state(dev, &dev_data->pci_saved_state))
+ dev_dbg(&dev->dev, "Could not reload PCI state\n");
+ else
+ pci_restore_state(dev);
+
+ if (pci_find_capability(dev, PCI_CAP_ID_MSIX)) {
+ struct physdev_pci_device ppdev = {
+ .seg = pci_domain_nr(dev->bus),
+ .bus = dev->bus->number,
+ .devfn = dev->devfn
+ };
+ int err = HYPERVISOR_physdev_op(PHYSDEVOP_release_msix,
+ &ppdev);
+
+ if (err)
+ dev_warn(&dev->dev, "MSI-X release failed (%d)\n",
+ err);
+ }
/* Disable the device */
- xen_pcibk_reset_device(psdev->dev);
+ xen_pcibk_reset_device(dev);
kfree(dev_data);
- pci_set_drvdata(psdev->dev, NULL);
+ pci_set_drvdata(dev, NULL);
/* Clean-up the device */
- xen_pcibk_config_free_dyn_fields(psdev->dev);
- xen_pcibk_config_free_dev(psdev->dev);
+ xen_pcibk_config_free_dyn_fields(dev);
+ xen_pcibk_config_free_dev(dev);
- psdev->dev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
- pci_dev_put(psdev->dev);
+ dev->dev_flags &= ~PCI_DEV_FLAGS_ASSIGNED;
+ pci_dev_put(dev);
kfree(psdev);
}
@@ -355,6 +371,19 @@
if (err)
goto config_release;
+ if (pci_find_capability(dev, PCI_CAP_ID_MSIX)) {
+ struct physdev_pci_device ppdev = {
+ .seg = pci_domain_nr(dev->bus),
+ .bus = dev->bus->number,
+ .devfn = dev->devfn
+ };
+
+ err = HYPERVISOR_physdev_op(PHYSDEVOP_prepare_msix, &ppdev);
+ if (err)
+ dev_err(&dev->dev, "MSI-X preparation failed (%d)\n",
+ err);
+ }
+
/* We need the device active to save the state. */
dev_dbg(&dev->dev, "save state of device\n");
pci_save_state(dev);
diff --git a/firmware/Makefile b/firmware/Makefile
index cbb09ce..5d8ee13 100644
--- a/firmware/Makefile
+++ b/firmware/Makefile
@@ -82,7 +82,7 @@
fw-shipped-$(CONFIG_SCSI_QLOGIC_1280) += qlogic/1040.bin qlogic/1280.bin \
qlogic/12160.bin
fw-shipped-$(CONFIG_SCSI_QLOGICPTI) += qlogic/isp1000.bin
-fw-shipped-$(CONFIG_INFINIBAND_QIB) += qlogic/sd7220.fw
+fw-shipped-$(CONFIG_INFINIBAND_QIB) += intel/sd7220.fw
fw-shipped-$(CONFIG_SND_KORG1212) += korg/k1212.dsp
fw-shipped-$(CONFIG_SND_MAESTRO3) += ess/maestro3_assp_kernel.fw \
ess/maestro3_assp_minisrc.fw
diff --git a/firmware/qlogic/sd7220.fw.ihex b/firmware/intel/sd7220.fw.ihex
similarity index 100%
rename from firmware/qlogic/sd7220.fw.ihex
rename to firmware/intel/sd7220.fw.ihex
diff --git a/fs/cifs/asn1.c b/fs/cifs/asn1.c
index cfd1ce3..1d36db1 100644
--- a/fs/cifs/asn1.c
+++ b/fs/cifs/asn1.c
@@ -614,53 +614,10 @@
}
}
- /* mechlistMIC */
- if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) {
- /* Check if we have reached the end of the blob, but with
- no mechListMic (e.g. NTLMSSP instead of KRB5) */
- if (ctx.error == ASN1_ERR_DEC_EMPTY)
- goto decode_negtoken_exit;
- cFYI(1, "Error decoding last part negTokenInit exit3");
- return 0;
- } else if ((cls != ASN1_CTX) || (con != ASN1_CON)) {
- /* tag = 3 indicating mechListMIC */
- cFYI(1, "Exit 4 cls = %d con = %d tag = %d end = %p (%d)",
- cls, con, tag, end, *end);
- return 0;
- }
-
- /* sequence */
- if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) {
- cFYI(1, "Error decoding last part negTokenInit exit5");
- return 0;
- } else if ((cls != ASN1_UNI) || (con != ASN1_CON)
- || (tag != ASN1_SEQ)) {
- cFYI(1, "cls = %d con = %d tag = %d end = %p (%d)",
- cls, con, tag, end, *end);
- }
-
- /* sequence of */
- if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) {
- cFYI(1, "Error decoding last part negTokenInit exit 7");
- return 0;
- } else if ((cls != ASN1_CTX) || (con != ASN1_CON)) {
- cFYI(1, "Exit 8 cls = %d con = %d tag = %d end = %p (%d)",
- cls, con, tag, end, *end);
- return 0;
- }
-
- /* general string */
- if (asn1_header_decode(&ctx, &end, &cls, &con, &tag) == 0) {
- cFYI(1, "Error decoding last part negTokenInit exit9");
- return 0;
- } else if ((cls != ASN1_UNI) || (con != ASN1_PRI)
- || (tag != ASN1_GENSTR)) {
- cFYI(1, "Exit10 cls = %d con = %d tag = %d end = %p (%d)",
- cls, con, tag, end, *end);
- return 0;
- }
- cFYI(1, "Need to call asn1_octets_decode() function for %s",
- ctx.pointer); /* is this UTF-8 or ASCII? */
-decode_negtoken_exit:
+ /*
+ * We currently ignore anything at the end of the SPNEGO blob after
+ * the mechTypes have been parsed, since none of that info is
+ * used at the moment.
+ */
return 1;
}
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 3cf8a15..345fc89 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -91,6 +91,30 @@
__u8 cifs_client_guid[SMB2_CLIENT_GUID_SIZE];
#endif
+/*
+ * Bumps refcount for cifs super block.
+ * Note that it should only be called if a reference to the VFS super block is

+ * already held, e.g. in open-type syscalls context. Otherwise it can race with
+ * atomic_dec_and_test in deactivate_locked_super.
+ */
+void
+cifs_sb_active(struct super_block *sb)
+{
+ struct cifs_sb_info *server = CIFS_SB(sb);
+
+ if (atomic_inc_return(&server->active) == 1)
+ atomic_inc(&sb->s_active);
+}
+
+void
+cifs_sb_deactive(struct super_block *sb)
+{
+ struct cifs_sb_info *server = CIFS_SB(sb);
+
+ if (atomic_dec_and_test(&server->active))
+ deactivate_super(sb);
+}
+
static int
cifs_read_super(struct super_block *sb)
{
diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
index 7163419..0e32c34 100644
--- a/fs/cifs/cifsfs.h
+++ b/fs/cifs/cifsfs.h
@@ -41,6 +41,10 @@
extern const struct address_space_operations cifs_addr_ops;
extern const struct address_space_operations cifs_addr_ops_smallbuf;
+/* Functions related to super block operations */
+extern void cifs_sb_active(struct super_block *sb);
+extern void cifs_sb_deactive(struct super_block *sb);
+
/* Functions related to inodes */
extern const struct inode_operations cifs_dir_inode_ops;
extern struct inode *cifs_root_iget(struct super_block *);
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 8c0d855..7a0dd99 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -300,6 +300,8 @@
INIT_WORK(&cfile->oplock_break, cifs_oplock_break);
mutex_init(&cfile->fh_mutex);
+ cifs_sb_active(inode->i_sb);
+
/*
* If the server returned a read oplock and we have mandatory brlocks,
* set oplock level to None.
@@ -349,7 +351,8 @@
struct cifs_tcon *tcon = tlink_tcon(cifs_file->tlink);
struct TCP_Server_Info *server = tcon->ses->server;
struct cifsInodeInfo *cifsi = CIFS_I(inode);
- struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ struct super_block *sb = inode->i_sb;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
struct cifsLockInfo *li, *tmp;
struct cifs_fid fid;
struct cifs_pending_open open;
@@ -414,6 +417,7 @@
cifs_put_tlink(cifs_file->tlink);
dput(cifs_file->dentry);
+ cifs_sb_deactive(sb);
kfree(cifs_file);
}
diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
index 0079696..20887bf 100644
--- a/fs/cifs/inode.c
+++ b/fs/cifs/inode.c
@@ -1043,7 +1043,7 @@
cifs_sb->mnt_cifs_flags &
CIFS_MOUNT_MAP_SPECIAL_CHR);
if (rc != 0) {
- rc = -ETXTBSY;
+ rc = -EBUSY;
goto undo_setattr;
}
@@ -1062,7 +1062,7 @@
if (rc == -ENOENT)
rc = 0;
else if (rc != 0) {
- rc = -ETXTBSY;
+ rc = -EBUSY;
goto undo_rename;
}
cifsInode->delete_pending = true;
@@ -1169,15 +1169,13 @@
cifs_drop_nlink(inode);
} else if (rc == -ENOENT) {
d_drop(dentry);
- } else if (rc == -ETXTBSY) {
+ } else if (rc == -EBUSY) {
if (server->ops->rename_pending_delete) {
rc = server->ops->rename_pending_delete(full_path,
dentry, xid);
if (rc == 0)
cifs_drop_nlink(inode);
}
- if (rc == -ETXTBSY)
- rc = -EBUSY;
} else if ((rc == -EACCES) && (dosattr == 0) && inode) {
attrs = kzalloc(sizeof(*attrs), GFP_KERNEL);
if (attrs == NULL) {
@@ -1518,7 +1516,7 @@
* source. Note that cross directory moves do not work with
* rename by filehandle to various Windows servers.
*/
- if (rc == 0 || rc != -ETXTBSY)
+ if (rc == 0 || rc != -EBUSY)
goto do_rename_exit;
/* open-file renames don't work across directories */
diff --git a/fs/cifs/netmisc.c b/fs/cifs/netmisc.c
index a82bc51..c0b25b2 100644
--- a/fs/cifs/netmisc.c
+++ b/fs/cifs/netmisc.c
@@ -62,7 +62,7 @@
{ERRdiffdevice, -EXDEV},
{ERRnofiles, -ENOENT},
{ERRwriteprot, -EROFS},
- {ERRbadshare, -ETXTBSY},
+ {ERRbadshare, -EBUSY},
{ERRlock, -EACCES},
{ERRunsup, -EINVAL},
{ERRnosuchshare, -ENXIO},
diff --git a/fs/dcache.c b/fs/dcache.c
index fbfae008..e8bc342 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2542,7 +2542,6 @@
bool slash = false;
int error = 0;
- br_read_lock(&vfsmount_lock);
while (dentry != root->dentry || vfsmnt != root->mnt) {
struct dentry * parent;
@@ -2572,8 +2571,6 @@
if (!error && !slash)
error = prepend(buffer, buflen, "/", 1);
-out:
- br_read_unlock(&vfsmount_lock);
return error;
global_root:
@@ -2590,7 +2587,7 @@
error = prepend(buffer, buflen, "/", 1);
if (!error)
error = is_mounted(vfsmnt) ? 1 : 2;
- goto out;
+ return error;
}
/**
@@ -2617,9 +2614,11 @@
int error;
prepend(&res, &buflen, "\0", 1);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = prepend_path(path, root, &res, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
return ERR_PTR(error);
@@ -2636,9 +2635,11 @@
int error;
prepend(&res, &buflen, "\0", 1);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = prepend_path(path, &root, &res, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error > 1)
error = -EINVAL;
@@ -2702,11 +2703,13 @@
return path->dentry->d_op->d_dname(path->dentry, buf, buflen);
get_fs_root(current->fs, &root);
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
error = path_with_deleted(path, &root, &res, &buflen);
+ write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
res = ERR_PTR(error);
- write_sequnlock(&rename_lock);
path_put(&root);
return res;
}
@@ -2830,6 +2833,7 @@
get_fs_root_and_pwd(current->fs, &root, &pwd);
error = -ENOENT;
+ br_read_lock(&vfsmount_lock);
write_seqlock(&rename_lock);
if (!d_unlinked(pwd.dentry)) {
unsigned long len;
@@ -2839,6 +2843,7 @@
prepend(&cwd, &buflen, "\0", 1);
error = prepend_path(&pwd, &root, &cwd, &buflen);
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
if (error < 0)
goto out;
@@ -2859,6 +2864,7 @@
}
} else {
write_sequnlock(&rename_lock);
+ br_read_unlock(&vfsmount_lock);
}
out:
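
prepend_path() itself no longer takes vfsmount_lock; instead every caller now takes the br read lock before rename_lock's seqlock and releases them in reverse order, keeping the lock ordering identical across d_path(), d_absolute_path(), __d_path() and getcwd(). A condensed sketch of the caller-side pattern as it would appear inside fs/dcache.c (my_build_path stands in for those callers and skips their individual error conventions):

static char *my_build_path(const struct path *path, const struct path *root,
			   char *buf, int buflen)
{
	char *res = buf + buflen;
	int error;

	prepend(&res, &buflen, "\0", 1);
	br_read_lock(&vfsmount_lock);		/* pin the mount tree ... */
	write_seqlock(&rename_lock);		/* ... then stabilise renames */
	error = prepend_path(path, root, &res, &buflen);
	write_sequnlock(&rename_lock);
	br_read_unlock(&vfsmount_lock);		/* release in reverse order */

	return error < 0 ? ERR_PTR(error) : res;
}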
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 4a01ba3..3b83cd6 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -335,9 +335,9 @@
*/
struct flex_groups {
- atomic_t free_inodes;
- atomic_t free_clusters;
- atomic_t used_dirs;
+ atomic64_t free_clusters;
+ atomic_t free_inodes;
+ atomic_t used_dirs;
};
#define EXT4_BG_INODE_UNINIT 0x0001 /* Inode table/bitmap not in use */
@@ -2617,7 +2617,7 @@
extern int __init ext4_init_pageio(void);
extern void ext4_add_complete_io(ext4_io_end_t *io_end);
extern void ext4_exit_pageio(void);
-extern void ext4_ioend_wait(struct inode *);
+extern void ext4_ioend_shutdown(struct inode *);
extern void ext4_free_io_end(ext4_io_end_t *io);
extern ext4_io_end_t *ext4_init_io_end(struct inode *inode, gfp_t flags);
extern void ext4_end_io_work(struct work_struct *work);
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 28dd8ee..56efcaa 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -1584,10 +1584,12 @@
unsigned short ext1_ee_len, ext2_ee_len, max_len;
/*
- * Make sure that either both extents are uninitialized, or
- * both are _not_.
+ * Make sure that both extents are initialized. We don't merge
+ * uninitialized extents so that we can be sure that end_io code has
+ * the extent that was written properly split out and conversion to
+ * initialized is trivial.
*/
- if (ext4_ext_is_uninitialized(ex1) ^ ext4_ext_is_uninitialized(ex2))
+ if (ext4_ext_is_uninitialized(ex1) || ext4_ext_is_uninitialized(ex2))
return 0;
if (ext4_ext_is_uninitialized(ex1))
@@ -2923,7 +2925,7 @@
{
ext4_fsblk_t newblock;
ext4_lblk_t ee_block;
- struct ext4_extent *ex, newex, orig_ex;
+ struct ext4_extent *ex, newex, orig_ex, zero_ex;
struct ext4_extent *ex2 = NULL;
unsigned int ee_len, depth;
int err = 0;
@@ -2943,6 +2945,10 @@
newblock = split - ee_block + ext4_ext_pblock(ex);
BUG_ON(split < ee_block || split >= (ee_block + ee_len));
+ BUG_ON(!ext4_ext_is_uninitialized(ex) &&
+ split_flag & (EXT4_EXT_MAY_ZEROOUT |
+ EXT4_EXT_MARK_UNINIT1 |
+ EXT4_EXT_MARK_UNINIT2));
err = ext4_ext_get_access(handle, inode, path + depth);
if (err)
@@ -2990,12 +2996,26 @@
err = ext4_ext_insert_extent(handle, inode, path, &newex, flags);
if (err == -ENOSPC && (EXT4_EXT_MAY_ZEROOUT & split_flag)) {
if (split_flag & (EXT4_EXT_DATA_VALID1|EXT4_EXT_DATA_VALID2)) {
- if (split_flag & EXT4_EXT_DATA_VALID1)
+ if (split_flag & EXT4_EXT_DATA_VALID1) {
err = ext4_ext_zeroout(inode, ex2);
- else
+ zero_ex.ee_block = ex2->ee_block;
+ zero_ex.ee_len = ext4_ext_get_actual_len(ex2);
+ ext4_ext_store_pblock(&zero_ex,
+ ext4_ext_pblock(ex2));
+ } else {
err = ext4_ext_zeroout(inode, ex);
- } else
+ zero_ex.ee_block = ex->ee_block;
+ zero_ex.ee_len = ext4_ext_get_actual_len(ex);
+ ext4_ext_store_pblock(&zero_ex,
+ ext4_ext_pblock(ex));
+ }
+ } else {
err = ext4_ext_zeroout(inode, &orig_ex);
+ zero_ex.ee_block = orig_ex.ee_block;
+ zero_ex.ee_len = ext4_ext_get_actual_len(&orig_ex);
+ ext4_ext_store_pblock(&zero_ex,
+ ext4_ext_pblock(&orig_ex));
+ }
if (err)
goto fix_extent_len;
@@ -3003,6 +3023,12 @@
ex->ee_len = cpu_to_le16(ee_len);
ext4_ext_try_to_merge(handle, inode, path, ex);
err = ext4_ext_dirty(handle, inode, path + path->p_depth);
+ if (err)
+ goto fix_extent_len;
+
+ /* update extent status tree */
+ err = ext4_es_zeroout(inode, &zero_ex);
+
goto out;
} else if (err)
goto fix_extent_len;
@@ -3041,6 +3067,7 @@
int err = 0;
int uninitialized;
int split_flag1, flags1;
+ int allocated = map->m_len;
depth = ext_depth(inode);
ex = path[depth].p_ext;
@@ -3060,20 +3087,29 @@
map->m_lblk + map->m_len, split_flag1, flags1);
if (err)
goto out;
+ } else {
+ allocated = ee_len - (map->m_lblk - ee_block);
}
-
+ /*
+ * Update path is required because previous ext4_split_extent_at() may
+ * result in split of original leaf or extent zeroout.
+ */
ext4_ext_drop_refs(path);
path = ext4_ext_find_extent(inode, map->m_lblk, path);
if (IS_ERR(path))
return PTR_ERR(path);
+ depth = ext_depth(inode);
+ ex = path[depth].p_ext;
+ uninitialized = ext4_ext_is_uninitialized(ex);
+ split_flag1 = 0;
if (map->m_lblk >= ee_block) {
- split_flag1 = split_flag & (EXT4_EXT_MAY_ZEROOUT |
- EXT4_EXT_DATA_VALID2);
- if (uninitialized)
+ split_flag1 = split_flag & EXT4_EXT_DATA_VALID2;
+ if (uninitialized) {
split_flag1 |= EXT4_EXT_MARK_UNINIT1;
- if (split_flag & EXT4_EXT_MARK_UNINIT2)
- split_flag1 |= EXT4_EXT_MARK_UNINIT2;
+ split_flag1 |= split_flag & (EXT4_EXT_MAY_ZEROOUT |
+ EXT4_EXT_MARK_UNINIT2);
+ }
err = ext4_split_extent_at(handle, inode, path,
map->m_lblk, split_flag1, flags);
if (err)
@@ -3082,7 +3118,7 @@
ext4_ext_show_leaf(inode, path);
out:
- return err ? err : map->m_len;
+ return err ? err : allocated;
}
/*
@@ -3137,6 +3173,7 @@
ee_block = le32_to_cpu(ex->ee_block);
ee_len = ext4_ext_get_actual_len(ex);
allocated = ee_len - (map->m_lblk - ee_block);
+ zero_ex.ee_len = 0;
trace_ext4_ext_convert_to_initialized_enter(inode, map, ex);
@@ -3227,13 +3264,16 @@
if (EXT4_EXT_MAY_ZEROOUT & split_flag)
max_zeroout = sbi->s_extent_max_zeroout_kb >>
- inode->i_sb->s_blocksize_bits;
+ (inode->i_sb->s_blocksize_bits - 10);
/* If extent is less than s_max_zeroout_kb, zeroout directly */
if (max_zeroout && (ee_len <= max_zeroout)) {
err = ext4_ext_zeroout(inode, ex);
if (err)
goto out;
+ zero_ex.ee_block = ex->ee_block;
+ zero_ex.ee_len = ext4_ext_get_actual_len(ex);
+ ext4_ext_store_pblock(&zero_ex, ext4_ext_pblock(ex));
err = ext4_ext_get_access(handle, inode, path + depth);
if (err)
@@ -3292,6 +3332,9 @@
err = allocated;
out:
+ /* If we have gotten a failure, don't zero out status tree */
+ if (!err)
+ err = ext4_es_zeroout(inode, &zero_ex);
return err ? err : allocated;
}
@@ -3374,8 +3417,19 @@
"block %llu, max_blocks %u\n", inode->i_ino,
(unsigned long long)ee_block, ee_len);
- /* If extent is larger than requested then split is required */
+	/* If the extent is larger than requested, it is a clear sign that we
+	 * still have some extent state machine issues left, so an extent split
+	 * is still required.
+	 * TODO: Once all related issues are fixed, this situation should be
+	 * illegal.
+ */
if (ee_block != map->m_lblk || ee_len > map->m_len) {
+#ifdef EXT4_DEBUG
+ ext4_warning("Inode (%ld) finished: extent logical block %llu,"
+ " len %u; IO logical block %llu, len %u\n",
+ inode->i_ino, (unsigned long long)ee_block, ee_len,
+ (unsigned long long)map->m_lblk, map->m_len);
+#endif
err = ext4_split_unwritten_extents(handle, inode, map, path,
EXT4_GET_BLOCKS_CONVERT);
if (err < 0)
@@ -3626,6 +3680,10 @@
path, map->m_len);
} else
err = ret;
+ map->m_flags |= EXT4_MAP_MAPPED;
+ if (allocated > map->m_len)
+ allocated = map->m_len;
+ map->m_len = allocated;
goto out2;
}
/* buffered IO case */
@@ -3675,6 +3733,7 @@
allocated - map->m_len);
allocated = map->m_len;
}
+ map->m_len = allocated;
/*
* If we have done fallocate with the offset that is already
@@ -4106,9 +4165,6 @@
}
} else {
BUG_ON(allocated_clusters < reserved_clusters);
- /* We will claim quota for all newly allocated blocks.*/
- ext4_da_update_reserve_space(inode, allocated_clusters,
- 1);
if (reserved_clusters < allocated_clusters) {
struct ext4_inode_info *ei = EXT4_I(inode);
int reservation = allocated_clusters -
@@ -4159,6 +4215,15 @@
ei->i_reserved_data_blocks += reservation;
spin_unlock(&ei->i_block_reservation_lock);
}
+ /*
+ * We will claim quota for all newly allocated blocks.
+ * We're updating the reserved space *after* the
+ * correction above so we do not accidentally free
+ * all the metadata reservation because we might
+ * actually need it later on.
+ */
+ ext4_da_update_reserve_space(inode, allocated_clusters,
+ 1);
}
}
@@ -4368,8 +4433,6 @@
if (len <= EXT_UNINIT_MAX_LEN << blkbits)
flags |= EXT4_GET_BLOCKS_NO_NORMALIZE;
- /* Prevent race condition between unwritten */
- ext4_flush_unwritten_io(inode);
retry:
while (ret >= 0 && ret < max_blocks) {
map.m_lblk = map.m_lblk + ret;
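
The s_extent_max_zeroout_kb hunk above changes the shift used to turn that tunable, which is expressed in KiB, into a block count: a block of 2^blocksize_bits bytes is 2^(blocksize_bits - 10) KiB. A minimal userspace sketch of the conversion; the names are illustrative, not ext4's:

#include <stdio.h>

/* Convert a limit given in KiB into filesystem blocks. */
static unsigned int kb_to_blocks(unsigned int limit_kb, unsigned int blocksize_bits)
{
        return limit_kb >> (blocksize_bits - 10);
}

int main(void)
{
        /* 32 KiB limit on a 4 KiB-block filesystem (blocksize_bits == 12) */
        printf("correct: %u blocks\n", kb_to_blocks(32, 12));  /* 8 */
        printf("old bug: %u blocks\n", 32u >> 12);             /* 0 */
        return 0;
}

With 4 KiB blocks the old shift by s_blocksize_bits itself reduced any reasonable limit to zero, effectively disabling the zeroout path.
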
diff --git a/fs/ext4/extents_status.c b/fs/ext4/extents_status.c
index 95796a1..fe3337a 100644
--- a/fs/ext4/extents_status.c
+++ b/fs/ext4/extents_status.c
@@ -333,17 +333,27 @@
static int ext4_es_can_be_merged(struct extent_status *es1,
struct extent_status *es2)
{
- if (es1->es_lblk + es1->es_len != es2->es_lblk)
- return 0;
-
if (ext4_es_status(es1) != ext4_es_status(es2))
return 0;
- if ((ext4_es_is_written(es1) || ext4_es_is_unwritten(es1)) &&
- (ext4_es_pblock(es1) + es1->es_len != ext4_es_pblock(es2)))
+ if (((__u64) es1->es_len) + es2->es_len > 0xFFFFFFFFULL)
return 0;
- return 1;
+ if (((__u64) es1->es_lblk) + es1->es_len != es2->es_lblk)
+ return 0;
+
+ if ((ext4_es_is_written(es1) || ext4_es_is_unwritten(es1)) &&
+ (ext4_es_pblock(es1) + es1->es_len == ext4_es_pblock(es2)))
+ return 1;
+
+ if (ext4_es_is_hole(es1))
+ return 1;
+
+	/* we need to check that a delayed extent does not carry the unwritten status */
+ if (ext4_es_is_delayed(es1) && !ext4_es_is_unwritten(es1))
+ return 1;
+
+ return 0;
}
static struct extent_status *
@@ -389,6 +399,179 @@
return es;
}
+#ifdef ES_AGGRESSIVE_TEST
+static void ext4_es_insert_extent_ext_check(struct inode *inode,
+ struct extent_status *es)
+{
+ struct ext4_ext_path *path = NULL;
+ struct ext4_extent *ex;
+ ext4_lblk_t ee_block;
+ ext4_fsblk_t ee_start;
+ unsigned short ee_len;
+ int depth, ee_status, es_status;
+
+ path = ext4_ext_find_extent(inode, es->es_lblk, NULL);
+ if (IS_ERR(path))
+ return;
+
+ depth = ext_depth(inode);
+ ex = path[depth].p_ext;
+
+ if (ex) {
+
+ ee_block = le32_to_cpu(ex->ee_block);
+ ee_start = ext4_ext_pblock(ex);
+ ee_len = ext4_ext_get_actual_len(ex);
+
+ ee_status = ext4_ext_is_uninitialized(ex) ? 1 : 0;
+ es_status = ext4_es_is_unwritten(es) ? 1 : 0;
+
+ /*
+		 * Make sure ex and es do not overlap when we try to insert
+ * a delayed/hole extent.
+ */
+ if (!ext4_es_is_written(es) && !ext4_es_is_unwritten(es)) {
+ if (in_range(es->es_lblk, ee_block, ee_len)) {
+				pr_warn("ES insert assertion failed for "
+					"inode: %lu we can find an extent "
+					"at block [%d/%d/%llu/%c], but we "
+					"want to add a delayed/hole extent "
+ "[%d/%d/%llu/%llx]\n",
+ inode->i_ino, ee_block, ee_len,
+ ee_start, ee_status ? 'u' : 'w',
+ es->es_lblk, es->es_len,
+ ext4_es_pblock(es), ext4_es_status(es));
+ }
+ goto out;
+ }
+
+ /*
+		 * We don't check ee_block == es->es_lblk, etc., because es
+		 * might be a part of the whole extent, or vice versa.
+ */
+ if (es->es_lblk < ee_block ||
+ ext4_es_pblock(es) != ee_start + es->es_lblk - ee_block) {
+			pr_warn("ES insert assertion failed for inode: %lu "
+ "ex_status [%d/%d/%llu/%c] != "
+ "es_status [%d/%d/%llu/%c]\n", inode->i_ino,
+ ee_block, ee_len, ee_start,
+ ee_status ? 'u' : 'w', es->es_lblk, es->es_len,
+ ext4_es_pblock(es), es_status ? 'u' : 'w');
+ goto out;
+ }
+
+ if (ee_status ^ es_status) {
+			pr_warn("ES insert assertion failed for inode: %lu "
+ "ex_status [%d/%d/%llu/%c] != "
+ "es_status [%d/%d/%llu/%c]\n", inode->i_ino,
+ ee_block, ee_len, ee_start,
+ ee_status ? 'u' : 'w', es->es_lblk, es->es_len,
+ ext4_es_pblock(es), es_status ? 'u' : 'w');
+ }
+ } else {
+ /*
+		 * We can't find an extent on disk, so we need to make sure
+		 * that we are not trying to add a written/unwritten extent.
+ */
+ if (!ext4_es_is_delayed(es) && !ext4_es_is_hole(es)) {
+			pr_warn("ES insert assertion failed for inode: %lu "
+				"can't find an extent at block %d but we want "
+				"to add a written/unwritten extent "
+ "[%d/%d/%llu/%llx]\n", inode->i_ino,
+ es->es_lblk, es->es_lblk, es->es_len,
+ ext4_es_pblock(es), ext4_es_status(es));
+ }
+ }
+out:
+ if (path) {
+ ext4_ext_drop_refs(path);
+ kfree(path);
+ }
+}
+
+static void ext4_es_insert_extent_ind_check(struct inode *inode,
+ struct extent_status *es)
+{
+ struct ext4_map_blocks map;
+ int retval;
+
+ /*
+	 * Here we call ext4_ind_map_blocks to look up a block mapping because
+	 * the 'Indirect' structure is defined in indirect.c, so we can't
+	 * access the direct/indirect tree from outside it, and it would be
+	 * too ugly to define this function in indirect.c itself.
+ */
+
+ map.m_lblk = es->es_lblk;
+ map.m_len = es->es_len;
+
+ retval = ext4_ind_map_blocks(NULL, inode, &map, 0);
+ if (retval > 0) {
+ if (ext4_es_is_delayed(es) || ext4_es_is_hole(es)) {
+ /*
+ * We want to add a delayed/hole extent but this
+ * block has been allocated.
+ */
+			pr_warn("ES insert assertion failed for inode: %lu "
+ "We can find blocks but we want to add a "
+ "delayed/hole extent [%d/%d/%llu/%llx]\n",
+ inode->i_ino, es->es_lblk, es->es_len,
+ ext4_es_pblock(es), ext4_es_status(es));
+ return;
+ } else if (ext4_es_is_written(es)) {
+ if (retval != es->es_len) {
+				pr_warn("ES insert assertion failed for "
+ "inode: %lu retval %d != es_len %d\n",
+ inode->i_ino, retval, es->es_len);
+ return;
+ }
+ if (map.m_pblk != ext4_es_pblock(es)) {
+				pr_warn("ES insert assertion failed for "
+ "inode: %lu m_pblk %llu != "
+ "es_pblk %llu\n",
+ inode->i_ino, map.m_pblk,
+ ext4_es_pblock(es));
+ return;
+ }
+ } else {
+ /*
+			 * We don't need to check the unwritten case because an
+			 * indirect-based file doesn't have unwritten extents.
+ */
+ BUG_ON(1);
+ }
+ } else if (retval == 0) {
+ if (ext4_es_is_written(es)) {
+			pr_warn("ES insert assertion failed for inode: %lu "
+				"We can't find the block but we want to add "
+				"a written extent [%d/%d/%llu/%llx]\n",
+ inode->i_ino, es->es_lblk, es->es_len,
+ ext4_es_pblock(es), ext4_es_status(es));
+ return;
+ }
+ }
+}
+
+static inline void ext4_es_insert_extent_check(struct inode *inode,
+ struct extent_status *es)
+{
+ /*
+	 * We don't need to worry about the race condition because
+	 * the caller holds i_data_sem.
+ */
+ BUG_ON(!rwsem_is_locked(&EXT4_I(inode)->i_data_sem));
+ if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+ ext4_es_insert_extent_ext_check(inode, es);
+ else
+ ext4_es_insert_extent_ind_check(inode, es);
+}
+#else
+static inline void ext4_es_insert_extent_check(struct inode *inode,
+ struct extent_status *es)
+{
+}
+#endif
+
static int __es_insert_extent(struct inode *inode, struct extent_status *newes)
{
struct ext4_es_tree *tree = &EXT4_I(inode)->i_es_tree;
@@ -471,6 +654,8 @@
ext4_es_store_status(&newes, status);
trace_ext4_es_insert_extent(inode, &newes);
+ ext4_es_insert_extent_check(inode, &newes);
+
write_lock(&EXT4_I(inode)->i_es_lock);
err = __es_remove_extent(inode, lblk, end);
if (err != 0)
@@ -669,6 +854,23 @@
return err;
}
+int ext4_es_zeroout(struct inode *inode, struct ext4_extent *ex)
+{
+ ext4_lblk_t ee_block;
+ ext4_fsblk_t ee_pblock;
+ unsigned int ee_len;
+
+ ee_block = le32_to_cpu(ex->ee_block);
+ ee_len = ext4_ext_get_actual_len(ex);
+ ee_pblock = ext4_ext_pblock(ex);
+
+ if (ee_len == 0)
+ return 0;
+
+ return ext4_es_insert_extent(inode, ee_block, ee_len, ee_pblock,
+ EXTENT_STATUS_WRITTEN);
+}
+
static int ext4_es_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
struct ext4_sb_info *sbi = container_of(shrink,
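
The new length check in ext4_es_can_be_merged() above widens to 64 bits before summing the two extent lengths. A small userspace sketch, assuming 32-bit length fields, of why the widening matters; the helper is invented for illustration:

#include <stdio.h>
#include <stdint.h>

/* Reject a merge whose combined length would not fit in 32 bits. */
static int can_merge_len(uint32_t len1, uint32_t len2)
{
        /* widen first, as the patch does */
        return ((uint64_t)len1 + len2 <= 0xFFFFFFFFULL);
}

int main(void)
{
        uint32_t a = 0x80000000u, b = 0x80000001u;

        printf("64-bit check: merge %s\n", can_merge_len(a, b) ? "ok" : "rejected");
        /* the naive 32-bit sum wraps to 1 and would look mergeable */
        printf("32-bit sum wraps to %u\n", a + b);
        return 0;
}
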
diff --git a/fs/ext4/extents_status.h b/fs/ext4/extents_status.h
index f190dfe..d8e2d4d 100644
--- a/fs/ext4/extents_status.h
+++ b/fs/ext4/extents_status.h
@@ -21,6 +21,12 @@
#endif
/*
+ * With ES_AGGRESSIVE_TEST defined, the result of extent status caching
+ * is cross-checked against the result of the old map_blocks path.
+ */
+#define ES_AGGRESSIVE_TEST__
+
+/*
* These flags live in the high bits of extent_status.es_pblk
*/
#define EXTENT_STATUS_WRITTEN (1ULL << 63)
@@ -33,6 +39,8 @@
EXTENT_STATUS_DELAYED | \
EXTENT_STATUS_HOLE)
+struct ext4_extent;
+
struct extent_status {
struct rb_node rb_node;
ext4_lblk_t es_lblk; /* first logical block extent covers */
@@ -58,6 +66,7 @@
struct extent_status *es);
extern int ext4_es_lookup_extent(struct inode *inode, ext4_lblk_t lblk,
struct extent_status *es);
+extern int ext4_es_zeroout(struct inode *inode, struct ext4_extent *ex);
static inline int ext4_es_is_written(struct extent_status *es)
{
diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
index 32fd2b9..6c5bb8d 100644
--- a/fs/ext4/ialloc.c
+++ b/fs/ext4/ialloc.c
@@ -324,8 +324,8 @@
}
struct orlov_stats {
+ __u64 free_clusters;
__u32 free_inodes;
- __u32 free_clusters;
__u32 used_dirs;
};
@@ -342,7 +342,7 @@
if (flex_size > 1) {
stats->free_inodes = atomic_read(&flex_group[g].free_inodes);
- stats->free_clusters = atomic_read(&flex_group[g].free_clusters);
+ stats->free_clusters = atomic64_read(&flex_group[g].free_clusters);
stats->used_dirs = atomic_read(&flex_group[g].used_dirs);
return;
}
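
The free_clusters field switches from a 32-bit to a 64-bit counter here and in the mballoc, resize and super hunks below. A userspace sketch of the wraparound a 32-bit counter suffers once a flex group accumulates enough clusters; plain integers stand in for the kernel atomic types:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
        uint32_t free32 = 0;
        uint64_t free64 = 0;
        uint32_t batch = 0x80000000u;   /* 2^31 clusters freed, twice */

        free32 += batch; free32 += batch;   /* wraps around to 0 */
        free64 += batch; free64 += batch;

        printf("32-bit counter after freeing 2^32 clusters: %" PRIu32 "\n", free32);
        printf("64-bit counter after freeing 2^32 clusters: %" PRIu64 "\n", free64);
        return 0;
}
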
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 9ea0cde..b3a5213 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -185,8 +185,6 @@
trace_ext4_evict_inode(inode);
- ext4_ioend_wait(inode);
-
if (inode->i_nlink) {
/*
* When journalling data dirty buffers are tracked only in the
@@ -207,7 +205,8 @@
* don't use page cache.
*/
if (ext4_should_journal_data(inode) &&
- (S_ISLNK(inode->i_mode) || S_ISREG(inode->i_mode))) {
+ (S_ISLNK(inode->i_mode) || S_ISREG(inode->i_mode)) &&
+ inode->i_ino != EXT4_JOURNAL_INO) {
journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;
tid_t commit_tid = EXT4_I(inode)->i_datasync_tid;
@@ -216,6 +215,7 @@
filemap_write_and_wait(&inode->i_data);
}
truncate_inode_pages(&inode->i_data, 0);
+ ext4_ioend_shutdown(inode);
goto no_delete;
}
@@ -225,6 +225,7 @@
if (ext4_should_order_data(inode))
ext4_begin_ordered_truncate(inode, 0);
truncate_inode_pages(&inode->i_data, 0);
+ ext4_ioend_shutdown(inode);
if (is_bad_inode(inode))
goto no_delete;
@@ -482,6 +483,58 @@
return num;
}
+#ifdef ES_AGGRESSIVE_TEST
+static void ext4_map_blocks_es_recheck(handle_t *handle,
+ struct inode *inode,
+ struct ext4_map_blocks *es_map,
+ struct ext4_map_blocks *map,
+ int flags)
+{
+ int retval;
+
+ map->m_flags = 0;
+ /*
+	 * There is a race window in which the results may differ, e.g.
+	 * xfstests #223 when dioread_nolock is enabled. The reason is that
+	 * we look up a block mapping in the extent status tree without
+	 * taking i_data_sem, so by that time the unwritten extent could
+	 * already have been converted.
+ */
+ if (!(flags & EXT4_GET_BLOCKS_NO_LOCK))
+ down_read((&EXT4_I(inode)->i_data_sem));
+ if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) {
+ retval = ext4_ext_map_blocks(handle, inode, map, flags &
+ EXT4_GET_BLOCKS_KEEP_SIZE);
+ } else {
+ retval = ext4_ind_map_blocks(handle, inode, map, flags &
+ EXT4_GET_BLOCKS_KEEP_SIZE);
+ }
+ if (!(flags & EXT4_GET_BLOCKS_NO_LOCK))
+ up_read((&EXT4_I(inode)->i_data_sem));
+ /*
+	 * Clear the EXT4_MAP_FROM_CLUSTER and EXT4_MAP_BOUNDARY flags
+	 * because they shouldn't be marked in es_map->m_flags.
+ */
+ map->m_flags &= ~(EXT4_MAP_FROM_CLUSTER | EXT4_MAP_BOUNDARY);
+
+ /*
+	 * We don't check m_len because the extent may be collapsed in the
+	 * status tree, so m_len might not be equal.
+ */
+ if (es_map->m_lblk != map->m_lblk ||
+ es_map->m_flags != map->m_flags ||
+ es_map->m_pblk != map->m_pblk) {
+		printk("ES cache assertion failed for inode: %lu "
+ "es_cached ex [%d/%d/%llu/%x] != "
+ "found ex [%d/%d/%llu/%x] retval %d flags %x\n",
+ inode->i_ino, es_map->m_lblk, es_map->m_len,
+ es_map->m_pblk, es_map->m_flags, map->m_lblk,
+ map->m_len, map->m_pblk, map->m_flags,
+ retval, flags);
+ }
+}
+#endif /* ES_AGGRESSIVE_TEST */
+
/*
* The ext4_map_blocks() function tries to look up the requested blocks,
* and returns if the blocks are already mapped.
@@ -509,6 +562,11 @@
{
struct extent_status es;
int retval;
+#ifdef ES_AGGRESSIVE_TEST
+ struct ext4_map_blocks orig_map;
+
+ memcpy(&orig_map, map, sizeof(*map));
+#endif
map->m_flags = 0;
ext_debug("ext4_map_blocks(): inode %lu, flag %d, max_blocks %u,"
@@ -531,6 +589,10 @@
} else {
BUG_ON(1);
}
+#ifdef ES_AGGRESSIVE_TEST
+ ext4_map_blocks_es_recheck(handle, inode, map,
+ &orig_map, flags);
+#endif
goto found;
}
@@ -551,6 +613,15 @@
int ret;
unsigned long long status;
+#ifdef ES_AGGRESSIVE_TEST
+ if (retval != map->m_len) {
+			printk("ES len assertion failed for inode: %lu "
+ "retval %d != map->m_len %d "
+ "in %s (lookup)\n", inode->i_ino, retval,
+ map->m_len, __func__);
+ }
+#endif
+
status = map->m_flags & EXT4_MAP_UNWRITTEN ?
EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
@@ -643,6 +714,24 @@
int ret;
unsigned long long status;
+#ifdef ES_AGGRESSIVE_TEST
+ if (retval != map->m_len) {
+			printk("ES len assertion failed for inode: %lu "
+ "retval %d != map->m_len %d "
+ "in %s (allocation)\n", inode->i_ino, retval,
+ map->m_len, __func__);
+ }
+#endif
+
+ /*
+ * If the extent has been zeroed out, we don't need to update
+ * extent status tree.
+ */
+ if ((flags & EXT4_GET_BLOCKS_PRE_IO) &&
+ ext4_es_lookup_extent(inode, map->m_lblk, &es)) {
+ if (ext4_es_is_written(&es))
+ goto has_zeroout;
+ }
status = map->m_flags & EXT4_MAP_UNWRITTEN ?
EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) &&
@@ -655,6 +744,7 @@
retval = ret;
}
+has_zeroout:
up_write((&EXT4_I(inode)->i_data_sem));
if (retval > 0 && map->m_flags & EXT4_MAP_MAPPED) {
int ret = check_block_validity(inode, map);
@@ -1216,6 +1306,55 @@
}
/*
+ * Reserve metadata for a single block located at lblock
+ */
+static int ext4_da_reserve_metadata(struct inode *inode, ext4_lblk_t lblock)
+{
+ int retries = 0;
+ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ struct ext4_inode_info *ei = EXT4_I(inode);
+ unsigned int md_needed;
+ ext4_lblk_t save_last_lblock;
+ int save_len;
+
+ /*
+	 * recalculate the number of metadata blocks to reserve
+	 * in order to allocate nrblocks;
+	 * the worst case is one extent per block
+ */
+repeat:
+ spin_lock(&ei->i_block_reservation_lock);
+ /*
+ * ext4_calc_metadata_amount() has side effects, which we have
+	 * to be prepared to undo if we fail to claim space.
+ */
+ save_len = ei->i_da_metadata_calc_len;
+ save_last_lblock = ei->i_da_metadata_calc_last_lblock;
+ md_needed = EXT4_NUM_B2C(sbi,
+ ext4_calc_metadata_amount(inode, lblock));
+ trace_ext4_da_reserve_space(inode, md_needed);
+
+ /*
+ * We do still charge estimated metadata to the sb though;
+ * we cannot afford to run out of free blocks.
+ */
+ if (ext4_claim_free_clusters(sbi, md_needed, 0)) {
+ ei->i_da_metadata_calc_len = save_len;
+ ei->i_da_metadata_calc_last_lblock = save_last_lblock;
+ spin_unlock(&ei->i_block_reservation_lock);
+ if (ext4_should_retry_alloc(inode->i_sb, &retries)) {
+ cond_resched();
+ goto repeat;
+ }
+ return -ENOSPC;
+ }
+ ei->i_reserved_meta_blocks += md_needed;
+ spin_unlock(&ei->i_block_reservation_lock);
+
+ return 0; /* success */
+}
+
+/*
* Reserve a single cluster located at lblock
*/
static int ext4_da_reserve_space(struct inode *inode, ext4_lblk_t lblock)
@@ -1263,7 +1402,7 @@
ei->i_da_metadata_calc_last_lblock = save_last_lblock;
spin_unlock(&ei->i_block_reservation_lock);
if (ext4_should_retry_alloc(inode->i_sb, &retries)) {
- yield();
+ cond_resched();
goto repeat;
}
dquot_release_reservation_block(inode, EXT4_C2B(sbi, 1));
@@ -1768,6 +1907,11 @@
struct extent_status es;
int retval;
sector_t invalid_block = ~((sector_t) 0xffff);
+#ifdef ES_AGGRESSIVE_TEST
+ struct ext4_map_blocks orig_map;
+
+ memcpy(&orig_map, map, sizeof(*map));
+#endif
if (invalid_block < ext4_blocks_count(EXT4_SB(inode->i_sb)->s_es))
invalid_block = ~0;
@@ -1809,6 +1953,9 @@
else
BUG_ON(1);
+#ifdef ES_AGGRESSIVE_TEST
+ ext4_map_blocks_es_recheck(NULL, inode, map, &orig_map, 0);
+#endif
return retval;
}
@@ -1843,8 +1990,11 @@
* XXX: __block_prepare_write() unmaps passed block,
* is it OK?
*/
- /* If the block was allocated from previously allocated cluster,
- * then we dont need to reserve it again. */
+ /*
+	 * If the block was allocated from a previously allocated cluster,
+ * then we don't need to reserve it again. However we still need
+ * to reserve metadata for every block we're going to write.
+ */
if (!(map->m_flags & EXT4_MAP_FROM_CLUSTER)) {
ret = ext4_da_reserve_space(inode, iblock);
if (ret) {
@@ -1852,6 +2002,13 @@
retval = ret;
goto out_unlock;
}
+ } else {
+ ret = ext4_da_reserve_metadata(inode, iblock);
+ if (ret) {
+ /* not enough space to reserve */
+ retval = ret;
+ goto out_unlock;
+ }
}
ret = ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
@@ -1873,6 +2030,15 @@
int ret;
unsigned long long status;
+#ifdef ES_AGGRESSIVE_TEST
+ if (retval != map->m_len) {
+			printk("ES len assertion failed for inode: %lu "
+ "retval %d != map->m_len %d "
+ "in %s (lookup)\n", inode->i_ino, retval,
+ map->m_len, __func__);
+ }
+#endif
+
status = map->m_flags & EXT4_MAP_UNWRITTEN ?
EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN;
ret = ext4_es_insert_extent(inode, map->m_lblk, map->m_len,
@@ -2908,8 +3074,8 @@
trace_ext4_releasepage(page);
- WARN_ON(PageChecked(page));
- if (!page_has_buffers(page))
+ /* Page has dirty journalled data -> cannot release */
+ if (PageChecked(page))
return 0;
if (journal)
return jbd2_journal_try_to_free_buffers(journal, page, wait);
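
The new ext4_da_reserve_metadata() above snapshots the estimator's cached state under the reservation lock and restores it when the claim fails, because ext4_calc_metadata_amount() has side effects. A generic userspace sketch of that snapshot-and-rollback pattern; the structure and names are invented for illustration, not ext4 code:

#include <pthread.h>
#include <stdio.h>

struct calc_cache {
        pthread_mutex_t lock;
        unsigned int calc_len;           /* cached estimator state */
        unsigned long calc_last_block;
        unsigned long reserved;
};

static unsigned int estimate_needed(struct calc_cache *c, unsigned long block)
{
        /* side effect: updates the cached state, like the real estimator */
        c->calc_len++;
        c->calc_last_block = block;
        return 1;
}

static int reserve_metadata(struct calc_cache *c, unsigned long block, int claim_ok)
{
        pthread_mutex_lock(&c->lock);
        unsigned int save_len = c->calc_len;           /* snapshot */
        unsigned long save_last = c->calc_last_block;
        unsigned int needed = estimate_needed(c, block);

        if (!claim_ok) {                               /* claiming space failed */
                c->calc_len = save_len;                /* roll back side effects */
                c->calc_last_block = save_last;
                pthread_mutex_unlock(&c->lock);
                return -1;
        }
        c->reserved += needed;
        pthread_mutex_unlock(&c->lock);
        return 0;
}

int main(void)
{
        struct calc_cache c = { PTHREAD_MUTEX_INITIALIZER, 0, 0, 0 };

        reserve_metadata(&c, 42, 0);   /* fails: cached state rolled back */
        reserve_metadata(&c, 42, 1);   /* succeeds */
        printf("reserved=%lu calc_len=%u\n", c.reserved, c.calc_len);
        return 0;
}
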
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 7bb713a..ee6614bd 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2804,8 +2804,8 @@
if (sbi->s_log_groups_per_flex) {
ext4_group_t flex_group = ext4_flex_group(sbi,
ac->ac_b_ex.fe_group);
- atomic_sub(ac->ac_b_ex.fe_len,
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_sub(ac->ac_b_ex.fe_len,
+ &sbi->s_flex_groups[flex_group].free_clusters);
}
err = ext4_handle_dirty_metadata(handle, NULL, bitmap_bh);
@@ -3692,11 +3692,7 @@
if (free < needed && busy) {
busy = 0;
ext4_unlock_group(sb, group);
- /*
- * Yield the CPU here so that we don't get soft lockup
- * in non preempt case.
- */
- yield();
+ cond_resched();
goto repeat;
}
@@ -4246,7 +4242,7 @@
ext4_claim_free_clusters(sbi, ar->len, ar->flags)) {
/* let others to free the space */
- yield();
+ cond_resched();
ar->len = ar->len >> 1;
}
if (!ar->len) {
@@ -4464,7 +4460,6 @@
struct buffer_head *bitmap_bh = NULL;
struct super_block *sb = inode->i_sb;
struct ext4_group_desc *gdp;
- unsigned long freed = 0;
unsigned int overflow;
ext4_grpblk_t bit;
struct buffer_head *gd_bh;
@@ -4666,14 +4661,12 @@
if (sbi->s_log_groups_per_flex) {
ext4_group_t flex_group = ext4_flex_group(sbi, block_group);
- atomic_add(count_clusters,
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_add(count_clusters,
+ &sbi->s_flex_groups[flex_group].free_clusters);
}
ext4_mb_unload_buddy(&e4b);
- freed += count;
-
if (!(flags & EXT4_FREE_BLOCKS_NO_QUOT_UPDATE))
dquot_free_block(inode, EXT4_C2B(sbi, count_clusters));
@@ -4811,8 +4804,8 @@
if (sbi->s_log_groups_per_flex) {
ext4_group_t flex_group = ext4_flex_group(sbi, block_group);
- atomic_add(EXT4_NUM_B2C(sbi, blocks_freed),
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_add(EXT4_NUM_B2C(sbi, blocks_freed),
+ &sbi->s_flex_groups[flex_group].free_clusters);
}
ext4_mb_unload_buddy(&e4b);
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index 4e81d47..33e1c08 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -32,16 +32,18 @@
*/
static inline int
get_ext_path(struct inode *inode, ext4_lblk_t lblock,
- struct ext4_ext_path **path)
+ struct ext4_ext_path **orig_path)
{
int ret = 0;
+ struct ext4_ext_path *path;
- *path = ext4_ext_find_extent(inode, lblock, *path);
- if (IS_ERR(*path)) {
- ret = PTR_ERR(*path);
- *path = NULL;
- } else if ((*path)[ext_depth(inode)].p_ext == NULL)
+ path = ext4_ext_find_extent(inode, lblock, *orig_path);
+ if (IS_ERR(path))
+ ret = PTR_ERR(path);
+ else if (path[ext_depth(inode)].p_ext == NULL)
ret = -ENODATA;
+ else
+ *orig_path = path;
return ret;
}
@@ -611,24 +613,25 @@
{
struct ext4_ext_path *path = NULL;
struct ext4_extent *ext;
+ int ret = 0;
ext4_lblk_t last = from + count;
while (from < last) {
*err = get_ext_path(inode, from, &path);
if (*err)
- return 0;
+ goto out;
ext = path[ext_depth(inode)].p_ext;
- if (!ext) {
- ext4_ext_drop_refs(path);
- return 0;
- }
- if (uninit != ext4_ext_is_uninitialized(ext)) {
- ext4_ext_drop_refs(path);
- return 0;
- }
+ if (uninit != ext4_ext_is_uninitialized(ext))
+ goto out;
from += ext4_ext_get_actual_len(ext);
ext4_ext_drop_refs(path);
}
- return 1;
+ ret = 1;
+out:
+ if (path) {
+ ext4_ext_drop_refs(path);
+ kfree(path);
+ }
+ return ret;
}
/**
@@ -666,6 +669,14 @@
int replaced_count = 0;
int dext_alen;
+ *err = ext4_es_remove_extent(orig_inode, from, count);
+ if (*err)
+ goto out;
+
+ *err = ext4_es_remove_extent(donor_inode, from, count);
+ if (*err)
+ goto out;
+
/* Get the original extent for the block "orig_off" */
*err = get_ext_path(orig_inode, orig_off, &orig_path);
if (*err)
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 809b310..047a6de 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -50,11 +50,21 @@
kmem_cache_destroy(io_page_cachep);
}
-void ext4_ioend_wait(struct inode *inode)
+/*
+ * This function is called by ext4_evict_inode() to make sure there is
+ * no more pending I/O completion work left to do.
+ */
+void ext4_ioend_shutdown(struct inode *inode)
{
wait_queue_head_t *wq = ext4_ioend_wq(inode);
wait_event(*wq, (atomic_read(&EXT4_I(inode)->i_ioend_count) == 0));
+ /*
+ * We need to make sure the work structure is finished being
+ * used before we let the inode get destroyed.
+ */
+ if (work_pending(&EXT4_I(inode)->i_unwritten_work))
+ cancel_work_sync(&EXT4_I(inode)->i_unwritten_work);
}
static void put_io_page(struct ext4_io_page *io_page)
diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
index b2c8ee5..c169477 100644
--- a/fs/ext4/resize.c
+++ b/fs/ext4/resize.c
@@ -1360,8 +1360,8 @@
sbi->s_log_groups_per_flex) {
ext4_group_t flex_group;
flex_group = ext4_flex_group(sbi, group_data[0].group);
- atomic_add(EXT4_NUM_B2C(sbi, free_blocks),
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_add(EXT4_NUM_B2C(sbi, free_blocks),
+ &sbi->s_flex_groups[flex_group].free_clusters);
atomic_add(EXT4_INODES_PER_GROUP(sb) * flex_gd->count,
&sbi->s_flex_groups[flex_group].free_inodes);
}
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index b3818b4..5d6d5357 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -1927,8 +1927,8 @@
flex_group = ext4_flex_group(sbi, i);
atomic_add(ext4_free_inodes_count(sb, gdp),
&sbi->s_flex_groups[flex_group].free_inodes);
- atomic_add(ext4_free_group_clusters(sb, gdp),
- &sbi->s_flex_groups[flex_group].free_clusters);
+ atomic64_add(ext4_free_group_clusters(sb, gdp),
+ &sbi->s_flex_groups[flex_group].free_clusters);
atomic_add(ext4_used_dirs_count(sb, gdp),
&sbi->s_flex_groups[flex_group].used_dirs);
}
diff --git a/fs/internal.h b/fs/internal.h
index 507141f..4be7823 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -125,3 +125,8 @@
* dcache.c
*/
extern struct dentry *__d_alloc(struct super_block *, const struct qstr *);
+
+/*
+ * read_write.c
+ */
+extern ssize_t __kernel_write(struct file *, const char *, size_t, loff_t *);
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index d6ee5ae..325bc01 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -1065,9 +1065,12 @@
void jbd2_journal_set_triggers(struct buffer_head *bh,
struct jbd2_buffer_trigger_type *type)
{
- struct journal_head *jh = bh2jh(bh);
+ struct journal_head *jh = jbd2_journal_grab_journal_head(bh);
+ if (WARN_ON(!jh))
+ return;
jh->b_triggers = type;
+ jbd2_journal_put_journal_head(jh);
}
void jbd2_buffer_frozen_trigger(struct journal_head *jh, void *mapped_data,
@@ -1119,17 +1122,18 @@
{
transaction_t *transaction = handle->h_transaction;
journal_t *journal = transaction->t_journal;
- struct journal_head *jh = bh2jh(bh);
+ struct journal_head *jh;
int ret = 0;
- jbd_debug(5, "journal_head %p\n", jh);
- JBUFFER_TRACE(jh, "entry");
if (is_handle_aborted(handle))
goto out;
- if (!buffer_jbd(bh)) {
+ jh = jbd2_journal_grab_journal_head(bh);
+ if (!jh) {
ret = -EUCLEAN;
goto out;
}
+ jbd_debug(5, "journal_head %p\n", jh);
+ JBUFFER_TRACE(jh, "entry");
jbd_lock_bh_state(bh);
@@ -1220,6 +1224,7 @@
spin_unlock(&journal->j_list_lock);
out_unlock_bh:
jbd_unlock_bh_state(bh);
+ jbd2_journal_put_journal_head(jh);
out:
JBUFFER_TRACE(jh, "exit");
WARN_ON(ret); /* All errors are bugs, so dump the stack */
diff --git a/fs/nfs/blocklayout/blocklayoutdm.c b/fs/nfs/blocklayout/blocklayoutdm.c
index 737d839..6fc7b5c 100644
--- a/fs/nfs/blocklayout/blocklayoutdm.c
+++ b/fs/nfs/blocklayout/blocklayoutdm.c
@@ -55,7 +55,8 @@
bl_pipe_msg.bl_wq = &nn->bl_wq;
memset(msg, 0, sizeof(*msg));
- msg->data = kzalloc(1 + sizeof(bl_umount_request), GFP_NOFS);
+ msg->len = sizeof(bl_msg) + bl_msg.totallen;
+ msg->data = kzalloc(msg->len, GFP_NOFS);
if (!msg->data)
goto out;
@@ -66,7 +67,6 @@
memcpy(msg->data, &bl_msg, sizeof(bl_msg));
dataptr = (uint8_t *) msg->data;
memcpy(&dataptr[sizeof(bl_msg)], &bl_umount_request, sizeof(bl_umount_request));
- msg->len = sizeof(bl_msg) + bl_msg.totallen;
add_wait_queue(&nn->bl_wq, &wq);
if (rpc_queue_upcall(nn->bl_device_pipe, msg) < 0) {
diff --git a/fs/nfs/idmap.c b/fs/nfs/idmap.c
index dc0f98d..c516da5 100644
--- a/fs/nfs/idmap.c
+++ b/fs/nfs/idmap.c
@@ -726,9 +726,9 @@
return ret;
}
-static int nfs_idmap_instantiate(struct key *key, struct key *authkey, char *data)
+static int nfs_idmap_instantiate(struct key *key, struct key *authkey, char *data, size_t datalen)
{
- return key_instantiate_and_link(key, data, strlen(data) + 1,
+ return key_instantiate_and_link(key, data, datalen,
id_resolver_cache->thread_keyring,
authkey);
}
@@ -738,6 +738,7 @@
struct key *key, struct key *authkey)
{
char id_str[NFS_UINT_MAXLEN];
+ size_t len;
int ret = -ENOKEY;
/* ret = -ENOKEY */
@@ -747,13 +748,15 @@
case IDMAP_CONV_NAMETOID:
if (strcmp(upcall->im_name, im->im_name) != 0)
break;
- sprintf(id_str, "%d", im->im_id);
- ret = nfs_idmap_instantiate(key, authkey, id_str);
+ /* Note: here we store the NUL terminator too */
+ len = sprintf(id_str, "%d", im->im_id) + 1;
+ ret = nfs_idmap_instantiate(key, authkey, id_str, len);
break;
case IDMAP_CONV_IDTONAME:
if (upcall->im_id != im->im_id)
break;
- ret = nfs_idmap_instantiate(key, authkey, im->im_name);
+ len = strlen(im->im_name);
+ ret = nfs_idmap_instantiate(key, authkey, im->im_name, len);
break;
default:
ret = -EINVAL;
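
The idmap hunk above passes an explicit payload length to nfs_idmap_instantiate(): the numeric id keeps its NUL terminator, the name does not. A quick userspace illustration of the two lengths; the sample name is made up:

#include <stdio.h>
#include <string.h>

int main(void)
{
        char id_str[16];
        /* sprintf() returns the character count excluding the NUL, so +1 stores it */
        size_t id_len = sprintf(id_str, "%d", 1000) + 1;    /* "1000\0" -> 5 */
        const char *name = "alice@example.com";
        size_t name_len = strlen(name);                     /* no NUL -> 17 */

        printf("id payload length:   %zu\n", id_len);
        printf("name payload length: %zu\n", name_len);
        return 0;
}
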
diff --git a/fs/nfs/nfs4filelayout.c b/fs/nfs/nfs4filelayout.c
index 49eeb04..4fb234d 100644
--- a/fs/nfs/nfs4filelayout.c
+++ b/fs/nfs/nfs4filelayout.c
@@ -129,7 +129,6 @@
{
if (!test_and_clear_bit(NFS_LAYOUT_RETURN, &lo->plh_flags))
return;
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(inode)->flags);
pnfs_return_layout(inode);
}
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index b2671cb..26431cf 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -2632,7 +2632,7 @@
int status;
if (pnfs_ld_layoutret_on_setattr(inode))
- pnfs_return_layout(inode);
+ pnfs_commit_and_return_layout(inode);
nfs_fattr_init(fattr);
@@ -6416,22 +6416,8 @@
static void nfs4_layoutcommit_release(void *calldata)
{
struct nfs4_layoutcommit_data *data = calldata;
- struct pnfs_layout_segment *lseg, *tmp;
- unsigned long *bitlock = &NFS_I(data->args.inode)->flags;
pnfs_cleanup_layoutcommit(data);
- /* Matched by references in pnfs_set_layoutcommit */
- list_for_each_entry_safe(lseg, tmp, &data->lseg_list, pls_lc_list) {
- list_del_init(&lseg->pls_lc_list);
- if (test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT,
- &lseg->pls_flags))
- pnfs_put_lseg(lseg);
- }
-
- clear_bit_unlock(NFS_INO_LAYOUTCOMMITTING, bitlock);
- smp_mb__after_clear_bit();
- wake_up_bit(bitlock, NFS_INO_LAYOUTCOMMITTING);
-
put_rpccred(data->cred);
kfree(data);
}
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 48ac5aa..4bdffe0 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -417,6 +417,16 @@
lo_seg_intersecting(lseg_range, recall_range);
}
+static bool pnfs_lseg_dec_and_remove_zero(struct pnfs_layout_segment *lseg,
+ struct list_head *tmp_list)
+{
+ if (!atomic_dec_and_test(&lseg->pls_refcount))
+ return false;
+ pnfs_layout_remove_lseg(lseg->pls_layout, lseg);
+ list_add(&lseg->pls_list, tmp_list);
+ return true;
+}
+
/* Returns 1 if lseg is removed from list, 0 otherwise */
static int mark_lseg_invalid(struct pnfs_layout_segment *lseg,
struct list_head *tmp_list)
@@ -430,11 +440,8 @@
*/
dprintk("%s: lseg %p ref %d\n", __func__, lseg,
atomic_read(&lseg->pls_refcount));
- if (atomic_dec_and_test(&lseg->pls_refcount)) {
- pnfs_layout_remove_lseg(lseg->pls_layout, lseg);
- list_add(&lseg->pls_list, tmp_list);
+ if (pnfs_lseg_dec_and_remove_zero(lseg, tmp_list))
rv = 1;
- }
}
return rv;
}
@@ -777,6 +784,21 @@
return lseg;
}
+static void pnfs_clear_layoutcommit(struct inode *inode,
+ struct list_head *head)
+{
+ struct nfs_inode *nfsi = NFS_I(inode);
+ struct pnfs_layout_segment *lseg, *tmp;
+
+ if (!test_and_clear_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags))
+ return;
+ list_for_each_entry_safe(lseg, tmp, &nfsi->layout->plh_segs, pls_list) {
+ if (!test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
+ continue;
+ pnfs_lseg_dec_and_remove_zero(lseg, head);
+ }
+}
+
/*
* Initiates a LAYOUTRETURN(FILE), and removes the pnfs_layout_hdr
* when the layout segment list is empty.
@@ -808,6 +830,7 @@
/* Reference matched in nfs4_layoutreturn_release */
pnfs_get_layout_hdr(lo);
empty = list_empty(&lo->plh_segs);
+ pnfs_clear_layoutcommit(ino, &tmp_list);
pnfs_mark_matching_lsegs_invalid(lo, &tmp_list, NULL);
/* Don't send a LAYOUTRETURN if list was initially empty */
if (empty) {
@@ -820,8 +843,6 @@
spin_unlock(&ino->i_lock);
pnfs_free_lseg_list(&tmp_list);
- WARN_ON(test_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags));
-
lrp = kzalloc(sizeof(*lrp), GFP_KERNEL);
if (unlikely(lrp == NULL)) {
status = -ENOMEM;
@@ -845,6 +866,33 @@
}
EXPORT_SYMBOL_GPL(_pnfs_return_layout);
+int
+pnfs_commit_and_return_layout(struct inode *inode)
+{
+ struct pnfs_layout_hdr *lo;
+ int ret;
+
+ spin_lock(&inode->i_lock);
+ lo = NFS_I(inode)->layout;
+ if (lo == NULL) {
+ spin_unlock(&inode->i_lock);
+ return 0;
+ }
+ pnfs_get_layout_hdr(lo);
+ /* Block new layoutgets and read/write to ds */
+ lo->plh_block_lgets++;
+ spin_unlock(&inode->i_lock);
+ filemap_fdatawait(inode->i_mapping);
+ ret = pnfs_layoutcommit_inode(inode, true);
+ if (ret == 0)
+ ret = _pnfs_return_layout(inode);
+ spin_lock(&inode->i_lock);
+ lo->plh_block_lgets--;
+ spin_unlock(&inode->i_lock);
+ pnfs_put_layout_hdr(lo);
+ return ret;
+}
+
bool pnfs_roc(struct inode *ino)
{
struct pnfs_layout_hdr *lo;
@@ -1458,7 +1506,6 @@
dprintk("pnfs write error = %d\n", hdr->pnfs_error);
if (NFS_SERVER(hdr->inode)->pnfs_curr_ld->flags &
PNFS_LAYOUTRET_ON_ERROR) {
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(hdr->inode)->flags);
pnfs_return_layout(hdr->inode);
}
if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags))
@@ -1613,7 +1660,6 @@
dprintk("pnfs read error = %d\n", hdr->pnfs_error);
if (NFS_SERVER(hdr->inode)->pnfs_curr_ld->flags &
PNFS_LAYOUTRET_ON_ERROR) {
- clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(hdr->inode)->flags);
pnfs_return_layout(hdr->inode);
}
if (!test_and_set_bit(NFS_IOHDR_REDO, &hdr->flags))
@@ -1746,11 +1792,27 @@
list_for_each_entry(lseg, &NFS_I(inode)->layout->plh_segs, pls_list) {
if (lseg->pls_range.iomode == IOMODE_RW &&
- test_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
+ test_and_clear_bit(NFS_LSEG_LAYOUTCOMMIT, &lseg->pls_flags))
list_add(&lseg->pls_lc_list, listp);
}
}
+static void pnfs_list_write_lseg_done(struct inode *inode, struct list_head *listp)
+{
+ struct pnfs_layout_segment *lseg, *tmp;
+ unsigned long *bitlock = &NFS_I(inode)->flags;
+
+ /* Matched by references in pnfs_set_layoutcommit */
+ list_for_each_entry_safe(lseg, tmp, listp, pls_lc_list) {
+ list_del_init(&lseg->pls_lc_list);
+ pnfs_put_lseg(lseg);
+ }
+
+ clear_bit_unlock(NFS_INO_LAYOUTCOMMITTING, bitlock);
+ smp_mb__after_clear_bit();
+ wake_up_bit(bitlock, NFS_INO_LAYOUTCOMMITTING);
+}
+
void pnfs_set_lo_fail(struct pnfs_layout_segment *lseg)
{
pnfs_layout_io_set_failed(lseg->pls_layout, lseg->pls_range.iomode);
@@ -1795,6 +1857,7 @@
if (nfss->pnfs_curr_ld->cleanup_layoutcommit)
nfss->pnfs_curr_ld->cleanup_layoutcommit(data);
+ pnfs_list_write_lseg_done(data->args.inode, &data->lseg_list);
}
/*
diff --git a/fs/nfs/pnfs.h b/fs/nfs/pnfs.h
index 94ba804..f5f8a47 100644
--- a/fs/nfs/pnfs.h
+++ b/fs/nfs/pnfs.h
@@ -219,6 +219,7 @@
void pnfs_cleanup_layoutcommit(struct nfs4_layoutcommit_data *data);
int pnfs_layoutcommit_inode(struct inode *inode, bool sync);
int _pnfs_return_layout(struct inode *);
+int pnfs_commit_and_return_layout(struct inode *);
void pnfs_ld_write_done(struct nfs_write_data *);
void pnfs_ld_read_done(struct nfs_read_data *);
struct pnfs_layout_segment *pnfs_update_layout(struct inode *ino,
@@ -407,6 +408,11 @@
return 0;
}
+static inline int pnfs_commit_and_return_layout(struct inode *inode)
+{
+ return 0;
+}
+
static inline bool
pnfs_ld_layoutret_on_setattr(struct inode *inode)
{
diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index 62c1ee1..ca05f6d 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -102,7 +102,8 @@
{
if (rp->c_type == RC_REPLBUFF)
kfree(rp->c_replvec.iov_base);
- hlist_del(&rp->c_hash);
+ if (!hlist_unhashed(&rp->c_hash))
+ hlist_del(&rp->c_hash);
list_del(&rp->c_lru);
--num_drc_entries;
kmem_cache_free(drc_slab, rp);
@@ -118,6 +119,10 @@
int nfsd_reply_cache_init(void)
{
+ INIT_LIST_HEAD(&lru_head);
+ max_drc_entries = nfsd_cache_size_limit();
+ num_drc_entries = 0;
+
register_shrinker(&nfsd_reply_cache_shrinker);
drc_slab = kmem_cache_create("nfsd_drc", sizeof(struct svc_cacherep),
0, 0, NULL);
@@ -128,10 +133,6 @@
if (!cache_hash)
goto out_nomem;
- INIT_LIST_HEAD(&lru_head);
- max_drc_entries = nfsd_cache_size_limit();
- num_drc_entries = 0;
-
return 0;
out_nomem:
printk(KERN_ERR "nfsd: failed to allocate reply cache\n");
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index 2a7eb53..2b2e2396 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -1013,6 +1013,7 @@
int host_err;
int stable = *stablep;
int use_wgather;
+ loff_t pos = offset;
dentry = file->f_path.dentry;
inode = dentry->d_inode;
@@ -1025,7 +1026,7 @@
/* Write the data. */
oldfs = get_fs(); set_fs(KERNEL_DS);
- host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &offset);
+ host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &pos);
set_fs(oldfs);
if (host_err < 0)
goto out_nfserr;
diff --git a/fs/proc/inode.c b/fs/proc/inode.c
index a86aebc..869116c 100644
--- a/fs/proc/inode.c
+++ b/fs/proc/inode.c
@@ -446,9 +446,10 @@
struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
{
- struct inode *inode = iget_locked(sb, de->low_ino);
+ struct inode *inode = new_inode_pseudo(sb);
- if (inode && (inode->i_state & I_NEW)) {
+ if (inode) {
+ inode->i_ino = de->low_ino;
inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
PROC_I(inode)->pde = de;
@@ -476,7 +477,6 @@
inode->i_fop = de->proc_fops;
}
}
- unlock_new_inode(inode);
} else
pde_put(de);
return inode;
diff --git a/fs/read_write.c b/fs/read_write.c
index a698eff..e6ddc8d 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -17,6 +17,7 @@
#include <linux/splice.h>
#include <linux/compat.h>
#include "read_write.h"
+#include "internal.h"
#include <asm/uaccess.h>
#include <asm/unistd.h>
@@ -417,6 +418,33 @@
EXPORT_SYMBOL(do_sync_write);
+ssize_t __kernel_write(struct file *file, const char *buf, size_t count, loff_t *pos)
+{
+ mm_segment_t old_fs;
+ const char __user *p;
+ ssize_t ret;
+
+ if (!file->f_op || (!file->f_op->write && !file->f_op->aio_write))
+ return -EINVAL;
+
+ old_fs = get_fs();
+ set_fs(get_ds());
+ p = (__force const char __user *)buf;
+ if (count > MAX_RW_COUNT)
+ count = MAX_RW_COUNT;
+ if (file->f_op->write)
+ ret = file->f_op->write(file, p, count, pos);
+ else
+ ret = do_sync_write(file, p, count, pos);
+ set_fs(old_fs);
+ if (ret > 0) {
+ fsnotify_modify(file);
+ add_wchar(current, ret);
+ }
+ inc_syscw(current);
+ return ret;
+}
+
ssize_t vfs_write(struct file *file, const char __user *buf, size_t count, loff_t *pos)
{
ssize_t ret;
diff --git a/fs/splice.c b/fs/splice.c
index 718bd00..29e394e 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -31,6 +31,7 @@
#include <linux/security.h>
#include <linux/gfp.h>
#include <linux/socket.h>
+#include "internal.h"
/*
* Attempt to steal a page from a pipe buffer. This should perhaps go into
@@ -1048,9 +1049,10 @@
{
int ret;
void *data;
+ loff_t tmp = sd->pos;
data = buf->ops->map(pipe, buf, 0);
- ret = kernel_write(sd->u.file, data + buf->offset, sd->len, sd->pos);
+ ret = __kernel_write(sd->u.file, data + buf->offset, sd->len, &tmp);
buf->ops->unmap(pipe, buf, data);
return ret;
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 4e8f0df..8459b5d 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1334,6 +1334,12 @@
int size;
int i;
+ /*
+ * Make sure we capture only current IO errors rather than stale errors
+ * left over from previous use of the buffer (e.g. failed readahead).
+ */
+ bp->b_error = 0;
+
if (bp->b_flags & XBF_WRITE) {
if (bp->b_flags & XBF_SYNCIO)
rw = WRITE_SYNC;
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index 912d83d..5a30dd8 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -325,7 +325,7 @@
* rather than falling short due to things like stripe unit/width alignment of
* real extents.
*/
-STATIC int
+STATIC xfs_fsblock_t
xfs_iomap_eof_prealloc_initial_size(
struct xfs_mount *mp,
struct xfs_inode *ip,
@@ -413,7 +413,7 @@
* have a large file on a small filesystem and the above
* lowspace thresholds are smaller than MAXEXTLEN.
*/
- while (alloc_blocks >= freesp)
+ while (alloc_blocks && alloc_blocks >= freesp)
alloc_blocks >>= 4;
}
diff --git a/include/drm/drm_pciids.h b/include/drm/drm_pciids.h
index a386b0b..918e8fe 100644
--- a/include/drm/drm_pciids.h
+++ b/include/drm/drm_pciids.h
@@ -581,7 +581,11 @@
{0x1002, 0x9908, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9909, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x990A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
- {0x1002, 0x990F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x990F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9910, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9913, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9917, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
@@ -592,6 +596,13 @@
{0x1002, 0x9992, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9993, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x9994, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9995, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9996, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9997, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9998, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x9999, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x999A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x999B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x99A0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x99A2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x99A4, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ARUBA|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
diff --git a/include/linux/edac.h b/include/linux/edac.h
index 4fd4999..0b76327 100644
--- a/include/linux/edac.h
+++ b/include/linux/edac.h
@@ -561,7 +561,6 @@
u32 ue_count; /* Uncorrectable Errors for this csrow */
u32 ce_count; /* Correctable Errors for this csrow */
- u32 nr_pages; /* combined pages count of all channels */
struct mem_ctl_info *mci; /* the parent */
@@ -676,11 +675,11 @@
* sees memory sticks ("dimms"), and the ones that sees memory ranks.
* All old memory controllers enumerate memories per rank, but most
* of the recent drivers enumerate memories per DIMM, instead.
- * When the memory controller is per rank, mem_is_per_rank is true.
+ * When the memory controller is per rank, csbased is true.
*/
unsigned n_layers;
struct edac_mc_layer *layers;
- bool mem_is_per_rank;
+ bool csbased;
/*
* DIMM info. Will eventually remove the entire csrows_info some day
@@ -741,8 +740,6 @@
u32 fake_inject_ue;
u16 fake_inject_count;
#endif
- __u8 csbased : 1, /* csrow-based memory controller */
- __resv : 7;
};
#endif
diff --git a/include/linux/hash.h b/include/linux/hash.h
index 61c97ae..f09a0ae 100644
--- a/include/linux/hash.h
+++ b/include/linux/hash.h
@@ -15,6 +15,7 @@
*/
#include <asm/types.h>
+#include <linux/compiler.h>
/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */
#define GOLDEN_RATIO_PRIME_32 0x9e370001UL
@@ -31,7 +32,7 @@
#error Wordsize not 32 or 64
#endif
-static inline u64 hash_64(u64 val, unsigned int bits)
+static __always_inline u64 hash_64(u64 val, unsigned int bits)
{
u64 hash = val;
diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index f5dbce5..6601702 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -37,7 +37,7 @@
#ifdef CONFIG_IRQ_WORK
bool irq_work_needs_cpu(void);
#else
-static bool irq_work_needs_cpu(void) { return false; }
+static inline bool irq_work_needs_cpu(void) { return false; }
#endif
#endif /* _LINUX_IRQ_WORK_H */
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 80d3687..79fdd80 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -390,7 +390,6 @@
unsigned long int_sqrt(unsigned long);
extern void bust_spinlocks(int yes);
-extern void wake_up_klogd(void);
extern int oops_in_progress; /* If set, an oops, panic(), BUG() or die() is in progress */
extern int panic_timeout;
extern int panic_on_oops;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ede2749..c74092e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -527,7 +527,7 @@
return test_bit(ZONE_OOM_LOCKED, &zone->flags);
}
-static inline unsigned zone_end_pfn(const struct zone *zone)
+static inline unsigned long zone_end_pfn(const struct zone *zone)
{
return zone->zone_start_pfn + zone->spanned_pages;
}
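
zone_end_pfn() now returns unsigned long because a page frame number can exceed 32 bits on large 64-bit systems (beyond 16 TB of 4 KiB pages). A userspace sketch of the truncation the old 'unsigned' return type caused; the values are made up:

#include <stdio.h>

static unsigned end_pfn_buggy(unsigned long start, unsigned long spanned)
{
        return start + spanned;            /* result truncated to 32 bits */
}

static unsigned long end_pfn_fixed(unsigned long start, unsigned long spanned)
{
        return start + spanned;
}

int main(void)
{
        unsigned long start = 0xFFFFF000UL, spanned = 0x2000UL;

        printf("buggy: 0x%x\n", end_pfn_buggy(start, spanned));
        printf("fixed: 0x%lx\n", end_pfn_fixed(start, spanned));
        return 0;
}
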
diff --git a/include/linux/mtd/nand.h b/include/linux/mtd/nand.h
index 7ccb3c5..ef52d9c 100644
--- a/include/linux/mtd/nand.h
+++ b/include/linux/mtd/nand.h
@@ -187,6 +187,13 @@
* This happens with the Renesas AG-AND chips, possibly others.
*/
#define BBT_AUTO_REFRESH 0x00000080
+/*
+ * Chip requires ready check on read (for auto-incremented sequential read).
+ * True only for small page devices; large page devices do not support
+ * autoincrement.
+ */
+#define NAND_NEED_READRDY 0x00000100
+
/* Chip does not allow subpage writes */
#define NAND_NO_SUBPAGE_WRITE 0x00000200
diff --git a/include/linux/mxsfb.h b/include/linux/mxsfb.h
index f14943d..f80af86 100644
--- a/include/linux/mxsfb.h
+++ b/include/linux/mxsfb.h
@@ -24,8 +24,8 @@
#define STMLCDIF_18BIT 2 /** pixel data bus to the display is of 18 bit width */
#define STMLCDIF_24BIT 3 /** pixel data bus to the display is of 24 bit width */
-#define FB_SYNC_DATA_ENABLE_HIGH_ACT (1 << 6)
-#define FB_SYNC_DOTCLK_FAILING_ACT (1 << 7) /* failing/negtive edge sampling */
+#define MXSFB_SYNC_DATA_ENABLE_HIGH_ACT (1 << 6)
+#define MXSFB_SYNC_DOTCLK_FAILING_ACT	(1 << 7) /* falling/negative edge sampling */
struct mxsfb_platform_data {
struct fb_videomode *mode_list;
@@ -44,6 +44,9 @@
* allocated. If specified,fb_size must also be specified.
* fb_phys must be unused by Linux.
*/
+ u32 sync; /* sync mask, contains MXSFB specifics not
+ * carried in fb_info->var.sync
+ */
};
#endif /* __LINUX_MXSFB_H */
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index c25ccca..4fa3b0b 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -137,6 +137,34 @@
NVME_LBAF_RP_DEGRADED = 3,
};
+struct nvme_smart_log {
+ __u8 critical_warning;
+ __u8 temperature[2];
+ __u8 avail_spare;
+ __u8 spare_thresh;
+ __u8 percent_used;
+ __u8 rsvd6[26];
+ __u8 data_units_read[16];
+ __u8 data_units_written[16];
+ __u8 host_reads[16];
+ __u8 host_writes[16];
+ __u8 ctrl_busy_time[16];
+ __u8 power_cycles[16];
+ __u8 power_on_hours[16];
+ __u8 unsafe_shutdowns[16];
+ __u8 media_errors[16];
+ __u8 num_err_log_entries[16];
+ __u8 rsvd192[320];
+};
+
+enum {
+ NVME_SMART_CRIT_SPARE = 1 << 0,
+ NVME_SMART_CRIT_TEMPERATURE = 1 << 1,
+ NVME_SMART_CRIT_RELIABILITY = 1 << 2,
+ NVME_SMART_CRIT_MEDIA = 1 << 3,
+ NVME_SMART_CRIT_VOLATILE_MEMORY = 1 << 4,
+};
+
struct nvme_lba_range_type {
__u8 type;
__u8 attributes;
diff --git a/include/linux/printk.h b/include/linux/printk.h
index 1249a54..822171f 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -134,6 +134,8 @@
extern int dmesg_restrict;
extern int kptr_restrict;
+extern void wake_up_klogd(void);
+
void log_buf_kexec_setup(void);
void __init setup_log_buf(int early);
#else
@@ -162,6 +164,10 @@
return false;
}
+static inline void wake_up_klogd(void)
+{
+}
+
static inline void log_buf_kexec_setup(void)
{
}
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 821c7f4..441f5bf 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -500,7 +500,7 @@
union {
__u32 mark;
__u32 dropcount;
- __u32 avail_size;
+ __u32 reserved_tailroom;
};
sk_buff_data_t inner_transport_header;
@@ -1288,11 +1288,13 @@
* do not lose pfmemalloc information as the pages would not be
* allocated using __GFP_MEMALLOC.
*/
- if (page->pfmemalloc && !page->mapping)
- skb->pfmemalloc = true;
frag->page.p = page;
frag->page_offset = off;
skb_frag_size_set(frag, size);
+
+ page = compound_head(page);
+ if (page->pfmemalloc && !page->mapping)
+ skb->pfmemalloc = true;
}
/**
@@ -1447,7 +1449,10 @@
*/
static inline int skb_availroom(const struct sk_buff *skb)
{
- return skb_is_nonlinear(skb) ? 0 : skb->avail_size - skb->len;
+ if (skb_is_nonlinear(skb))
+ return 0;
+
+ return skb->end - skb->tail - skb->reserved_tailroom;
}
/**
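
skb_availroom() now derives the writable space from the buffer geometry itself: whatever lies between the current tail and the end of the buffer, minus the reserved tailroom. A trivial userspace sketch of that arithmetic, with plain integers standing in for the sk_buff offsets:

#include <stdio.h>

int main(void)
{
        unsigned int end = 2048;             /* buffer end offset */
        unsigned int tail = 1500;            /* current end of data */
        unsigned int reserved_tailroom = 64; /* kept back, e.g. for a trailer */

        printf("available room: %u bytes\n", end - tail - reserved_tailroom);
        return 0;
}
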
diff --git a/include/linux/thermal.h b/include/linux/thermal.h
index f0bd7f9..e3c0ae9 100644
--- a/include/linux/thermal.h
+++ b/include/linux/thermal.h
@@ -44,7 +44,7 @@
/* Adding event notification support elements */
#define THERMAL_GENL_FAMILY_NAME "thermal_event"
#define THERMAL_GENL_VERSION 0x01
-#define THERMAL_GENL_MCAST_GROUP_NAME "thermal_mc_group"
+#define THERMAL_GENL_MCAST_GROUP_NAME "thermal_mc_grp"
/* Default Thermal Governor */
#if defined(CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE)
diff --git a/include/linux/udp.h b/include/linux/udp.h
index 9d81de1..42278bb 100644
--- a/include/linux/udp.h
+++ b/include/linux/udp.h
@@ -68,6 +68,7 @@
* For encapsulation sockets.
*/
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
+ void (*encap_destroy)(struct sock *sk);
};
static inline struct udp_sock *udp_sk(const struct sock *sk)
diff --git a/include/linux/usb/cdc_ncm.h b/include/linux/usb/cdc_ncm.h
index 3b8f9d4..cc25b70 100644
--- a/include/linux/usb/cdc_ncm.h
+++ b/include/linux/usb/cdc_ncm.h
@@ -127,6 +127,7 @@
u16 connected;
};
+extern u8 cdc_ncm_select_altsetting(struct usbnet *dev, struct usb_interface *intf);
extern int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting);
extern void cdc_ncm_unbind(struct usbnet *dev, struct usb_interface *intf);
extern struct sk_buff *cdc_ncm_fill_tx_frame(struct cdc_ncm_ctx *ctx, struct sk_buff *skb, __le32 sign);
diff --git a/include/linux/usb/serial.h b/include/linux/usb/serial.h
index ef9be7e..1819b59 100644
--- a/include/linux/usb/serial.h
+++ b/include/linux/usb/serial.h
@@ -66,6 +66,7 @@
* port.
* @flags: usb serial port flags
* @write_wait: a wait_queue_head_t used by the port.
+ * @delta_msr_wait: modem-status-change wait queue
* @work: work queue entry for the line discipline waking up.
* @throttled: nonzero if the read urb is inactive to throttle the device
* @throttle_req: nonzero if the tty wants to throttle us
@@ -112,6 +113,7 @@
unsigned long flags;
wait_queue_head_t write_wait;
+ wait_queue_head_t delta_msr_wait;
struct work_struct work;
char throttled;
char throttle_req;
diff --git a/include/linux/usb/ulpi.h b/include/linux/usb/ulpi.h
index 6f033a4..5c295c2 100644
--- a/include/linux/usb/ulpi.h
+++ b/include/linux/usb/ulpi.h
@@ -181,8 +181,16 @@
/*-------------------------------------------------------------------------*/
+#if IS_ENABLED(CONFIG_USB_ULPI)
struct usb_phy *otg_ulpi_create(struct usb_phy_io_ops *ops,
unsigned int flags);
+#else
+static inline struct usb_phy *otg_ulpi_create(struct usb_phy_io_ops *ops,
+ unsigned int flags)
+{
+ return NULL;
+}
+#endif
#ifdef CONFIG_USB_ULPI_VIEWPORT
/* access ops for controllers with a viewport register */
diff --git a/include/net/dst.h b/include/net/dst.h
index 853cda1..1f8fd10 100644
--- a/include/net/dst.h
+++ b/include/net/dst.h
@@ -413,13 +413,15 @@
static inline struct neighbour *dst_neigh_lookup(const struct dst_entry *dst, const void *daddr)
{
- return dst->ops->neigh_lookup(dst, NULL, daddr);
+ struct neighbour *n = dst->ops->neigh_lookup(dst, NULL, daddr);
+ return IS_ERR(n) ? NULL : n;
}
static inline struct neighbour *dst_neigh_lookup_skb(const struct dst_entry *dst,
struct sk_buff *skb)
{
- return dst->ops->neigh_lookup(dst, skb, NULL);
+ struct neighbour *n = dst->ops->neigh_lookup(dst, skb, NULL);
+ return IS_ERR(n) ? NULL : n;
}
static inline void dst_link_failure(struct sk_buff *skb)
diff --git a/include/net/flow_keys.h b/include/net/flow_keys.h
index 80461c1..bb8271d 100644
--- a/include/net/flow_keys.h
+++ b/include/net/flow_keys.h
@@ -9,6 +9,7 @@
__be32 ports;
__be16 port16[2];
};
+ u16 thoff;
u8 ip_proto;
};
diff --git a/include/net/inet_frag.h b/include/net/inet_frag.h
index 76c3fe5..0a1dcc2 100644
--- a/include/net/inet_frag.h
+++ b/include/net/inet_frag.h
@@ -43,6 +43,13 @@
#define INETFRAGS_HASHSZ 64
+/* averaged:
+ * max_depth = default ipfrag_high_thresh / INETFRAGS_HASHSZ /
+ * rounded up (SKB_TRUELEN(0) + sizeof(struct ipq or
+ * struct frag_queue))
+ */
+#define INETFRAGS_MAXDEPTH 128
+
struct inet_frags {
struct hlist_head hash[INETFRAGS_HASHSZ];
	/* This rwlock is a global lock (separate per IPv4, IPv6 and
@@ -76,6 +83,8 @@
struct inet_frag_queue *inet_frag_find(struct netns_frags *nf,
struct inet_frags *f, void *key, unsigned int hash)
__releases(&f->lock);
+void inet_frag_maybe_warn_overflow(struct inet_frag_queue *q,
+ const char *prefix);
static inline void inet_frag_put(struct inet_frag_queue *q, struct inet_frags *f)
{
diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
index 9497be1..e49db91 100644
--- a/include/net/ip_fib.h
+++ b/include/net/ip_fib.h
@@ -152,19 +152,17 @@
};
#ifdef CONFIG_IP_ROUTE_MULTIPATH
-
#define FIB_RES_NH(res) ((res).fi->fib_nh[(res).nh_sel])
-
-#define FIB_TABLE_HASHSZ 2
-
#else /* CONFIG_IP_ROUTE_MULTIPATH */
-
#define FIB_RES_NH(res) ((res).fi->fib_nh[0])
-
-#define FIB_TABLE_HASHSZ 256
-
#endif /* CONFIG_IP_ROUTE_MULTIPATH */
+#ifdef CONFIG_IP_MULTIPLE_TABLES
+#define FIB_TABLE_HASHSZ 256
+#else
+#define FIB_TABLE_HASHSZ 2
+#endif
+
extern __be32 fib_info_update_nh_saddr(struct net *net, struct fib_nh *nh);
#define FIB_RES_SADDR(net, res) \
diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
index 68c69d5..fce8e6b 100644
--- a/include/net/ip_vs.h
+++ b/include/net/ip_vs.h
@@ -976,6 +976,7 @@
int sysctl_sync_retries;
int sysctl_nat_icmp_send;
int sysctl_pmtu_disc;
+ int sysctl_backup_only;
/* ip_vs_lblc */
int sysctl_lblc_expiration;
@@ -1067,6 +1068,12 @@
return ipvs->sysctl_pmtu_disc;
}
+static inline int sysctl_backup_only(struct netns_ipvs *ipvs)
+{
+ return ipvs->sync_state & IP_VS_STATE_BACKUP &&
+ ipvs->sysctl_backup_only;
+}
+
#else
static inline int sysctl_sync_threshold(struct netns_ipvs *ipvs)
@@ -1114,6 +1121,11 @@
return 1;
}
+static inline int sysctl_backup_only(struct netns_ipvs *ipvs)
+{
+ return 0;
+}
+
#endif
/*
diff --git a/include/net/ipip.h b/include/net/ipip.h
index fd19625..982141c1 100644
--- a/include/net/ipip.h
+++ b/include/net/ipip.h
@@ -77,15 +77,11 @@
{
struct iphdr *iph = ip_hdr(skb);
- if (iph->frag_off & htons(IP_DF))
- iph->id = 0;
- else {
- /* Use inner packet iph-id if possible. */
- if (skb->protocol == htons(ETH_P_IP) && old_iph->id)
- iph->id = old_iph->id;
- else
- __ip_select_ident(iph, dst,
- (skb_shinfo(skb)->gso_segs ?: 1) - 1);
- }
+ /* Use inner packet iph-id if possible. */
+ if (skb->protocol == htons(ETH_P_IP) && old_iph->id)
+ iph->id = old_iph->id;
+ else
+ __ip_select_ident(iph, dst,
+ (skb_shinfo(skb)->gso_segs ?: 1) - 1);
}
#endif
diff --git a/include/uapi/linux/packet_diag.h b/include/uapi/linux/packet_diag.h
index 93f5fa9..afafd70 100644
--- a/include/uapi/linux/packet_diag.h
+++ b/include/uapi/linux/packet_diag.h
@@ -33,9 +33,11 @@
PACKET_DIAG_TX_RING,
PACKET_DIAG_FANOUT,
- PACKET_DIAG_MAX,
+ __PACKET_DIAG_MAX,
};
+#define PACKET_DIAG_MAX (__PACKET_DIAG_MAX - 1)
+
struct packet_diag_info {
__u32 pdi_index;
__u32 pdi_version;
diff --git a/include/uapi/linux/unix_diag.h b/include/uapi/linux/unix_diag.h
index b8a2494..b9e2a6a 100644
--- a/include/uapi/linux/unix_diag.h
+++ b/include/uapi/linux/unix_diag.h
@@ -39,9 +39,11 @@
UNIX_DIAG_MEMINFO,
UNIX_DIAG_SHUTDOWN,
- UNIX_DIAG_MAX,
+ __UNIX_DIAG_MAX,
};
+#define UNIX_DIAG_MAX (__UNIX_DIAG_MAX - 1)
+
struct unix_diag_vfs {
__u32 udiag_vfs_ino;
__u32 udiag_vfs_dev;
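Both diag headers above move to the usual netlink convention of a private __FOO_MAX enumerator plus a FOO_MAX define one below it, so FOO_MAX always names the highest valid attribute and attribute tables can be sized FOO_MAX + 1. A small self-contained sketch of the convention, with invented DEMO_DIAG_* names rather than the real packet/unix diag attributes:

/* Stand-alone sketch of the __MAX/-1 attribute-enum convention; the
 * DEMO_DIAG_* values are illustrative, not the real diag attributes.
 */
#include <stdio.h>

enum {
    DEMO_DIAG_NONE,
    DEMO_DIAG_INFO,
    DEMO_DIAG_MEMINFO,
    __DEMO_DIAG_MAX,        /* internal sentinel, always last */
};
#define DEMO_DIAG_MAX (__DEMO_DIAG_MAX - 1)

int main(void)
{
    /* Parsers size attribute tables by the highest valid type. */
    const char *tb[DEMO_DIAG_MAX + 1] = { 0 };

    tb[DEMO_DIAG_MEMINFO] = "present";
    printf("DEMO_DIAG_MAX = %d, MEMINFO %s\n",
           DEMO_DIAG_MAX, tb[DEMO_DIAG_MEMINFO] ? "set" : "unset");
    return 0;
}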
diff --git a/include/video/atmel_lcdc.h b/include/video/atmel_lcdc.h
index 28447f1..8deb226 100644
--- a/include/video/atmel_lcdc.h
+++ b/include/video/atmel_lcdc.h
@@ -30,7 +30,6 @@
*/
#define ATMEL_LCDC_WIRING_BGR 0
#define ATMEL_LCDC_WIRING_RGB 1
-#define ATMEL_LCDC_WIRING_RGB555 2
/* LCD Controller info data structure, stored in device platform_data */
@@ -62,6 +61,7 @@
void (*atmel_lcdfb_power_control)(int on);
struct fb_monspecs *default_monspecs;
u32 pseudo_palette[16];
+ bool have_intensity_bit;
};
#define ATMEL_LCDC_DMABADDR1 0x00
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index 1844d31..7000bb1 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -251,6 +251,12 @@
#define PHYSDEVOP_pci_device_remove 26
#define PHYSDEVOP_restore_msi_ext 27
+/*
+ * Dom0 should use these two to announce that the MMIO resources assigned
+ * to MSI-X capable devices won't (prepare) or may (release) change.
+ */
+#define PHYSDEVOP_prepare_msix 30
+#define PHYSDEVOP_release_msix 31
struct physdev_pci_device {
/* IN */
uint16_t seg;
diff --git a/ipc/mqueue.c b/ipc/mqueue.c
index c4ae32e..e4e47f6 100644
--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -848,7 +848,8 @@
fd = error;
}
mutex_unlock(&root->d_inode->i_mutex);
- mnt_drop_write(mnt);
+ if (!ro)
+ mnt_drop_write(mnt);
out_putname:
putname(name);
return fd;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index b0cd865..59412d0 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4434,12 +4434,15 @@
if (ctxn < 0)
goto next;
ctx = rcu_dereference(current->perf_event_ctxp[ctxn]);
+ if (ctx)
+ perf_event_task_ctx(ctx, task_event);
}
- if (ctx)
- perf_event_task_ctx(ctx, task_event);
next:
put_cpu_ptr(pmu->pmu_cpu_context);
}
+ if (task_event->task_ctx)
+ perf_event_task_ctx(task_event->task_ctx, task_event);
+
rcu_read_unlock();
}
@@ -5647,6 +5650,7 @@
event->attr.sample_period = NSEC_PER_SEC / freq;
hwc->sample_period = event->attr.sample_period;
local64_set(&hwc->period_left, hwc->sample_period);
+ hwc->last_period = hwc->sample_period;
event->attr.freq = 0;
}
}
diff --git a/kernel/printk.c b/kernel/printk.c
index 0b31715..abbdd9e 100644
--- a/kernel/printk.c
+++ b/kernel/printk.c
@@ -63,8 +63,6 @@
#define MINIMUM_CONSOLE_LOGLEVEL 1 /* Minimum loglevel we let people use */
#define DEFAULT_CONSOLE_LOGLEVEL 7 /* anything MORE serious than KERN_DEBUG */
-DECLARE_WAIT_QUEUE_HEAD(log_wait);
-
int console_printk[4] = {
DEFAULT_CONSOLE_LOGLEVEL, /* console_loglevel */
DEFAULT_MESSAGE_LOGLEVEL, /* default_message_loglevel */
@@ -224,6 +222,7 @@
static DEFINE_RAW_SPINLOCK(logbuf_lock);
#ifdef CONFIG_PRINTK
+DECLARE_WAIT_QUEUE_HEAD(log_wait);
/* the next printk record to read by syslog(READ) or /proc/kmsg */
static u64 syslog_seq;
static u32 syslog_idx;
@@ -1957,45 +1956,6 @@
return console_locked;
}
-/*
- * Delayed printk version, for scheduler-internal messages:
- */
-#define PRINTK_BUF_SIZE 512
-
-#define PRINTK_PENDING_WAKEUP 0x01
-#define PRINTK_PENDING_SCHED 0x02
-
-static DEFINE_PER_CPU(int, printk_pending);
-static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);
-
-static void wake_up_klogd_work_func(struct irq_work *irq_work)
-{
- int pending = __this_cpu_xchg(printk_pending, 0);
-
- if (pending & PRINTK_PENDING_SCHED) {
- char *buf = __get_cpu_var(printk_sched_buf);
- printk(KERN_WARNING "[sched_delayed] %s", buf);
- }
-
- if (pending & PRINTK_PENDING_WAKEUP)
- wake_up_interruptible(&log_wait);
-}
-
-static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
- .func = wake_up_klogd_work_func,
- .flags = IRQ_WORK_LAZY,
-};
-
-void wake_up_klogd(void)
-{
- preempt_disable();
- if (waitqueue_active(&log_wait)) {
- this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
- irq_work_queue(&__get_cpu_var(wake_up_klogd_work));
- }
- preempt_enable();
-}
-
static void console_cont_flush(char *text, size_t size)
{
unsigned long flags;
@@ -2458,6 +2418,44 @@
late_initcall(printk_late_init);
#if defined CONFIG_PRINTK
+/*
+ * Delayed printk version, for scheduler-internal messages:
+ */
+#define PRINTK_BUF_SIZE 512
+
+#define PRINTK_PENDING_WAKEUP 0x01
+#define PRINTK_PENDING_SCHED 0x02
+
+static DEFINE_PER_CPU(int, printk_pending);
+static DEFINE_PER_CPU(char [PRINTK_BUF_SIZE], printk_sched_buf);
+
+static void wake_up_klogd_work_func(struct irq_work *irq_work)
+{
+ int pending = __this_cpu_xchg(printk_pending, 0);
+
+ if (pending & PRINTK_PENDING_SCHED) {
+ char *buf = __get_cpu_var(printk_sched_buf);
+ printk(KERN_WARNING "[sched_delayed] %s", buf);
+ }
+
+ if (pending & PRINTK_PENDING_WAKEUP)
+ wake_up_interruptible(&log_wait);
+}
+
+static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
+ .func = wake_up_klogd_work_func,
+ .flags = IRQ_WORK_LAZY,
+};
+
+void wake_up_klogd(void)
+{
+ preempt_disable();
+ if (waitqueue_active(&log_wait)) {
+ this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
+ irq_work_queue(&__get_cpu_var(wake_up_klogd_work));
+ }
+ preempt_enable();
+}
int printk_sched(const char *fmt, ...)
{
diff --git a/kernel/sys.c b/kernel/sys.c
index 81f5644..39c9c4a 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2185,9 +2185,8 @@
char poweroff_cmd[POWEROFF_CMD_PATH_LEN] = "/sbin/poweroff";
-static int __orderly_poweroff(void)
+static int __orderly_poweroff(bool force)
{
- int argc;
char **argv;
static char *envp[] = {
"HOME=/",
@@ -2196,35 +2195,19 @@
};
int ret;
- argv = argv_split(GFP_ATOMIC, poweroff_cmd, &argc);
- if (argv == NULL) {
+ argv = argv_split(GFP_KERNEL, poweroff_cmd, NULL);
+ if (argv) {
+ ret = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
+ argv_free(argv);
+ } else {
printk(KERN_WARNING "%s failed to allocate memory for \"%s\"\n",
- __func__, poweroff_cmd);
- return -ENOMEM;
+ __func__, poweroff_cmd);
+ ret = -ENOMEM;
}
- ret = call_usermodehelper_fns(argv[0], argv, envp, UMH_WAIT_EXEC,
- NULL, NULL, NULL);
- argv_free(argv);
-
- return ret;
-}
-
-/**
- * orderly_poweroff - Trigger an orderly system poweroff
- * @force: force poweroff if command execution fails
- *
- * This may be called from any context to trigger a system shutdown.
- * If the orderly shutdown fails, it will force an immediate shutdown.
- */
-int orderly_poweroff(bool force)
-{
- int ret = __orderly_poweroff();
-
if (ret && force) {
printk(KERN_WARNING "Failed to start orderly shutdown: "
- "forcing the issue\n");
-
+ "forcing the issue\n");
/*
* I guess this should try to kick off some daemon to sync and
* poweroff asap. Or not even bother syncing if we're doing an
@@ -2236,4 +2219,28 @@
return ret;
}
+
+static bool poweroff_force;
+
+static void poweroff_work_func(struct work_struct *work)
+{
+ __orderly_poweroff(poweroff_force);
+}
+
+static DECLARE_WORK(poweroff_work, poweroff_work_func);
+
+/**
+ * orderly_poweroff - Trigger an orderly system poweroff
+ * @force: force poweroff if command execution fails
+ *
+ * This may be called from any context to trigger a system shutdown.
+ * If the orderly shutdown fails, it will force an immediate shutdown.
+ */
+int orderly_poweroff(bool force)
+{
+ if (force) /* do not override the pending "true" */
+ poweroff_force = true;
+ schedule_work(&poweroff_work);
+ return 0;
+}
EXPORT_SYMBOL_GPL(orderly_poweroff);
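For readers less familiar with the poweroff path: __orderly_poweroff() splits the configured command with argv_split() and hands it to call_usermodehelper() with UMH_WAIT_EXEC, and orderly_poweroff() itself now only schedules that work. The following user-space sketch mirrors just the split-and-exec step, with standard POSIX calls standing in for the kernel helpers (strtok() for argv_split(), fork()/execvp() for call_usermodehelper()); it is an analogy, not the kernel code.

/* User-space analogy only: strtok() stands in for argv_split() and
 * fork()/execvp() for call_usermodehelper(..., UMH_WAIT_EXEC).
 */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static int run_command(const char *cmd)
{
    char buf[256];
    char *argv[16];
    int argc = 0;
    char *tok;
    pid_t pid;

    snprintf(buf, sizeof(buf), "%s", cmd);
    for (tok = strtok(buf, " "); tok && argc < 15; tok = strtok(NULL, " "))
        argv[argc++] = tok;
    argv[argc] = NULL;
    if (!argc)
        return -1;

    pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execvp(argv[0], argv);
        _exit(127);         /* exec failed */
    }
    /* The demo reaps the child; the kernel helper with UMH_WAIT_EXEC
     * only waits until the exec has been attempted. */
    waitpid(pid, NULL, 0);
    return 0;
}

int main(void)
{
    return run_command("echo orderly poweroff command would run here");
}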
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 2fb8cb8..7f32fe0 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -67,7 +67,8 @@
*/
int tick_check_broadcast_device(struct clock_event_device *dev)
{
- if ((tick_broadcast_device.evtdev &&
+ if ((dev->features & CLOCK_EVT_FEAT_DUMMY) ||
+ (tick_broadcast_device.evtdev &&
tick_broadcast_device.evtdev->rating >= dev->rating) ||
(dev->features & CLOCK_EVT_FEAT_C3STOP))
return 0;
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index ab25b88..6893d5a 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -3104,8 +3104,8 @@
continue;
}
- hlist_del(&entry->node);
- call_rcu(&entry->rcu, ftrace_free_entry_rcu);
+ hlist_del_rcu(&entry->node);
+ call_rcu_sched(&entry->rcu, ftrace_free_entry_rcu);
}
}
__disable_ftrace_function_probe();
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 1f835a8..4f1dade 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -704,7 +704,7 @@
void
update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
{
- struct ring_buffer *buf = tr->buffer;
+ struct ring_buffer *buf;
if (trace_stop_count)
return;
@@ -719,6 +719,7 @@
arch_spin_lock(&ftrace_max_lock);
+ buf = tr->buffer;
tr->buffer = max_tr.buffer;
max_tr.buffer = buf;
@@ -2880,11 +2881,25 @@
return -EINVAL;
}
-static void set_tracer_flags(unsigned int mask, int enabled)
+/* Some tracers require overwrite to stay enabled */
+int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set)
+{
+ if (tracer->enabled && (mask & TRACE_ITER_OVERWRITE) && !set)
+ return -1;
+
+ return 0;
+}
+
+int set_tracer_flag(unsigned int mask, int enabled)
{
/* do nothing if flag is already set */
if (!!(trace_flags & mask) == !!enabled)
- return;
+ return 0;
+
+ /* Give the tracer a chance to approve the change */
+ if (current_trace->flag_changed)
+ if (current_trace->flag_changed(current_trace, mask, !!enabled))
+ return -EINVAL;
if (enabled)
trace_flags |= mask;
@@ -2894,18 +2909,24 @@
if (mask == TRACE_ITER_RECORD_CMD)
trace_event_enable_cmd_record(enabled);
- if (mask == TRACE_ITER_OVERWRITE)
+ if (mask == TRACE_ITER_OVERWRITE) {
ring_buffer_change_overwrite(global_trace.buffer, enabled);
+#ifdef CONFIG_TRACER_MAX_TRACE
+ ring_buffer_change_overwrite(max_tr.buffer, enabled);
+#endif
+ }
if (mask == TRACE_ITER_PRINTK)
trace_printk_start_stop_comm(enabled);
+
+ return 0;
}
static int trace_set_options(char *option)
{
char *cmp;
int neg = 0;
- int ret = 0;
+ int ret = -ENODEV;
int i;
cmp = strstrip(option);
@@ -2915,19 +2936,20 @@
cmp += 2;
}
+ mutex_lock(&trace_types_lock);
+
for (i = 0; trace_options[i]; i++) {
if (strcmp(cmp, trace_options[i]) == 0) {
- set_tracer_flags(1 << i, !neg);
+ ret = set_tracer_flag(1 << i, !neg);
break;
}
}
/* If no option could be set, test the specific tracer options */
- if (!trace_options[i]) {
- mutex_lock(&trace_types_lock);
+ if (!trace_options[i])
ret = set_tracer_option(current_trace, cmp, neg);
- mutex_unlock(&trace_types_lock);
- }
+
+ mutex_unlock(&trace_types_lock);
return ret;
}
@@ -2937,6 +2959,7 @@
size_t cnt, loff_t *ppos)
{
char buf[64];
+ int ret;
if (cnt >= sizeof(buf))
return -EINVAL;
@@ -2946,7 +2969,9 @@
buf[cnt] = 0;
- trace_set_options(buf);
+ ret = trace_set_options(buf);
+ if (ret < 0)
+ return ret;
*ppos += cnt;
@@ -3250,6 +3275,9 @@
goto out;
trace_branch_disable();
+
+ current_trace->enabled = false;
+
if (current_trace->reset)
current_trace->reset(tr);
@@ -3294,6 +3322,7 @@
}
current_trace = t;
+ current_trace->enabled = true;
trace_branch_enable(tr);
out:
mutex_unlock(&trace_types_lock);
@@ -4780,7 +4809,13 @@
if (val != 0 && val != 1)
return -EINVAL;
- set_tracer_flags(1 << index, val);
+
+ mutex_lock(&trace_types_lock);
+ ret = set_tracer_flag(1 << index, val);
+ mutex_unlock(&trace_types_lock);
+
+ if (ret < 0)
+ return ret;
*ppos += cnt;
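The new set_tracer_flag()/flag_changed pair gives the active tracer a veto over flag changes, which is how the latency tracers keep TRACE_ITER_OVERWRITE set. A compact stand-alone sketch of that veto pattern follows; the FLAG_* bit, demo_tracer type and keep_overwrite() body are illustrative, loosely modelled on trace_keep_overwrite() above.

/* Illustrative names only; keep_overwrite() is loosely modelled on
 * trace_keep_overwrite() above.
 */
#include <errno.h>
#include <stdio.h>

#define FLAG_OVERWRITE (1u << 0)

struct demo_tracer {
    const char *name;
    int enabled;
    /* Return non-zero to reject the proposed flag change. */
    int (*flag_changed)(struct demo_tracer *t, unsigned int mask, int set);
};

static unsigned int demo_flags = FLAG_OVERWRITE;

static int keep_overwrite(struct demo_tracer *t, unsigned int mask, int set)
{
    if (t->enabled && (mask & FLAG_OVERWRITE) && !set)
        return -1;      /* latency-style tracers need overwrite */
    return 0;
}

static int set_flag(struct demo_tracer *cur, unsigned int mask, int set)
{
    if (!!(demo_flags & mask) == !!set)
        return 0;       /* nothing to change */

    if (cur->flag_changed && cur->flag_changed(cur, mask, set))
        return -EINVAL; /* the tracer vetoed the change */

    if (set)
        demo_flags |= mask;
    else
        demo_flags &= ~mask;
    return 0;
}

int main(void)
{
    struct demo_tracer irqsoff = {
        .name = "irqsoff", .enabled = 1, .flag_changed = keep_overwrite,
    };

    printf("clear overwrite while enabled:  %d\n",
           set_flag(&irqsoff, FLAG_OVERWRITE, 0));
    irqsoff.enabled = 0;
    printf("clear overwrite while disabled: %d\n",
           set_flag(&irqsoff, FLAG_OVERWRITE, 0));
    return 0;
}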
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 57d7e53..2081971 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -283,11 +283,15 @@
enum print_line_t (*print_line)(struct trace_iterator *iter);
/* If you handled the flag setting, return 0 */
int (*set_flag)(u32 old_flags, u32 bit, int set);
+ /* Return 0 if OK with change, else return non-zero */
+ int (*flag_changed)(struct tracer *tracer,
+ u32 mask, int set);
struct tracer *next;
struct tracer_flags *flags;
bool print_max;
bool use_max_tr;
bool allocated_snapshot;
+ bool enabled;
};
@@ -943,6 +947,8 @@
void trace_printk_init_buffers(void);
void trace_printk_start_comm(void);
+int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
+int set_tracer_flag(unsigned int mask, int enabled);
#undef FTRACE_ENTRY
#define FTRACE_ENTRY(call, struct_name, id, tstruct, print, filter) \
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index 713a2ca..443b25b 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -32,7 +32,7 @@
static int trace_type __read_mostly;
-static int save_lat_flag;
+static int save_flags;
static void stop_irqsoff_tracer(struct trace_array *tr, int graph);
static int start_irqsoff_tracer(struct trace_array *tr, int graph);
@@ -558,8 +558,11 @@
static void __irqsoff_tracer_init(struct trace_array *tr)
{
- save_lat_flag = trace_flags & TRACE_ITER_LATENCY_FMT;
- trace_flags |= TRACE_ITER_LATENCY_FMT;
+ save_flags = trace_flags;
+
+ /* non-overwrite screws up the latency tracers */
+ set_tracer_flag(TRACE_ITER_OVERWRITE, 1);
+ set_tracer_flag(TRACE_ITER_LATENCY_FMT, 1);
tracing_max_latency = 0;
irqsoff_trace = tr;
@@ -573,10 +576,13 @@
static void irqsoff_tracer_reset(struct trace_array *tr)
{
+ int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
+ int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
+
stop_irqsoff_tracer(tr, is_graph());
- if (!save_lat_flag)
- trace_flags &= ~TRACE_ITER_LATENCY_FMT;
+ set_tracer_flag(TRACE_ITER_LATENCY_FMT, lat_flag);
+ set_tracer_flag(TRACE_ITER_OVERWRITE, overwrite_flag);
}
static void irqsoff_tracer_start(struct trace_array *tr)
@@ -609,6 +615,7 @@
.print_line = irqsoff_print_line,
.flags = &tracer_flags,
.set_flag = irqsoff_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_irqsoff,
#endif
@@ -642,6 +649,7 @@
.print_line = irqsoff_print_line,
.flags = &tracer_flags,
.set_flag = irqsoff_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_preemptoff,
#endif
@@ -677,6 +685,7 @@
.print_line = irqsoff_print_line,
.flags = &tracer_flags,
.set_flag = irqsoff_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_preemptirqsoff,
#endif
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 75aa97f..fde652c 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -36,7 +36,7 @@
static int wakeup_graph_entry(struct ftrace_graph_ent *trace);
static void wakeup_graph_return(struct ftrace_graph_ret *trace);
-static int save_lat_flag;
+static int save_flags;
#define TRACE_DISPLAY_GRAPH 1
@@ -540,8 +540,11 @@
static int __wakeup_tracer_init(struct trace_array *tr)
{
- save_lat_flag = trace_flags & TRACE_ITER_LATENCY_FMT;
- trace_flags |= TRACE_ITER_LATENCY_FMT;
+ save_flags = trace_flags;
+
+ /* non-overwrite screws up the latency tracers */
+ set_tracer_flag(TRACE_ITER_OVERWRITE, 1);
+ set_tracer_flag(TRACE_ITER_LATENCY_FMT, 1);
tracing_max_latency = 0;
wakeup_trace = tr;
@@ -563,12 +566,15 @@
static void wakeup_tracer_reset(struct trace_array *tr)
{
+ int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
+ int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
+
stop_wakeup_tracer(tr);
/* make sure we put back any tasks we are tracing */
wakeup_reset(tr);
- if (!save_lat_flag)
- trace_flags &= ~TRACE_ITER_LATENCY_FMT;
+ set_tracer_flag(TRACE_ITER_LATENCY_FMT, lat_flag);
+ set_tracer_flag(TRACE_ITER_OVERWRITE, overwrite_flag);
}
static void wakeup_tracer_start(struct trace_array *tr)
@@ -594,6 +600,7 @@
.print_line = wakeup_print_line,
.flags = &tracer_flags,
.set_flag = wakeup_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_wakeup,
#endif
@@ -615,6 +622,7 @@
.print_line = wakeup_print_line,
.flags = &tracer_flags,
.set_flag = wakeup_set_flag,
+ .flag_changed = trace_keep_overwrite,
#ifdef CONFIG_FTRACE_SELFTEST
.selftest = trace_selftest_startup_wakeup,
#endif
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 55fac5b..b48cd59 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3447,28 +3447,34 @@
spin_unlock_irq(&pool->lock);
mutex_unlock(&pool->assoc_mutex);
- }
- /*
- * Call schedule() so that we cross rq->lock and thus can guarantee
- * sched callbacks see the %WORKER_UNBOUND flag. This is necessary
- * as scheduler callbacks may be invoked from other cpus.
- */
- schedule();
+ /*
+ * Call schedule() so that we cross rq->lock and thus can
+ * guarantee sched callbacks see the %WORKER_UNBOUND flag.
+ * This is necessary as scheduler callbacks may be invoked
+ * from other cpus.
+ */
+ schedule();
- /*
- * Sched callbacks are disabled now. Zap nr_running. After this,
- * nr_running stays zero and need_more_worker() and keep_working()
- * are always true as long as the worklist is not empty. Pools on
- * @cpu now behave as unbound (in terms of concurrency management)
- * pools which are served by workers tied to the CPU.
- *
- * On return from this function, the current worker would trigger
- * unbound chain execution of pending work items if other workers
- * didn't already.
- */
- for_each_std_worker_pool(pool, cpu)
+ /*
+ * Sched callbacks are disabled now. Zap nr_running.
+ * After this, nr_running stays zero and need_more_worker()
+ * and keep_working() are always true as long as the
+ * worklist is not empty. This pool now behaves as an
+ * unbound (in terms of concurrency management) pool which
+ * is served by workers tied to the pool.
+ */
atomic_set(&pool->nr_running, 0);
+
+ /*
+ * With concurrency management just turned off, a busy
+ * worker blocking could lead to lengthy stalls. Kick off
+ * unbound chain execution of currently pending work items.
+ */
+ spin_lock_irq(&pool->lock);
+ wake_up_worker(pool);
+ spin_unlock_irq(&pool->lock);
+ }
}
/*
diff --git a/lib/bust_spinlocks.c b/lib/bust_spinlocks.c
index 9681d54..f8e0e53 100644
--- a/lib/bust_spinlocks.c
+++ b/lib/bust_spinlocks.c
@@ -8,6 +8,7 @@
*/
#include <linux/kernel.h>
+#include <linux/printk.h>
#include <linux/spinlock.h>
#include <linux/tty.h>
#include <linux/wait.h>
@@ -28,5 +29,3 @@
wake_up_klogd();
}
}
-
-
diff --git a/lib/dma-debug.c b/lib/dma-debug.c
index 5e396ac..d87a17a 100644
--- a/lib/dma-debug.c
+++ b/lib/dma-debug.c
@@ -862,17 +862,21 @@
entry = bucket_find_exact(bucket, ref);
if (!entry) {
+ /* must drop lock before calling dma_mapping_error */
+ put_hash_bucket(bucket, &flags);
+
if (dma_mapping_error(ref->dev, ref->dev_addr)) {
err_printk(ref->dev, NULL,
- "DMA-API: device driver tries "
- "to free an invalid DMA memory address\n");
- return;
+ "DMA-API: device driver tries to free an "
+ "invalid DMA memory address\n");
+ } else {
+ err_printk(ref->dev, NULL,
+ "DMA-API: device driver tries to free DMA "
+ "memory it has not allocated [device "
+ "address=0x%016llx] [size=%llu bytes]\n",
+ ref->dev_addr, ref->size);
}
- err_printk(ref->dev, NULL, "DMA-API: device driver tries "
- "to free DMA memory it has not allocated "
- "[device address=0x%016llx] [size=%llu bytes]\n",
- ref->dev_addr, ref->size);
- goto out;
+ return;
}
if (ref->size != entry->size) {
@@ -936,7 +940,6 @@
hash_bucket_del(entry);
dma_entry_free(entry);
-out:
put_hash_bucket(bucket, &flags);
}
@@ -1082,13 +1085,27 @@
ref.dev = dev;
ref.dev_addr = dma_addr;
bucket = get_hash_bucket(&ref, &flags);
- entry = bucket_find_exact(bucket, &ref);
- if (!entry)
- goto out;
+ list_for_each_entry(entry, &bucket->list, list) {
+ if (!exact_match(&ref, entry))
+ continue;
- entry->map_err_type = MAP_ERR_CHECKED;
-out:
+ /*
+ * The same physical address can be mapped multiple
+ * times. Without a hardware IOMMU this results in the
+ * same device addresses being put into the dma-debug
+ * hash multiple times too. This can result in false
+ * positives being reported. Therefore we implement a
+ * best-fit algorithm here which updates the first entry
+ * from the hash which fits the reference value and is
+ * not currently listed as being checked.
+ */
+ if (entry->map_err_type == MAP_ERR_NOT_CHECKED) {
+ entry->map_err_type = MAP_ERR_CHECKED;
+ break;
+ }
+ }
+
put_hash_bucket(bucket, &flags);
}
EXPORT_SYMBOL(debug_dma_mapping_error);
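The comment in the hunk above explains the best-fit scan; the stand-alone sketch below shows the same idea on a plain array so it can be compiled and run directly. The entry layout and state names are simplified stand-ins for the dma-debug structures.

/* Simplified stand-ins for the dma-debug entry layout and states. */
#include <stdio.h>

enum { MAP_ERR_NOT_CHECKED, MAP_ERR_CHECKED };

struct entry {
    unsigned long dev_addr;
    int map_err_type;
};

static void mark_mapping_checked(struct entry *bucket, int n,
                                 unsigned long addr)
{
    int i;

    for (i = 0; i < n; i++) {
        if (bucket[i].dev_addr != addr)
            continue;
        /* Best fit: take the first matching entry that has not been
         * checked yet, so duplicates of the same address each get
         * their own MAP_ERR_CHECKED mark. */
        if (bucket[i].map_err_type == MAP_ERR_NOT_CHECKED) {
            bucket[i].map_err_type = MAP_ERR_CHECKED;
            break;
        }
    }
}

int main(void)
{
    /* Two live mappings of the same device address. */
    struct entry bucket[] = {
        { 0x1000, MAP_ERR_NOT_CHECKED },
        { 0x1000, MAP_ERR_NOT_CHECKED },
    };

    mark_mapping_checked(bucket, 2, 0x1000);
    mark_mapping_checked(bucket, 2, 0x1000);
    printf("entry0=%d entry1=%d\n",
           bucket[0].map_err_type, bucket[1].map_err_type);
    return 0;
}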
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0a0be33..ca9a7c6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2124,8 +2124,12 @@
/* Return the number pages of memory we physically have, in PAGE_SIZE units. */
unsigned long hugetlb_total_pages(void)
{
- struct hstate *h = &default_hstate;
- return h->nr_huge_pages * pages_per_huge_page(h);
+ struct hstate *h;
+ unsigned long nr_total_pages = 0;
+
+ for_each_hstate(h)
+ nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
+ return nr_total_pages;
}
static int hugetlb_acct_memory(struct hstate *h, long delta)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9597eec..ee37657 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1779,7 +1779,11 @@
for (i = 0; i < MAX_NR_ZONES; i++) {
struct zone *zone = pgdat->node_zones + i;
- if (zone->wait_table)
+ /*
+ * wait_table may be allocated from boot memory,
+ * so only free it here if it was allocated by vmalloc.
+ */
+ if (is_vmalloc_addr(zone->wait_table))
vfree(zone->wait_table);
}
diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
index a187144..85addcd 100644
--- a/net/8021q/vlan.c
+++ b/net/8021q/vlan.c
@@ -86,13 +86,6 @@
grp = &vlan_info->grp;
- /* Take it out of our own structures, but be sure to interlock with
- * HW accelerating devices or SW vlan input packet processing if
- * VLAN is not 0 (leave it there for 802.1p).
- */
- if (vlan_id)
- vlan_vid_del(real_dev, vlan_id);
-
grp->nr_vlan_devs--;
if (vlan->flags & VLAN_FLAG_MVRP)
@@ -114,6 +107,13 @@
vlan_gvrp_uninit_applicant(real_dev);
}
+ /* Take it out of our own structures, but be sure to interlock with
+ * HW accelerating devices or SW vlan input packet processing if
+ * VLAN is not 0 (leave it there for 802.1p).
+ */
+ if (vlan_id)
+ vlan_vid_del(real_dev, vlan_id);
+
/* Get rid of the vlan's reference to real_dev */
dev_put(real_dev);
}
diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
index a0b253e..a5bb0a76 100644
--- a/net/batman-adv/bat_iv_ogm.c
+++ b/net/batman-adv/bat_iv_ogm.c
@@ -1288,7 +1288,8 @@
batadv_ogm_packet = (struct batadv_ogm_packet *)packet_buff;
/* unpack the aggregated packets and process them one by one */
- do {
+ while (batadv_iv_ogm_aggr_packet(buff_pos, packet_len,
+ batadv_ogm_packet->tt_num_changes)) {
tt_buff = packet_buff + buff_pos + BATADV_OGM_HLEN;
batadv_iv_ogm_process(ethhdr, batadv_ogm_packet, tt_buff,
@@ -1299,8 +1300,7 @@
packet_pos = packet_buff + buff_pos;
batadv_ogm_packet = (struct batadv_ogm_packet *)packet_pos;
- } while (batadv_iv_ogm_aggr_packet(buff_pos, packet_len,
- batadv_ogm_packet->tt_num_changes));
+ }
kfree_skb(skb);
return NET_RX_SUCCESS;
diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
index 79d87d8..fad0302 100644
--- a/net/bluetooth/sco.c
+++ b/net/bluetooth/sco.c
@@ -359,6 +359,7 @@
sco_chan_del(sk, ECONNRESET);
break;
+ case BT_CONNECT2:
case BT_CONNECT:
case BT_DISCONN:
sco_chan_del(sk, ECONNRESET);
diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
index b0812c9..bab338e 100644
--- a/net/bridge/br_fdb.c
+++ b/net/bridge/br_fdb.c
@@ -423,7 +423,7 @@
return 0;
br_warn(br, "adding interface %s with same address "
"as a received packet\n",
- source->dev->name);
+ source ? source->dev->name : br->dev->name);
fdb_delete(br, fdb);
}
diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
index 27aa3ee..299fc5f 100644
--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -29,6 +29,7 @@
+ nla_total_size(1) /* IFLA_BRPORT_MODE */
+ nla_total_size(1) /* IFLA_BRPORT_GUARD */
+ nla_total_size(1) /* IFLA_BRPORT_PROTECT */
+ + nla_total_size(1) /* IFLA_BRPORT_FAST_LEAVE */
+ 0;
}
@@ -329,6 +330,7 @@
br_set_port_flag(p, tb, IFLA_BRPORT_MODE, BR_HAIRPIN_MODE);
br_set_port_flag(p, tb, IFLA_BRPORT_GUARD, BR_BPDU_GUARD);
br_set_port_flag(p, tb, IFLA_BRPORT_FAST_LEAVE, BR_MULTICAST_FAST_LEAVE);
+ br_set_port_flag(p, tb, IFLA_BRPORT_PROTECT, BR_ROOT_BLOCK);
if (tb[IFLA_BRPORT_COST]) {
err = br_stp_set_path_cost(p, nla_get_u32(tb[IFLA_BRPORT_COST]));
diff --git a/net/core/dev.c b/net/core/dev.c
index dffbef7..b13e5c7 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1545,7 +1545,6 @@
return;
}
#endif
- WARN_ON(in_interrupt());
static_key_slow_inc(&netstamp_needed);
}
EXPORT_SYMBOL(net_enable_timestamp);
@@ -2219,9 +2218,9 @@
struct sk_buff *segs = ERR_PTR(-EPROTONOSUPPORT);
struct packet_offload *ptype;
__be16 type = skb->protocol;
+ int vlan_depth = ETH_HLEN;
while (type == htons(ETH_P_8021Q)) {
- int vlan_depth = ETH_HLEN;
struct vlan_hdr *vh;
if (unlikely(!pskb_may_pull(skb, vlan_depth + VLAN_HLEN)))
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 9d4c720..e187bf0 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -140,6 +140,8 @@
flow->ports = *ports;
}
+ flow->thoff = (u16) nhoff;
+
return true;
}
EXPORT_SYMBOL(skb_flow_dissect);
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index a585d45..5fb8d7e 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -2621,7 +2621,7 @@
struct rtattr *attr = (void *)nlh + NLMSG_ALIGN(min_len);
while (RTA_OK(attr, attrlen)) {
- unsigned int flavor = attr->rta_type;
+ unsigned int flavor = attr->rta_type & NLA_TYPE_MASK;
if (flavor) {
if (flavor > rta_max[sz_idx])
return -EINVAL;
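The one-line rtnetlink fix above masks the attribute type before comparing it against rta_max[], because the top bits of rta_type carry flags rather than type information. Below is a tiny sketch of why the unmasked comparison rejects valid nested attributes; the DEMO_* constants mirror the uapi netlink flag layout but are redefined locally for the illustration.

/* DEMO_* constants mirror the uapi netlink flag layout, redefined
 * locally for this illustration.
 */
#include <stdio.h>

#define DEMO_F_NESTED        (1u << 15)
#define DEMO_F_NET_BYTEORDER (1u << 14)
#define DEMO_TYPE_MASK       (~(DEMO_F_NESTED | DEMO_F_NET_BYTEORDER))

int main(void)
{
    unsigned int raw = DEMO_F_NESTED | 3;   /* nested attribute, type 3 */
    unsigned int max_type = 10;

    printf("raw type %u accepted? %s\n",
           raw, raw <= max_type ? "yes" : "no");
    printf("masked type %u accepted? %s\n",
           raw & DEMO_TYPE_MASK,
           (raw & DEMO_TYPE_MASK) <= max_type ? "yes" : "no");
    return 0;
}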
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 68f6a94..c929d9c 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1333,8 +1333,7 @@
iph->frag_off |= htons(IP_MF);
offset += (skb->len - skb->mac_len - iph->ihl * 4);
} else {
- if (!(iph->frag_off & htons(IP_DF)))
- iph->id = htons(id++);
+ iph->id = htons(id++);
}
iph->tot_len = htons(skb->len - skb->mac_len);
iph->check = 0;
diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
index 245ae078a..f4fd23d 100644
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -21,6 +21,7 @@
#include <linux/rtnetlink.h>
#include <linux/slab.h>
+#include <net/sock.h>
#include <net/inet_frag.h>
static void inet_frag_secret_rebuild(unsigned long dummy)
@@ -277,6 +278,7 @@
__releases(&f->lock)
{
struct inet_frag_queue *q;
+ int depth = 0;
hlist_for_each_entry(q, &f->hash[hash], list) {
if (q->net == nf && f->match(q, key)) {
@@ -284,9 +286,25 @@
read_unlock(&f->lock);
return q;
}
+ depth++;
}
read_unlock(&f->lock);
- return inet_frag_create(nf, f, key);
+ if (depth <= INETFRAGS_MAXDEPTH)
+ return inet_frag_create(nf, f, key);
+ else
+ return ERR_PTR(-ENOBUFS);
}
EXPORT_SYMBOL(inet_frag_find);
+
+void inet_frag_maybe_warn_overflow(struct inet_frag_queue *q,
+ const char *prefix)
+{
+ static const char msg[] = "inet_frag_find: Fragment hash bucket"
+ " list length grew over limit " __stringify(INETFRAGS_MAXDEPTH)
+ ". Dropping fragment.\n";
+
+ if (PTR_ERR(q) == -ENOBUFS)
+ LIMIT_NETDEBUG(KERN_WARNING "%s%s", prefix, msg);
+}
+EXPORT_SYMBOL(inet_frag_maybe_warn_overflow);
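Taken together, INETFRAGS_MAXDEPTH, the depth counter in inet_frag_find() and inet_frag_maybe_warn_overflow() cap how long a fragment hash chain may grow before new fragments are dropped. The user-space sketch below reproduces that control flow on a plain singly linked bucket; DEMO_MAXDEPTH and struct frag are invented for the example.

/* DEMO_MAXDEPTH and struct frag are invented for this sketch. */
#include <stdio.h>
#include <stdlib.h>

#define DEMO_MAXDEPTH 128

struct frag {
    unsigned int key;
    struct frag *next;
};

static struct frag *frag_find(struct frag **bucket, unsigned int key)
{
    struct frag *q;
    int depth = 0;

    for (q = *bucket; q; q = q->next) {
        if (q->key == key)
            return q;
        depth++;
    }

    /* Under a hash-flooding attack the chain only gets longer, so
     * refuse to create yet another entry past the depth cap. */
    if (depth > DEMO_MAXDEPTH)
        return NULL;

    q = calloc(1, sizeof(*q));
    if (q) {
        q->key = key;
        q->next = *bucket;
        *bucket = q;
    }
    return q;
}

int main(void)
{
    struct frag *bucket = NULL;
    struct frag *q = frag_find(&bucket, 42);

    printf("entry for key 42 %s\n", q ? "created" : "dropped");
    free(q);
    return 0;
}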
diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
index b6d30ac..a6445b8 100644
--- a/net/ipv4/ip_fragment.c
+++ b/net/ipv4/ip_fragment.c
@@ -292,14 +292,11 @@
hash = ipqhashfn(iph->id, iph->saddr, iph->daddr, iph->protocol);
q = inet_frag_find(&net->ipv4.frags, &ip4_frags, &arg, hash);
- if (q == NULL)
- goto out_nomem;
-
+ if (IS_ERR_OR_NULL(q)) {
+ inet_frag_maybe_warn_overflow(q, pr_fmt());
+ return NULL;
+ }
return container_of(q, struct ipq, q);
-
-out_nomem:
- LIMIT_NETDEBUG(KERN_ERR pr_fmt("ip_frag_create: no memory left !\n"));
- return NULL;
}
/* Is the fragment too far ahead to be part of ipq? */
diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
index d0ef0e6..91d66db 100644
--- a/net/ipv4/ip_gre.c
+++ b/net/ipv4/ip_gre.c
@@ -798,10 +798,7 @@
if (dev->header_ops && dev->type == ARPHRD_IPGRE) {
gre_hlen = 0;
- if (skb->protocol == htons(ETH_P_IP))
- tiph = (const struct iphdr *)skb->data;
- else
- tiph = &tunnel->parms.iph;
+ tiph = (const struct iphdr *)skb->data;
} else {
gre_hlen = tunnel->hlen;
tiph = &tunnel->parms.iph;
diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
index 310a364..ec72645 100644
--- a/net/ipv4/ip_options.c
+++ b/net/ipv4/ip_options.c
@@ -370,7 +370,6 @@
}
switch (optptr[3]&0xF) {
case IPOPT_TS_TSONLY:
- opt->ts = optptr - iph;
if (skb)
timeptr = &optptr[optptr[2]-1];
opt->ts_needtime = 1;
@@ -381,7 +380,6 @@
pp_ptr = optptr + 2;
goto error;
}
- opt->ts = optptr - iph;
if (rt) {
spec_dst_fill(&spec_dst, skb);
memcpy(&optptr[optptr[2]-1], &spec_dst, 4);
@@ -396,7 +394,6 @@
pp_ptr = optptr + 2;
goto error;
}
- opt->ts = optptr - iph;
{
__be32 addr;
memcpy(&addr, &optptr[optptr[2]-1], 4);
@@ -429,12 +426,12 @@
pp_ptr = optptr + 3;
goto error;
}
- opt->ts = optptr - iph;
if (skb) {
optptr[3] = (optptr[3]&0xF)|((overflow+1)<<4);
opt->is_changed = 1;
}
}
+ opt->ts = optptr - iph;
break;
case IPOPT_RA:
if (optlen < 4) {
diff --git a/net/ipv4/ipconfig.c b/net/ipv4/ipconfig.c
index 98cbc68..bf6c5cf 100644
--- a/net/ipv4/ipconfig.c
+++ b/net/ipv4/ipconfig.c
@@ -1522,7 +1522,8 @@
}
for (i++; i < CONF_NAMESERVERS_MAX; i++)
if (ic_nameservers[i] != NONE)
- pr_cont(", nameserver%u=%pI4\n", i, &ic_nameservers[i]);
+ pr_cont(", nameserver%u=%pI4", i, &ic_nameservers[i]);
+ pr_cont("\n");
#endif /* !SILENT */
return 0;
diff --git a/net/ipv4/netfilter/Kconfig b/net/ipv4/netfilter/Kconfig
index ce2d43e..0d755c5 100644
--- a/net/ipv4/netfilter/Kconfig
+++ b/net/ipv4/netfilter/Kconfig
@@ -36,19 +36,6 @@
If unsure, say Y.
-config IP_NF_QUEUE
- tristate "IP Userspace queueing via NETLINK (OBSOLETE)"
- depends on NETFILTER_ADVANCED
- help
- Netfilter has the ability to queue packets to user space: the
- netlink device can be used to access them using this driver.
-
- This option enables the old IPv4-only "ip_queue" implementation
- which has been obsoleted by the new "nfnetlink_queue" code (see
- CONFIG_NETFILTER_NETLINK_QUEUE).
-
- To compile it as a module, choose M here. If unsure, say N.
-
config IP_NF_IPTABLES
tristate "IP tables support (required for filtering/masq/NAT)"
default m if NETFILTER_ADVANCED=n
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 47e854f..e220207 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -775,7 +775,7 @@
* Make sure that we have exactly size bytes
* available to the caller, no more, no less.
*/
- skb->avail_size = size;
+ skb->reserved_tailroom = skb->end - skb->tail - size;
return skb;
}
__kfree_skb(skb);
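The tcp.c hunk above replaces the avail_size bookkeeping with reserved_tailroom, recording how much of the skb tailroom is off limits so that exactly size bytes stay available. The arithmetic is easy to get backwards, so here is a self-contained sketch of it, with simplified stand-ins for the sk_buff fields.

/* Simplified stand-ins for the sk_buff fields involved. */
#include <stdio.h>

struct demo_buf {
    unsigned int tail;              /* current end of data */
    unsigned int end;               /* end of the allocated buffer */
    unsigned int reserved_tailroom; /* tailroom the caller may not use */
};

static unsigned int avail(const struct demo_buf *b)
{
    return b->end - b->tail - b->reserved_tailroom;
}

int main(void)
{
    struct demo_buf b = { .tail = 100, .end = 1024 };
    unsigned int want = 512;

    /* Make exactly 'want' bytes available to the caller. */
    b.reserved_tailroom = b.end - b.tail - want;
    printf("available = %u (requested %u)\n", avail(&b), want);

    /* Appending data moves tail; the reservation is untouched and the
     * available room shrinks by exactly the amount appended. */
    b.tail += 100;
    printf("after appending 100 bytes: available = %u\n", avail(&b));
    return 0;
}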
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 0d9bdac..3bd55ba 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -2059,11 +2059,8 @@
if (tcp_is_reno(tp))
tcp_reset_reno_sack(tp);
- if (!how) {
- /* Push undo marker, if it was plain RTO and nothing
- * was retransmitted. */
- tp->undo_marker = tp->snd_una;
- } else {
+ tp->undo_marker = tp->snd_una;
+ if (how) {
tp->sacked_out = 0;
tp->fackets_out = 0;
}
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 4a8ec45..d09203c 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -274,13 +274,6 @@
struct inet_sock *inet = inet_sk(sk);
u32 mtu = tcp_sk(sk)->mtu_info;
- /* We are not interested in TCP_LISTEN and open_requests (SYN-ACKs
- * send out by Linux are always <576bytes so they should go through
- * unfragmented).
- */
- if (sk->sk_state == TCP_LISTEN)
- return;
-
dst = inet_csk_update_pmtu(sk, mtu);
if (!dst)
return;
@@ -408,6 +401,13 @@
goto out;
if (code == ICMP_FRAG_NEEDED) { /* PMTU discovery (RFC1191) */
+ /* We are not interested in TCP_LISTEN and open_requests
+ * (SYN-ACKs sent out by Linux are always < 576 bytes so
+ * they should go through unfragmented).
+ */
+ if (sk->sk_state == TCP_LISTEN)
+ goto out;
+
tp->mtu_info = info;
if (!sock_owned_by_user(sk)) {
tcp_v4_mtu_reduced(sk);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index e2b4461..5d0b438 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1298,7 +1298,6 @@
eat = min_t(int, len, skb_headlen(skb));
if (eat) {
__skb_pull(skb, eat);
- skb->avail_size -= eat;
len -= eat;
if (!len)
return;
@@ -1810,8 +1809,11 @@
goto send_now;
}
- /* Ok, it looks like it is advisable to defer. */
- tp->tso_deferred = 1 | (jiffies << 1);
+ /* Ok, it looks like it is advisable to defer.
+ * Do not rearm the timer if already set to not break TCP ACK clocking.
+ */
+ if (!tp->tso_deferred)
+ tp->tso_deferred = 1 | (jiffies << 1);
return true;
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 265c42c..0a073a2 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1762,9 +1762,16 @@
void udp_destroy_sock(struct sock *sk)
{
+ struct udp_sock *up = udp_sk(sk);
bool slow = lock_sock_fast(sk);
udp_flush_pending_frames(sk);
unlock_sock_fast(sk, slow);
+ if (static_key_false(&udp_encap_needed) && up->encap_type) {
+ void (*encap_destroy)(struct sock *sk);
+ encap_destroy = ACCESS_ONCE(up->encap_destroy);
+ if (encap_destroy)
+ encap_destroy(sk);
+ }
}
/*
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index f2c7e61..26512250 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -4784,26 +4784,20 @@
static int __net_init addrconf_init_net(struct net *net)
{
- int err;
+ int err = -ENOMEM;
struct ipv6_devconf *all, *dflt;
- err = -ENOMEM;
- all = &ipv6_devconf;
- dflt = &ipv6_devconf_dflt;
+ all = kmemdup(&ipv6_devconf, sizeof(ipv6_devconf), GFP_KERNEL);
+ if (all == NULL)
+ goto err_alloc_all;
- if (!net_eq(net, &init_net)) {
- all = kmemdup(all, sizeof(ipv6_devconf), GFP_KERNEL);
- if (all == NULL)
- goto err_alloc_all;
+ dflt = kmemdup(&ipv6_devconf_dflt, sizeof(ipv6_devconf_dflt), GFP_KERNEL);
+ if (dflt == NULL)
+ goto err_alloc_dflt;
- dflt = kmemdup(dflt, sizeof(ipv6_devconf_dflt), GFP_KERNEL);
- if (dflt == NULL)
- goto err_alloc_dflt;
- } else {
- /* these will be inherited by all namespaces */
- dflt->autoconf = ipv6_defaults.autoconf;
- dflt->disable_ipv6 = ipv6_defaults.disable_ipv6;
- }
+ /* these will be inherited by all namespaces */
+ dflt->autoconf = ipv6_defaults.autoconf;
+ dflt->disable_ipv6 = ipv6_defaults.disable_ipv6;
net->ipv6.devconf_all = all;
net->ipv6.devconf_dflt = dflt;
diff --git a/net/ipv6/netfilter/ip6t_NPT.c b/net/ipv6/netfilter/ip6t_NPT.c
index 83acc14..33608c6 100644
--- a/net/ipv6/netfilter/ip6t_NPT.c
+++ b/net/ipv6/netfilter/ip6t_NPT.c
@@ -114,6 +114,7 @@
static struct xt_target ip6t_npt_target_reg[] __read_mostly = {
{
.name = "SNPT",
+ .table = "mangle",
.target = ip6t_snpt_tg,
.targetsize = sizeof(struct ip6t_npt_tginfo),
.checkentry = ip6t_npt_checkentry,
@@ -124,6 +125,7 @@
},
{
.name = "DNPT",
+ .table = "mangle",
.target = ip6t_dnpt_tg,
.targetsize = sizeof(struct ip6t_npt_tginfo),
.checkentry = ip6t_npt_checkentry,
diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
index 54087e9..6700069 100644
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
@@ -14,6 +14,8 @@
* 2 of the License, or (at your option) any later version.
*/
+#define pr_fmt(fmt) "IPv6-nf: " fmt
+
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/string.h>
@@ -180,13 +182,11 @@
q = inet_frag_find(&net->nf_frag.frags, &nf_frags, &arg, hash);
local_bh_enable();
- if (q == NULL)
- goto oom;
-
+ if (IS_ERR_OR_NULL(q)) {
+ inet_frag_maybe_warn_overflow(q, pr_fmt());
+ return NULL;
+ }
return container_of(q, struct frag_queue, q);
-
-oom:
- return NULL;
}
diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
index 3c6a772..196ab93 100644
--- a/net/ipv6/reassembly.c
+++ b/net/ipv6/reassembly.c
@@ -26,6 +26,9 @@
* YOSHIFUJI,H. @USAGI Always remove fragment header to
* calculate ICV correctly.
*/
+
+#define pr_fmt(fmt) "IPv6: " fmt
+
#include <linux/errno.h>
#include <linux/types.h>
#include <linux/string.h>
@@ -185,9 +188,10 @@
hash = inet6_hash_frag(id, src, dst, ip6_frags.rnd);
q = inet_frag_find(&net->ipv6.frags, &ip6_frags, &arg, hash);
- if (q == NULL)
+ if (IS_ERR_OR_NULL(q)) {
+ inet_frag_maybe_warn_overflow(q, pr_fmt());
return NULL;
-
+ }
return container_of(q, struct frag_queue, q);
}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 9b64600..f6d629f 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -389,6 +389,13 @@
}
if (type == ICMPV6_PKT_TOOBIG) {
+ /* We are not interested in TCP_LISTEN and open_requests
+ * (SYN-ACKs sent out by Linux are always < 576 bytes so
+ * they should go through unfragmented).
+ */
+ if (sk->sk_state == TCP_LISTEN)
+ goto out;
+
tp->mtu_info = ntohl(info);
if (!sock_owned_by_user(sk))
tcp_v6_mtu_reduced(sk);
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 599e1ba6..d8e5e85 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -1285,10 +1285,18 @@
void udpv6_destroy_sock(struct sock *sk)
{
+ struct udp_sock *up = udp_sk(sk);
lock_sock(sk);
udp_v6_flush_pending_frames(sk);
release_sock(sk);
+ if (static_key_false(&udpv6_encap_needed) && up->encap_type) {
+ void (*encap_destroy)(struct sock *sk);
+ encap_destroy = ACCESS_ONCE(up->encap_destroy);
+ if (encap_destroy)
+ encap_destroy(sk);
+ }
+
inet6_destroy_sock(sk);
}
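Both udp_destroy_sock() and udpv6_destroy_sock() now invoke an optional encap_destroy callback, which is what lets L2TP tear down tunnel state when an encapsulated socket goes away (see the l2tp_udp_encap_destroy hook later in this series). A minimal sketch of the hook pattern, with invented demo_sock/demo_destroy_sock names rather than the real socket structures:

/* Illustrative demo_sock/demo_destroy_sock types, not the kernel's. */
#include <stdio.h>

struct demo_sock {
    int encap_type;
    void (*encap_destroy)(struct demo_sock *sk);
};

static void tunnel_encap_destroy(struct demo_sock *sk)
{
    printf("tearing down tunnel state bound to socket %p\n", (void *)sk);
}

static void demo_destroy_sock(struct demo_sock *sk)
{
    void (*encap_destroy)(struct demo_sock *sk);

    /* ... flushing pending frames etc. would happen here ... */

    if (sk->encap_type) {
        encap_destroy = sk->encap_destroy;
        if (encap_destroy)
            encap_destroy(sk);
    }
}

int main(void)
{
    struct demo_sock sk = {
        .encap_type = 1,
        .encap_destroy = tunnel_encap_destroy,
    };

    demo_destroy_sock(&sk);
    return 0;
}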
diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c
index d07e3a6..d28e7f0 100644
--- a/net/irda/af_irda.c
+++ b/net/irda/af_irda.c
@@ -2583,8 +2583,10 @@
NULL, NULL, NULL);
/* Check if the we got some results */
- if (!self->cachedaddr)
- return -EAGAIN; /* Didn't find any devices */
+ if (!self->cachedaddr) {
+ err = -EAGAIN; /* Didn't find any devices */
+ goto out;
+ }
daddr = self->cachedaddr;
/* Cleanup */
self->cachedaddr = 0;
diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index d36875f..8aecf5d 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -114,7 +114,6 @@
static void l2tp_session_set_header_len(struct l2tp_session *session, int version);
static void l2tp_tunnel_free(struct l2tp_tunnel *tunnel);
-static void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel);
static inline struct l2tp_net *l2tp_pernet(struct net *net)
{
@@ -192,6 +191,7 @@
} else {
/* Socket is owned by kernelspace */
sk = tunnel->sock;
+ sock_hold(sk);
}
out:
@@ -210,6 +210,7 @@
}
sock_put(sk);
}
+ sock_put(sk);
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_put);
@@ -373,10 +374,8 @@
struct sk_buff *skbp;
struct sk_buff *tmp;
u32 ns = L2TP_SKB_CB(skb)->ns;
- struct l2tp_stats *sstats;
spin_lock_bh(&session->reorder_q.lock);
- sstats = &session->stats;
skb_queue_walk_safe(&session->reorder_q, skbp, tmp) {
if (L2TP_SKB_CB(skbp)->ns > ns) {
__skb_queue_before(&session->reorder_q, skbp, skb);
@@ -384,9 +383,7 @@
"%s: pkt %hu, inserted before %hu, reorder_q len=%d\n",
session->name, ns, L2TP_SKB_CB(skbp)->ns,
skb_queue_len(&session->reorder_q));
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_oos_packets++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_oos_packets);
goto out;
}
}
@@ -403,23 +400,16 @@
{
struct l2tp_tunnel *tunnel = session->tunnel;
int length = L2TP_SKB_CB(skb)->length;
- struct l2tp_stats *tstats, *sstats;
/* We're about to requeue the skb, so return resources
* to its current owner (a socket receive buffer).
*/
skb_orphan(skb);
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- sstats = &session->stats;
- u64_stats_update_begin(&sstats->syncp);
- tstats->rx_packets++;
- tstats->rx_bytes += length;
- sstats->rx_packets++;
- sstats->rx_bytes += length;
- u64_stats_update_end(&tstats->syncp);
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&tunnel->stats.rx_packets);
+ atomic_long_add(length, &tunnel->stats.rx_bytes);
+ atomic_long_inc(&session->stats.rx_packets);
+ atomic_long_add(length, &session->stats.rx_bytes);
if (L2TP_SKB_CB(skb)->has_seq) {
/* Bump our Nr */
@@ -450,7 +440,6 @@
{
struct sk_buff *skb;
struct sk_buff *tmp;
- struct l2tp_stats *sstats;
/* If the pkt at the head of the queue has the nr that we
* expect to send up next, dequeue it and any other
@@ -458,13 +447,10 @@
*/
start:
spin_lock_bh(&session->reorder_q.lock);
- sstats = &session->stats;
skb_queue_walk_safe(&session->reorder_q, skb, tmp) {
if (time_after(jiffies, L2TP_SKB_CB(skb)->expires)) {
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- sstats->rx_errors++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
+ atomic_long_inc(&session->stats.rx_errors);
l2tp_dbg(session, L2TP_MSG_SEQ,
"%s: oos pkt %u len %d discarded (too old), waiting for %u, reorder_q_len=%d\n",
session->name, L2TP_SKB_CB(skb)->ns,
@@ -623,7 +609,6 @@
struct l2tp_tunnel *tunnel = session->tunnel;
int offset;
u32 ns, nr;
- struct l2tp_stats *sstats = &session->stats;
/* The ref count is increased since we now hold a pointer to
* the session. Take care to decrement the refcnt when exiting
@@ -640,9 +625,7 @@
"%s: cookie mismatch (%u/%u). Discarding.\n",
tunnel->name, tunnel->tunnel_id,
session->session_id);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_cookie_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_cookie_discards);
goto discard;
}
ptr += session->peer_cookie_len;
@@ -711,9 +694,7 @@
l2tp_warn(session, L2TP_MSG_SEQ,
"%s: recv data has no seq numbers when required. Discarding.\n",
session->name);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
goto discard;
}
@@ -732,9 +713,7 @@
l2tp_warn(session, L2TP_MSG_SEQ,
"%s: recv data has no seq numbers when required. Discarding.\n",
session->name);
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
goto discard;
}
}
@@ -788,9 +767,7 @@
* packets
*/
if (L2TP_SKB_CB(skb)->ns != session->nr) {
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_seq_discards++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_seq_discards);
l2tp_dbg(session, L2TP_MSG_SEQ,
"%s: oos pkt %u len %d discarded, waiting for %u, reorder_q_len=%d\n",
session->name, L2TP_SKB_CB(skb)->ns,
@@ -816,9 +793,7 @@
return;
discard:
- u64_stats_update_begin(&sstats->syncp);
- sstats->rx_errors++;
- u64_stats_update_end(&sstats->syncp);
+ atomic_long_inc(&session->stats.rx_errors);
kfree_skb(skb);
if (session->deref)
@@ -828,6 +803,23 @@
}
EXPORT_SYMBOL(l2tp_recv_common);
+/* Drop skbs from the session's reorder_q
+ */
+int l2tp_session_queue_purge(struct l2tp_session *session)
+{
+ struct sk_buff *skb = NULL;
+ BUG_ON(!session);
+ BUG_ON(session->magic != L2TP_SESSION_MAGIC);
+ while ((skb = skb_dequeue(&session->reorder_q))) {
+ atomic_long_inc(&session->stats.rx_errors);
+ kfree_skb(skb);
+ if (session->deref)
+ (*session->deref)(session);
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(l2tp_session_queue_purge);
+
/* Internal UDP receive frame. Do the real work of receiving an L2TP data frame
* here. The skb is not on a list when we get here.
* Returns 0 if the packet was a data packet and was successfully passed on.
@@ -843,7 +835,6 @@
u32 tunnel_id, session_id;
u16 version;
int length;
- struct l2tp_stats *tstats;
if (tunnel->sock && l2tp_verify_udp_checksum(tunnel->sock, skb))
goto discard_bad_csum;
@@ -932,10 +923,7 @@
discard_bad_csum:
LIMIT_NETDEBUG("%s: UDP: bad checksum\n", tunnel->name);
UDP_INC_STATS_USER(tunnel->l2tp_net, UDP_MIB_INERRORS, 0);
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- tstats->rx_errors++;
- u64_stats_update_end(&tstats->syncp);
+ atomic_long_inc(&tunnel->stats.rx_errors);
kfree_skb(skb);
return 0;
@@ -1062,7 +1050,6 @@
struct l2tp_tunnel *tunnel = session->tunnel;
unsigned int len = skb->len;
int error;
- struct l2tp_stats *tstats, *sstats;
/* Debug */
if (session->send_seq)
@@ -1091,21 +1078,15 @@
error = ip_queue_xmit(skb, fl);
/* Update stats */
- tstats = &tunnel->stats;
- u64_stats_update_begin(&tstats->syncp);
- sstats = &session->stats;
- u64_stats_update_begin(&sstats->syncp);
if (error >= 0) {
- tstats->tx_packets++;
- tstats->tx_bytes += len;
- sstats->tx_packets++;
- sstats->tx_bytes += len;
+ atomic_long_inc(&tunnel->stats.tx_packets);
+ atomic_long_add(len, &tunnel->stats.tx_bytes);
+ atomic_long_inc(&session->stats.tx_packets);
+ atomic_long_add(len, &session->stats.tx_bytes);
} else {
- tstats->tx_errors++;
- sstats->tx_errors++;
+ atomic_long_inc(&tunnel->stats.tx_errors);
+ atomic_long_inc(&session->stats.tx_errors);
}
- u64_stats_update_end(&tstats->syncp);
- u64_stats_update_end(&sstats->syncp);
return 0;
}
@@ -1282,6 +1263,7 @@
/* No longer an encapsulation socket. See net/ipv4/udp.c */
(udp_sk(sk))->encap_type = 0;
(udp_sk(sk))->encap_rcv = NULL;
+ (udp_sk(sk))->encap_destroy = NULL;
break;
case L2TP_ENCAPTYPE_IP:
break;
@@ -1311,7 +1293,7 @@
/* When the tunnel is closed, all the attached sessions need to go too.
*/
-static void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel)
+void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel)
{
int hash;
struct hlist_node *walk;
@@ -1334,25 +1316,13 @@
hlist_del_init(&session->hlist);
- /* Since we should hold the sock lock while
- * doing any unbinding, we need to release the
- * lock we're holding before taking that lock.
- * Hold a reference to the sock so it doesn't
- * disappear as we're jumping between locks.
- */
if (session->ref != NULL)
(*session->ref)(session);
write_unlock_bh(&tunnel->hlist_lock);
- if (tunnel->version != L2TP_HDR_VER_2) {
- struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
-
- spin_lock_bh(&pn->l2tp_session_hlist_lock);
- hlist_del_init_rcu(&session->global_hlist);
- spin_unlock_bh(&pn->l2tp_session_hlist_lock);
- synchronize_rcu();
- }
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
if (session->session_close != NULL)
(*session->session_close)(session);
@@ -1360,6 +1330,8 @@
if (session->deref != NULL)
(*session->deref)(session);
+ l2tp_session_dec_refcount(session);
+
write_lock_bh(&tunnel->hlist_lock);
/* Now restart from the beginning of this hash
@@ -1372,6 +1344,17 @@
}
write_unlock_bh(&tunnel->hlist_lock);
}
+EXPORT_SYMBOL_GPL(l2tp_tunnel_closeall);
+
+/* Tunnel socket destroy hook for UDP encapsulation */
+static void l2tp_udp_encap_destroy(struct sock *sk)
+{
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+}
/* Really kill the tunnel.
* Come here only when all sessions have been cleared from the tunnel.
@@ -1397,19 +1380,21 @@
return;
sock = sk->sk_socket;
- BUG_ON(!sock);
- /* If the tunnel socket was created directly by the kernel, use the
- * sk_* API to release the socket now. Otherwise go through the
- * inet_* layer to shut the socket down, and let userspace close it.
+ /* If the tunnel socket was created by userspace, then go through the
+ * inet layer to shut the socket down, and let userspace close it.
+ * Otherwise, if we created the socket directly within the kernel, use
+ * the sk API to release it here.
* In either case the tunnel resources are freed in the socket
* destructor when the tunnel socket goes away.
*/
- if (sock->file == NULL) {
- kernel_sock_shutdown(sock, SHUT_RDWR);
- sk_release_kernel(sk);
+ if (tunnel->fd >= 0) {
+ if (sock)
+ inet_shutdown(sock, 2);
} else {
- inet_shutdown(sock, 2);
+ if (sock)
+ kernel_sock_shutdown(sock, SHUT_RDWR);
+ sk_release_kernel(sk);
}
l2tp_tunnel_sock_put(sk);
@@ -1668,6 +1653,7 @@
/* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
udp_sk(sk)->encap_type = UDP_ENCAP_L2TPINUDP;
udp_sk(sk)->encap_rcv = l2tp_udp_encap_recv;
+ udp_sk(sk)->encap_destroy = l2tp_udp_encap_destroy;
#if IS_ENABLED(CONFIG_IPV6)
if (sk->sk_family == PF_INET6)
udpv6_encap_enable();
@@ -1723,6 +1709,7 @@
*/
int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
{
+ l2tp_tunnel_closeall(tunnel);
return (false == queue_work(l2tp_wq, &tunnel->del_work));
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
@@ -1731,37 +1718,15 @@
*/
void l2tp_session_free(struct l2tp_session *session)
{
- struct l2tp_tunnel *tunnel;
+ struct l2tp_tunnel *tunnel = session->tunnel;
BUG_ON(atomic_read(&session->ref_count) != 0);
- tunnel = session->tunnel;
- if (tunnel != NULL) {
+ if (tunnel) {
BUG_ON(tunnel->magic != L2TP_TUNNEL_MAGIC);
-
- /* Delete the session from the hash */
- write_lock_bh(&tunnel->hlist_lock);
- hlist_del_init(&session->hlist);
- write_unlock_bh(&tunnel->hlist_lock);
-
- /* Unlink from the global hash if not L2TPv2 */
- if (tunnel->version != L2TP_HDR_VER_2) {
- struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
-
- spin_lock_bh(&pn->l2tp_session_hlist_lock);
- hlist_del_init_rcu(&session->global_hlist);
- spin_unlock_bh(&pn->l2tp_session_hlist_lock);
- synchronize_rcu();
- }
-
if (session->session_id != 0)
atomic_dec(&l2tp_session_count);
-
sock_put(tunnel->sock);
-
- /* This will delete the tunnel context if this
- * is the last session on the tunnel.
- */
session->tunnel = NULL;
l2tp_tunnel_dec_refcount(tunnel);
}
@@ -1772,21 +1737,52 @@
}
EXPORT_SYMBOL_GPL(l2tp_session_free);
+/* Remove an l2tp session from l2tp_core's hash lists.
+ * Provides a tidy-up interface for pseudowire code which can't just route all
+ * shutdown via l2tp_session_delete and a pseudowire-specific session_close
+ * callback.
+ */
+void __l2tp_session_unhash(struct l2tp_session *session)
+{
+ struct l2tp_tunnel *tunnel = session->tunnel;
+
+ /* Remove the session from core hashes */
+ if (tunnel) {
+ /* Remove from the per-tunnel hash */
+ write_lock_bh(&tunnel->hlist_lock);
+ hlist_del_init(&session->hlist);
+ write_unlock_bh(&tunnel->hlist_lock);
+
+ /* For L2TPv3 we have a per-net hash: remove from there, too */
+ if (tunnel->version != L2TP_HDR_VER_2) {
+ struct l2tp_net *pn = l2tp_pernet(tunnel->l2tp_net);
+ spin_lock_bh(&pn->l2tp_session_hlist_lock);
+ hlist_del_init_rcu(&session->global_hlist);
+ spin_unlock_bh(&pn->l2tp_session_hlist_lock);
+ synchronize_rcu();
+ }
+ }
+}
+EXPORT_SYMBOL_GPL(__l2tp_session_unhash);
+
/* This function is used by the netlink SESSION_DELETE command and by
pseudowire modules.
*/
int l2tp_session_delete(struct l2tp_session *session)
{
+ if (session->ref)
+ (*session->ref)(session);
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
if (session->session_close != NULL)
(*session->session_close)(session);
-
+ if (session->deref)
+ (*session->deref)(session);
l2tp_session_dec_refcount(session);
-
return 0;
}
EXPORT_SYMBOL_GPL(l2tp_session_delete);
-
/* We come here whenever a session's send_seq, cookie_len or
* l2specific_len parameters are set.
*/
diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
index 8eb8f1d..485a490 100644
--- a/net/l2tp/l2tp_core.h
+++ b/net/l2tp/l2tp_core.h
@@ -36,16 +36,15 @@
struct sk_buff;
struct l2tp_stats {
- u64 tx_packets;
- u64 tx_bytes;
- u64 tx_errors;
- u64 rx_packets;
- u64 rx_bytes;
- u64 rx_seq_discards;
- u64 rx_oos_packets;
- u64 rx_errors;
- u64 rx_cookie_discards;
- struct u64_stats_sync syncp;
+ atomic_long_t tx_packets;
+ atomic_long_t tx_bytes;
+ atomic_long_t tx_errors;
+ atomic_long_t rx_packets;
+ atomic_long_t rx_bytes;
+ atomic_long_t rx_seq_discards;
+ atomic_long_t rx_oos_packets;
+ atomic_long_t rx_errors;
+ atomic_long_t rx_cookie_discards;
};
struct l2tp_tunnel;
@@ -240,11 +239,14 @@
extern struct l2tp_tunnel *l2tp_tunnel_find_nth(struct net *net, int nth);
extern int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32 peer_tunnel_id, struct l2tp_tunnel_cfg *cfg, struct l2tp_tunnel **tunnelp);
+extern void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel);
extern int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel);
extern struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunnel, u32 session_id, u32 peer_session_id, struct l2tp_session_cfg *cfg);
+extern void __l2tp_session_unhash(struct l2tp_session *session);
extern int l2tp_session_delete(struct l2tp_session *session);
extern void l2tp_session_free(struct l2tp_session *session);
extern void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb, unsigned char *ptr, unsigned char *optr, u16 hdrflags, int length, int (*payload_hook)(struct sk_buff *skb));
+extern int l2tp_session_queue_purge(struct l2tp_session *session);
extern int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb);
extern int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len);
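The l2tp_stats conversion above trades the u64_stats_sync/seqcount scheme for plain atomic long counters: writers increment from any context and readers simply load, at the cost of 32-bit wrap on 32-bit hosts. The sketch below shows the resulting usage pattern in user space, with C11 atomics standing in for the kernel's atomic_long_t.

/* User-space analogue using C11 atomics in place of atomic_long_t. */
#include <stdatomic.h>
#include <stdio.h>

struct demo_stats {
    atomic_long tx_packets;
    atomic_long tx_bytes;
    atomic_long rx_errors;
};

static struct demo_stats st;

static void account_tx(long len)
{
    /* Writers on any thread just increment; no seqcount section. */
    atomic_fetch_add(&st.tx_packets, 1);
    atomic_fetch_add(&st.tx_bytes, len);
}

int main(void)
{
    account_tx(1500);
    account_tx(60);

    /* Readers just load; no retry loop as with u64_stats_fetch_*(). */
    printf("tx %ld packets / %ld bytes, %ld rx errors\n",
           atomic_load(&st.tx_packets),
           atomic_load(&st.tx_bytes),
           atomic_load(&st.rx_errors));
    return 0;
}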
diff --git a/net/l2tp/l2tp_debugfs.c b/net/l2tp/l2tp_debugfs.c
index c3813bc..072d720 100644
--- a/net/l2tp/l2tp_debugfs.c
+++ b/net/l2tp/l2tp_debugfs.c
@@ -146,14 +146,14 @@
tunnel->sock ? atomic_read(&tunnel->sock->sk_refcnt) : 0,
atomic_read(&tunnel->ref_count));
- seq_printf(m, " %08x rx %llu/%llu/%llu rx %llu/%llu/%llu\n",
+ seq_printf(m, " %08x rx %ld/%ld/%ld rx %ld/%ld/%ld\n",
tunnel->debug,
- (unsigned long long)tunnel->stats.tx_packets,
- (unsigned long long)tunnel->stats.tx_bytes,
- (unsigned long long)tunnel->stats.tx_errors,
- (unsigned long long)tunnel->stats.rx_packets,
- (unsigned long long)tunnel->stats.rx_bytes,
- (unsigned long long)tunnel->stats.rx_errors);
+ atomic_long_read(&tunnel->stats.tx_packets),
+ atomic_long_read(&tunnel->stats.tx_bytes),
+ atomic_long_read(&tunnel->stats.tx_errors),
+ atomic_long_read(&tunnel->stats.rx_packets),
+ atomic_long_read(&tunnel->stats.rx_bytes),
+ atomic_long_read(&tunnel->stats.rx_errors));
if (tunnel->show != NULL)
tunnel->show(m, tunnel);
@@ -203,14 +203,14 @@
seq_printf(m, "\n");
}
- seq_printf(m, " %hu/%hu tx %llu/%llu/%llu rx %llu/%llu/%llu\n",
+ seq_printf(m, " %hu/%hu tx %ld/%ld/%ld rx %ld/%ld/%ld\n",
session->nr, session->ns,
- (unsigned long long)session->stats.tx_packets,
- (unsigned long long)session->stats.tx_bytes,
- (unsigned long long)session->stats.tx_errors,
- (unsigned long long)session->stats.rx_packets,
- (unsigned long long)session->stats.rx_bytes,
- (unsigned long long)session->stats.rx_errors);
+ atomic_long_read(&session->stats.tx_packets),
+ atomic_long_read(&session->stats.tx_bytes),
+ atomic_long_read(&session->stats.tx_errors),
+ atomic_long_read(&session->stats.rx_packets),
+ atomic_long_read(&session->stats.rx_bytes),
+ atomic_long_read(&session->stats.rx_errors));
if (session->show != NULL)
session->show(m, session);
diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
index 7f41b70..571db8d 100644
--- a/net/l2tp/l2tp_ip.c
+++ b/net/l2tp/l2tp_ip.c
@@ -228,10 +228,16 @@
static void l2tp_ip_destroy_sock(struct sock *sk)
{
struct sk_buff *skb;
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
while ((skb = __skb_dequeue_tail(&sk->sk_write_queue)) != NULL)
kfree_skb(skb);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+
sk_refcnt_debug_dec(sk);
}
diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
index 41f2f81..c74f5a9 100644
--- a/net/l2tp/l2tp_ip6.c
+++ b/net/l2tp/l2tp_ip6.c
@@ -241,10 +241,17 @@
static void l2tp_ip6_destroy_sock(struct sock *sk)
{
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+
lock_sock(sk);
ip6_flush_pending_frames(sk);
release_sock(sk);
+ if (tunnel) {
+ l2tp_tunnel_closeall(tunnel);
+ sock_put(sk);
+ }
+
inet6_destroy_sock(sk);
}
diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
index c1bab22..0825ff2 100644
--- a/net/l2tp/l2tp_netlink.c
+++ b/net/l2tp/l2tp_netlink.c
@@ -246,8 +246,6 @@
#if IS_ENABLED(CONFIG_IPV6)
struct ipv6_pinfo *np = NULL;
#endif
- struct l2tp_stats stats;
- unsigned int start;
hdr = genlmsg_put(skb, portid, seq, &l2tp_nl_family, flags,
L2TP_CMD_TUNNEL_GET);
@@ -265,28 +263,22 @@
if (nest == NULL)
goto nla_put_failure;
- do {
- start = u64_stats_fetch_begin(&tunnel->stats.syncp);
- stats.tx_packets = tunnel->stats.tx_packets;
- stats.tx_bytes = tunnel->stats.tx_bytes;
- stats.tx_errors = tunnel->stats.tx_errors;
- stats.rx_packets = tunnel->stats.rx_packets;
- stats.rx_bytes = tunnel->stats.rx_bytes;
- stats.rx_errors = tunnel->stats.rx_errors;
- stats.rx_seq_discards = tunnel->stats.rx_seq_discards;
- stats.rx_oos_packets = tunnel->stats.rx_oos_packets;
- } while (u64_stats_fetch_retry(&tunnel->stats.syncp, start));
-
- if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS, stats.tx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_TX_BYTES, stats.tx_bytes) ||
- nla_put_u64(skb, L2TP_ATTR_TX_ERRORS, stats.tx_errors) ||
- nla_put_u64(skb, L2TP_ATTR_RX_PACKETS, stats.rx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_BYTES, stats.rx_bytes) ||
+ if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
+ atomic_long_read(&tunnel->stats.tx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
+ atomic_long_read(&tunnel->stats.tx_bytes)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
+ atomic_long_read(&tunnel->stats.tx_errors)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
+ atomic_long_read(&tunnel->stats.rx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
+ atomic_long_read(&tunnel->stats.rx_bytes)) ||
nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
- stats.rx_seq_discards) ||
+ atomic_long_read(&tunnel->stats.rx_seq_discards)) ||
nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
- stats.rx_oos_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_ERRORS, stats.rx_errors))
+ atomic_long_read(&tunnel->stats.rx_oos_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
+ atomic_long_read(&tunnel->stats.rx_errors)))
goto nla_put_failure;
nla_nest_end(skb, nest);
@@ -612,8 +604,6 @@
struct nlattr *nest;
struct l2tp_tunnel *tunnel = session->tunnel;
struct sock *sk = NULL;
- struct l2tp_stats stats;
- unsigned int start;
sk = tunnel->sock;
@@ -656,28 +646,22 @@
if (nest == NULL)
goto nla_put_failure;
- do {
- start = u64_stats_fetch_begin(&session->stats.syncp);
- stats.tx_packets = session->stats.tx_packets;
- stats.tx_bytes = session->stats.tx_bytes;
- stats.tx_errors = session->stats.tx_errors;
- stats.rx_packets = session->stats.rx_packets;
- stats.rx_bytes = session->stats.rx_bytes;
- stats.rx_errors = session->stats.rx_errors;
- stats.rx_seq_discards = session->stats.rx_seq_discards;
- stats.rx_oos_packets = session->stats.rx_oos_packets;
- } while (u64_stats_fetch_retry(&session->stats.syncp, start));
-
- if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS, stats.tx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_TX_BYTES, stats.tx_bytes) ||
- nla_put_u64(skb, L2TP_ATTR_TX_ERRORS, stats.tx_errors) ||
- nla_put_u64(skb, L2TP_ATTR_RX_PACKETS, stats.rx_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_BYTES, stats.rx_bytes) ||
+ if (nla_put_u64(skb, L2TP_ATTR_TX_PACKETS,
+ atomic_long_read(&session->stats.tx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_BYTES,
+ atomic_long_read(&session->stats.tx_bytes)) ||
+ nla_put_u64(skb, L2TP_ATTR_TX_ERRORS,
+ atomic_long_read(&session->stats.tx_errors)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_PACKETS,
+ atomic_long_read(&session->stats.rx_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_BYTES,
+ atomic_long_read(&session->stats.rx_bytes)) ||
nla_put_u64(skb, L2TP_ATTR_RX_SEQ_DISCARDS,
- stats.rx_seq_discards) ||
+ atomic_long_read(&session->stats.rx_seq_discards)) ||
nla_put_u64(skb, L2TP_ATTR_RX_OOS_PACKETS,
- stats.rx_oos_packets) ||
- nla_put_u64(skb, L2TP_ATTR_RX_ERRORS, stats.rx_errors))
+ atomic_long_read(&session->stats.rx_oos_packets)) ||
+ nla_put_u64(skb, L2TP_ATTR_RX_ERRORS,
+ atomic_long_read(&session->stats.rx_errors)))
goto nla_put_failure;
nla_nest_end(skb, nest);
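The netlink dump side follows the same idea: with each counter readable atomically, the snapshot-and-retry copy into a local struct goes away and the attributes are filled straight from the live counters. A hedged sketch, where the attribute id and helper name are invented for illustration:

#include <linux/atomic.h>
#include <net/netlink.h>

enum { DEMO_ATTR_UNSPEC, DEMO_ATTR_RX_PACKETS, __DEMO_ATTR_MAX };

static int demo_fill_stats(struct sk_buff *skb, atomic_long_t *rx_packets)
{
	/* nla_put_u64() returns non-zero when the skb runs out of room */
	if (nla_put_u64(skb, DEMO_ATTR_RX_PACKETS,
			atomic_long_read(rx_packets)))
		return -EMSGSIZE;
	return 0;
}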
diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
index 6a53371..637a341 100644
--- a/net/l2tp/l2tp_ppp.c
+++ b/net/l2tp/l2tp_ppp.c
@@ -97,6 +97,7 @@
#include <net/ip.h>
#include <net/udp.h>
#include <net/xfrm.h>
+#include <net/inet_common.h>
#include <asm/byteorder.h>
#include <linux/atomic.h>
@@ -259,7 +260,7 @@
session->name);
/* Not bound. Nothing we can do, so discard. */
- session->stats.rx_errors++;
+ atomic_long_inc(&session->stats.rx_errors);
kfree_skb(skb);
}
@@ -447,34 +448,16 @@
{
struct pppol2tp_session *ps = l2tp_session_priv(session);
struct sock *sk = ps->sock;
- struct sk_buff *skb;
+ struct socket *sock = sk->sk_socket;
BUG_ON(session->magic != L2TP_SESSION_MAGIC);
- if (session->session_id == 0)
- goto out;
- if (sk != NULL) {
- lock_sock(sk);
-
- if (sk->sk_state & (PPPOX_CONNECTED | PPPOX_BOUND)) {
- pppox_unbind_sock(sk);
- sk->sk_state = PPPOX_DEAD;
- sk->sk_state_change(sk);
- }
-
- /* Purge any queued data */
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
- while ((skb = skb_dequeue(&session->reorder_q))) {
- kfree_skb(skb);
- sock_put(sk);
- }
-
- release_sock(sk);
+ if (sock) {
+ inet_shutdown(sock, 2);
+ /* Don't let the session go away before our socket does */
+ l2tp_session_inc_refcount(session);
}
-
-out:
return;
}
@@ -483,19 +466,12 @@
*/
static void pppol2tp_session_destruct(struct sock *sk)
{
- struct l2tp_session *session;
-
- if (sk->sk_user_data != NULL) {
- session = sk->sk_user_data;
- if (session == NULL)
- goto out;
-
+ struct l2tp_session *session = sk->sk_user_data;
+ if (session) {
sk->sk_user_data = NULL;
BUG_ON(session->magic != L2TP_SESSION_MAGIC);
l2tp_session_dec_refcount(session);
}
-
-out:
return;
}
@@ -525,16 +501,13 @@
session = pppol2tp_sock_to_session(sk);
/* Purge any queued data */
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
if (session != NULL) {
- struct sk_buff *skb;
- while ((skb = skb_dequeue(&session->reorder_q))) {
- kfree_skb(skb);
- sock_put(sk);
- }
+ __l2tp_session_unhash(session);
+ l2tp_session_queue_purge(session);
sock_put(sk);
}
+ skb_queue_purge(&sk->sk_receive_queue);
+ skb_queue_purge(&sk->sk_write_queue);
release_sock(sk);
@@ -880,18 +853,6 @@
return error;
}
-/* Called when deleting sessions via the netlink interface.
- */
-static int pppol2tp_session_delete(struct l2tp_session *session)
-{
- struct pppol2tp_session *ps = l2tp_session_priv(session);
-
- if (ps->sock == NULL)
- l2tp_session_dec_refcount(session);
-
- return 0;
-}
-
#endif /* CONFIG_L2TP_V3 */
/* getname() support.
@@ -1025,14 +986,14 @@
static void pppol2tp_copy_stats(struct pppol2tp_ioc_stats *dest,
struct l2tp_stats *stats)
{
- dest->tx_packets = stats->tx_packets;
- dest->tx_bytes = stats->tx_bytes;
- dest->tx_errors = stats->tx_errors;
- dest->rx_packets = stats->rx_packets;
- dest->rx_bytes = stats->rx_bytes;
- dest->rx_seq_discards = stats->rx_seq_discards;
- dest->rx_oos_packets = stats->rx_oos_packets;
- dest->rx_errors = stats->rx_errors;
+ dest->tx_packets = atomic_long_read(&stats->tx_packets);
+ dest->tx_bytes = atomic_long_read(&stats->tx_bytes);
+ dest->tx_errors = atomic_long_read(&stats->tx_errors);
+ dest->rx_packets = atomic_long_read(&stats->rx_packets);
+ dest->rx_bytes = atomic_long_read(&stats->rx_bytes);
+ dest->rx_seq_discards = atomic_long_read(&stats->rx_seq_discards);
+ dest->rx_oos_packets = atomic_long_read(&stats->rx_oos_packets);
+ dest->rx_errors = atomic_long_read(&stats->rx_errors);
}
/* Session ioctl helper.
@@ -1666,14 +1627,14 @@
tunnel->name,
(tunnel == tunnel->sock->sk_user_data) ? 'Y' : 'N',
atomic_read(&tunnel->ref_count) - 1);
- seq_printf(m, " %08x %llu/%llu/%llu %llu/%llu/%llu\n",
+ seq_printf(m, " %08x %ld/%ld/%ld %ld/%ld/%ld\n",
tunnel->debug,
- (unsigned long long)tunnel->stats.tx_packets,
- (unsigned long long)tunnel->stats.tx_bytes,
- (unsigned long long)tunnel->stats.tx_errors,
- (unsigned long long)tunnel->stats.rx_packets,
- (unsigned long long)tunnel->stats.rx_bytes,
- (unsigned long long)tunnel->stats.rx_errors);
+ atomic_long_read(&tunnel->stats.tx_packets),
+ atomic_long_read(&tunnel->stats.tx_bytes),
+ atomic_long_read(&tunnel->stats.tx_errors),
+ atomic_long_read(&tunnel->stats.rx_packets),
+ atomic_long_read(&tunnel->stats.rx_bytes),
+ atomic_long_read(&tunnel->stats.rx_errors));
}
static void pppol2tp_seq_session_show(struct seq_file *m, void *v)
@@ -1708,14 +1669,14 @@
session->lns_mode ? "LNS" : "LAC",
session->debug,
jiffies_to_msecs(session->reorder_timeout));
- seq_printf(m, " %hu/%hu %llu/%llu/%llu %llu/%llu/%llu\n",
+ seq_printf(m, " %hu/%hu %ld/%ld/%ld %ld/%ld/%ld\n",
session->nr, session->ns,
- (unsigned long long)session->stats.tx_packets,
- (unsigned long long)session->stats.tx_bytes,
- (unsigned long long)session->stats.tx_errors,
- (unsigned long long)session->stats.rx_packets,
- (unsigned long long)session->stats.rx_bytes,
- (unsigned long long)session->stats.rx_errors);
+ atomic_long_read(&session->stats.tx_packets),
+ atomic_long_read(&session->stats.tx_bytes),
+ atomic_long_read(&session->stats.tx_errors),
+ atomic_long_read(&session->stats.rx_packets),
+ atomic_long_read(&session->stats.rx_bytes),
+ atomic_long_read(&session->stats.rx_errors));
if (po)
seq_printf(m, " interface %s\n", ppp_dev_name(&po->chan));
@@ -1839,7 +1800,7 @@
static const struct l2tp_nl_cmd_ops pppol2tp_nl_cmd_ops = {
.session_create = pppol2tp_session_create,
- .session_delete = pppol2tp_session_delete,
+ .session_delete = l2tp_session_delete,
};
#endif /* CONFIG_L2TP_V3 */
diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
index 47edf5a..61f49d2 100644
--- a/net/netfilter/ipvs/ip_vs_core.c
+++ b/net/netfilter/ipvs/ip_vs_core.c
@@ -1394,10 +1394,8 @@
skb_reset_network_header(skb);
IP_VS_DBG(12, "ICMP for IPIP %pI4->%pI4: mtu=%u\n",
&ip_hdr(skb)->saddr, &ip_hdr(skb)->daddr, mtu);
- rcu_read_lock();
ipv4_update_pmtu(skb, dev_net(skb->dev),
mtu, 0, 0, 0, 0);
- rcu_read_unlock();
/* Client uses PMTUD? */
if (!(cih->frag_off & htons(IP_DF)))
goto ignore_ipip;
@@ -1577,7 +1575,8 @@
}
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
ip_vs_fill_iph_skb(af, skb, &iph);
@@ -1654,7 +1653,6 @@
}
IP_VS_DBG_PKT(11, af, pp, skb, 0, "Incoming packet");
- ipvs = net_ipvs(net);
/* Check the server status */
if (cp->dest && !(cp->dest->flags & IP_VS_DEST_F_AVAILABLE)) {
/* the destination server is not available */
@@ -1815,13 +1813,15 @@
{
int r;
struct net *net;
+ struct netns_ipvs *ipvs;
if (ip_hdr(skb)->protocol != IPPROTO_ICMP)
return NF_ACCEPT;
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
return ip_vs_in_icmp(skb, &r, hooknum);
@@ -1835,6 +1835,7 @@
{
int r;
struct net *net;
+ struct netns_ipvs *ipvs;
struct ip_vs_iphdr iphdr;
ip_vs_fill_iph_skb(AF_INET6, skb, &iphdr);
@@ -1843,7 +1844,8 @@
/* ipvs enabled in this netns ? */
net = skb_net(skb);
- if (!net_ipvs(net)->enable)
+ ipvs = net_ipvs(net);
+ if (unlikely(sysctl_backup_only(ipvs) || !ipvs->enable))
return NF_ACCEPT;
return ip_vs_in_icmp_v6(skb, &r, hooknum, &iphdr);
diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
index c68198b..9e2d1cc 100644
--- a/net/netfilter/ipvs/ip_vs_ctl.c
+++ b/net/netfilter/ipvs/ip_vs_ctl.c
@@ -1808,6 +1808,12 @@
.mode = 0644,
.proc_handler = proc_dointvec,
},
+ {
+ .procname = "backup_only",
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
#ifdef CONFIG_IP_VS_DEBUG
{
.procname = "debug_level",
@@ -3741,6 +3747,7 @@
tbl[idx++].data = &ipvs->sysctl_nat_icmp_send;
ipvs->sysctl_pmtu_disc = 1;
tbl[idx++].data = &ipvs->sysctl_pmtu_disc;
+ tbl[idx++].data = &ipvs->sysctl_backup_only;
ipvs->sysctl_hdr = register_net_sysctl(net, "net/ipv4/vs", tbl);
diff --git a/net/netfilter/ipvs/ip_vs_proto_sctp.c b/net/netfilter/ipvs/ip_vs_proto_sctp.c
index ae8ec6f..cd1d729 100644
--- a/net/netfilter/ipvs/ip_vs_proto_sctp.c
+++ b/net/netfilter/ipvs/ip_vs_proto_sctp.c
@@ -906,7 +906,7 @@
sctp_chunkhdr_t _sctpch, *sch;
unsigned char chunk_type;
int event, next_state;
- int ihl;
+ int ihl, cofs;
#ifdef CONFIG_IP_VS_IPV6
ihl = cp->af == AF_INET ? ip_hdrlen(skb) : sizeof(struct ipv6hdr);
@@ -914,8 +914,8 @@
ihl = ip_hdrlen(skb);
#endif
- sch = skb_header_pointer(skb, ihl + sizeof(sctp_sctphdr_t),
- sizeof(_sctpch), &_sctpch);
+ cofs = ihl + sizeof(sctp_sctphdr_t);
+ sch = skb_header_pointer(skb, cofs, sizeof(_sctpch), &_sctpch);
if (sch == NULL)
return;
@@ -933,10 +933,12 @@
*/
if ((sch->type == SCTP_CID_COOKIE_ECHO) ||
(sch->type == SCTP_CID_COOKIE_ACK)) {
- sch = skb_header_pointer(skb, (ihl + sizeof(sctp_sctphdr_t) +
- sch->length), sizeof(_sctpch), &_sctpch);
- if (sch) {
- if (sch->type == SCTP_CID_ABORT)
+ int clen = ntohs(sch->length);
+
+ if (clen >= sizeof(sctp_chunkhdr_t)) {
+ sch = skb_header_pointer(skb, cofs + ALIGN(clen, 4),
+ sizeof(_sctpch), &_sctpch);
+ if (sch && sch->type == SCTP_CID_ABORT)
chunk_type = sch->type;
}
}
diff --git a/net/netfilter/nf_conntrack_proto_dccp.c b/net/netfilter/nf_conntrack_proto_dccp.c
index 432f957..ba65b20 100644
--- a/net/netfilter/nf_conntrack_proto_dccp.c
+++ b/net/netfilter/nf_conntrack_proto_dccp.c
@@ -969,6 +969,10 @@
{
int ret;
+ ret = register_pernet_subsys(&dccp_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&dccp_proto4);
if (ret < 0)
goto out_dccp4;
@@ -977,16 +981,12 @@
if (ret < 0)
goto out_dccp6;
- ret = register_pernet_subsys(&dccp_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&dccp_proto6);
out_dccp6:
nf_ct_l4proto_unregister(&dccp_proto4);
out_dccp4:
+ unregister_pernet_subsys(&dccp_net_ops);
+out_pernet:
return ret;
}
diff --git a/net/netfilter/nf_conntrack_proto_gre.c b/net/netfilter/nf_conntrack_proto_gre.c
index bd7d01d..155ce9f 100644
--- a/net/netfilter/nf_conntrack_proto_gre.c
+++ b/net/netfilter/nf_conntrack_proto_gre.c
@@ -420,18 +420,18 @@
{
int ret;
- ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
- if (ret < 0)
- goto out_gre4;
-
ret = register_pernet_subsys(&proto_gre_net_ops);
if (ret < 0)
goto out_pernet;
+ ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_gre4);
+ if (ret < 0)
+ goto out_gre4;
+
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_gre4);
out_gre4:
+ unregister_pernet_subsys(&proto_gre_net_ops);
+out_pernet:
return ret;
}
diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
index 480f616..ec83536 100644
--- a/net/netfilter/nf_conntrack_proto_sctp.c
+++ b/net/netfilter/nf_conntrack_proto_sctp.c
@@ -888,6 +888,10 @@
{
int ret;
+ ret = register_pernet_subsys(&sctp_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_sctp4);
if (ret < 0)
goto out_sctp4;
@@ -896,16 +900,12 @@
if (ret < 0)
goto out_sctp6;
- ret = register_pernet_subsys(&sctp_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp6);
out_sctp6:
nf_ct_l4proto_unregister(&nf_conntrack_l4proto_sctp4);
out_sctp4:
+ unregister_pernet_subsys(&sctp_net_ops);
+out_pernet:
return ret;
}
diff --git a/net/netfilter/nf_conntrack_proto_udplite.c b/net/netfilter/nf_conntrack_proto_udplite.c
index 1574895..ca969f6 100644
--- a/net/netfilter/nf_conntrack_proto_udplite.c
+++ b/net/netfilter/nf_conntrack_proto_udplite.c
@@ -371,6 +371,10 @@
{
int ret;
+ ret = register_pernet_subsys(&udplite_net_ops);
+ if (ret < 0)
+ goto out_pernet;
+
ret = nf_ct_l4proto_register(&nf_conntrack_l4proto_udplite4);
if (ret < 0)
goto out_udplite4;
@@ -379,16 +383,12 @@
if (ret < 0)
goto out_udplite6;
- ret = register_pernet_subsys(&udplite_net_ops);
- if (ret < 0)
- goto out_pernet;
-
return 0;
-out_pernet:
- nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite6);
out_udplite6:
nf_ct_l4proto_unregister(&nf_conntrack_l4proto_udplite4);
out_udplite4:
+ unregister_pernet_subsys(&udplite_net_ops);
+out_pernet:
return ret;
}
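The four conntrack protocol trackers above (DCCP, GRE, SCTP, UDP-lite) all get the same ordering fix: the per-netns state has to exist before the l4proto handlers are registered and can start firing, and the error path has to unwind in the reverse order of setup. A generic sketch of that init shape, with demo_* placeholders and the real ops fields elided:

#include <linux/init.h>
#include <linux/module.h>
#include <net/net_namespace.h>
#include <net/netfilter/nf_conntrack_l4proto.h>

static struct pernet_operations demo_net_ops;		/* .init/.size elided */
static struct nf_conntrack_l4proto demo_proto4, demo_proto6; /* fields elided */

static int __init demo_proto_init(void)
{
	int ret;

	ret = register_pernet_subsys(&demo_net_ops);	/* per-net state first */
	if (ret < 0)
		goto out_pernet;

	ret = nf_ct_l4proto_register(&demo_proto4);	/* then the handlers */
	if (ret < 0)
		goto out_proto4;

	ret = nf_ct_l4proto_register(&demo_proto6);
	if (ret < 0)
		goto out_proto6;

	return 0;

out_proto6:
	nf_ct_l4proto_unregister(&demo_proto4);
out_proto4:
	unregister_pernet_subsys(&demo_net_ops);	/* reverse order on error */
out_pernet:
	return ret;
}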
diff --git a/net/netfilter/nfnetlink_queue_core.c b/net/netfilter/nfnetlink_queue_core.c
index 858fd52..1cb4854 100644
--- a/net/netfilter/nfnetlink_queue_core.c
+++ b/net/netfilter/nfnetlink_queue_core.c
@@ -112,7 +112,7 @@
inst->queue_num = queue_num;
inst->peer_portid = portid;
inst->queue_maxlen = NFQNL_QMAX_DEFAULT;
- inst->copy_range = 0xfffff;
+ inst->copy_range = 0xffff;
inst->copy_mode = NFQNL_COPY_NONE;
spin_lock_init(&inst->lock);
INIT_LIST_HEAD(&inst->queue_list);
diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
index f2aabb6..5a55be3 100644
--- a/net/netlink/genetlink.c
+++ b/net/netlink/genetlink.c
@@ -142,6 +142,7 @@
int err = 0;
BUG_ON(grp->name[0] == '\0');
+ BUG_ON(memchr(grp->name, '\0', GENL_NAMSIZ) == NULL);
genl_lock();
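The added BUG_ON() above catches a genetlink multicast group whose name is not NUL-terminated within its GENL_NAMSIZ buffer, which would otherwise let later string handling read past the array. The check in isolation, with a made-up name and size:

#include <linux/string.h>
#include <linux/types.h>

#define DEMO_NAMSIZ 16	/* stand-in for GENL_NAMSIZ */

/* true iff buf holds a non-empty string terminated inside the buffer */
static bool demo_name_ok(const char *buf)
{
	return buf[0] != '\0' && memchr(buf, '\0', DEMO_NAMSIZ) != NULL;
}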
diff --git a/net/nfc/llcp/llcp.c b/net/nfc/llcp/llcp.c
index 7f8266d..b530afa 100644
--- a/net/nfc/llcp/llcp.c
+++ b/net/nfc/llcp/llcp.c
@@ -68,7 +68,8 @@
}
}
-static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool listen)
+static void nfc_llcp_socket_release(struct nfc_llcp_local *local, bool listen,
+ int err)
{
struct sock *sk;
struct hlist_node *tmp;
@@ -100,7 +101,10 @@
nfc_llcp_accept_unlink(accept_sk);
+ if (err)
+ accept_sk->sk_err = err;
accept_sk->sk_state = LLCP_CLOSED;
+ accept_sk->sk_state_change(sk);
bh_unlock_sock(accept_sk);
@@ -123,7 +127,10 @@
continue;
}
+ if (err)
+ sk->sk_err = err;
sk->sk_state = LLCP_CLOSED;
+ sk->sk_state_change(sk);
bh_unlock_sock(sk);
@@ -133,6 +140,36 @@
}
write_unlock(&local->sockets.lock);
+
+ /*
+ * If we want to keep the listening sockets alive,
+ * we don't touch the RAW ones.
+ */
+ if (listen == true)
+ return;
+
+ write_lock(&local->raw_sockets.lock);
+
+ sk_for_each_safe(sk, tmp, &local->raw_sockets.head) {
+ llcp_sock = nfc_llcp_sock(sk);
+
+ bh_lock_sock(sk);
+
+ nfc_llcp_socket_purge(llcp_sock);
+
+ if (err)
+ sk->sk_err = err;
+ sk->sk_state = LLCP_CLOSED;
+ sk->sk_state_change(sk);
+
+ bh_unlock_sock(sk);
+
+ sock_orphan(sk);
+
+ sk_del_node_init(sk);
+ }
+
+ write_unlock(&local->raw_sockets.lock);
}
struct nfc_llcp_local *nfc_llcp_local_get(struct nfc_llcp_local *local)
@@ -142,6 +179,17 @@
return local;
}
+static void local_cleanup(struct nfc_llcp_local *local, bool listen)
+{
+ nfc_llcp_socket_release(local, listen, ENXIO);
+ del_timer_sync(&local->link_timer);
+ skb_queue_purge(&local->tx_queue);
+ cancel_work_sync(&local->tx_work);
+ cancel_work_sync(&local->rx_work);
+ cancel_work_sync(&local->timeout_work);
+ kfree_skb(local->rx_pending);
+}
+
static void local_release(struct kref *ref)
{
struct nfc_llcp_local *local;
@@ -149,13 +197,7 @@
local = container_of(ref, struct nfc_llcp_local, ref);
list_del(&local->list);
- nfc_llcp_socket_release(local, false);
- del_timer_sync(&local->link_timer);
- skb_queue_purge(&local->tx_queue);
- cancel_work_sync(&local->tx_work);
- cancel_work_sync(&local->rx_work);
- cancel_work_sync(&local->timeout_work);
- kfree_skb(local->rx_pending);
+ local_cleanup(local, false);
kfree(local);
}
@@ -1348,7 +1390,7 @@
return;
/* Close and purge all existing sockets */
- nfc_llcp_socket_release(local, true);
+ nfc_llcp_socket_release(local, true, 0);
}
void nfc_llcp_mac_is_up(struct nfc_dev *dev, u32 target_idx,
@@ -1427,6 +1469,8 @@
return;
}
+ local_cleanup(local, false);
+
nfc_llcp_local_put(local);
}
diff --git a/net/nfc/llcp/sock.c b/net/nfc/llcp/sock.c
index 5332751..5c7cdf3f 100644
--- a/net/nfc/llcp/sock.c
+++ b/net/nfc/llcp/sock.c
@@ -278,6 +278,8 @@
pr_debug("Returning sk state %d\n", sk->sk_state);
+ sk_acceptq_removed(parent);
+
return sk;
}
diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
index ac2defe..d4d5363 100644
--- a/net/openvswitch/actions.c
+++ b/net/openvswitch/actions.c
@@ -58,7 +58,7 @@
if (skb->ip_summed == CHECKSUM_COMPLETE)
skb->csum = csum_sub(skb->csum, csum_partial(skb->data
- + ETH_HLEN, VLAN_HLEN, 0));
+ + (2 * ETH_ALEN), VLAN_HLEN, 0));
vhdr = (struct vlan_hdr *)(skb->data + ETH_HLEN);
*current_tci = vhdr->h_vlan_TCI;
@@ -115,7 +115,7 @@
if (skb->ip_summed == CHECKSUM_COMPLETE)
skb->csum = csum_add(skb->csum, csum_partial(skb->data
- + ETH_HLEN, VLAN_HLEN, 0));
+ + (2 * ETH_ALEN), VLAN_HLEN, 0));
}
__vlan_hwaccel_put_tag(skb, ntohs(vlan->vlan_tci) & ~VLAN_TAG_PRESENT);
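Both hunks above correct the offset used when fixing up skb->csum for CHECKSUM_COMPLETE: the 802.1Q tag sits right after the two MAC addresses, 2 * ETH_ALEN = 12 bytes in, not after the full 14-byte Ethernet header. A reduced sketch of the pop-tag adjustment, assuming skb->data points at the Ethernet header:

#include <linux/if_ether.h>
#include <linux/if_vlan.h>
#include <linux/skbuff.h>
#include <net/checksum.h>

static void demo_csum_drop_vlan(struct sk_buff *skb)
{
	/* subtract the 4 tag bytes that are about to be removed */
	if (skb->ip_summed == CHECKSUM_COMPLETE)
		skb->csum = csum_sub(skb->csum,
				     csum_partial(skb->data + (2 * ETH_ALEN),
						  VLAN_HLEN, 0));
}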
diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index e87a265..a4b7247 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -394,6 +394,7 @@
skb_copy_and_csum_dev(skb, nla_data(nla));
+ genlmsg_end(user_skb, upcall);
err = genlmsg_unicast(net, user_skb, upcall_info->portid);
out:
@@ -1690,6 +1691,7 @@
if (IS_ERR(vport))
goto exit_unlock;
+ err = 0;
reply = ovs_vport_cmd_build_info(vport, info->snd_portid, info->snd_seq,
OVS_VPORT_CMD_NEW);
if (IS_ERR(reply)) {
@@ -1771,6 +1773,7 @@
if (IS_ERR(reply))
goto exit_unlock;
+ err = 0;
ovs_dp_detach_port(vport);
genl_notify(reply, genl_info_net(info), info->snd_portid,
diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
index 20605ec..fe0e421 100644
--- a/net/openvswitch/flow.c
+++ b/net/openvswitch/flow.c
@@ -482,7 +482,11 @@
return htons(ETH_P_802_2);
__skb_pull(skb, sizeof(struct llc_snap_hdr));
- return llc->ethertype;
+
+ if (ntohs(llc->ethertype) >= 1536)
+ return llc->ethertype;
+
+ return htons(ETH_P_802_2);
}
static int parse_icmpv6(struct sk_buff *skb, struct sw_flow_key *key,
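The flow-key fix above stops trusting a SNAP ethertype below 1536 (0x600): per 802.3 such values are frame lengths, not protocol numbers, so the packet is classified as plain 802.2 instead. The threshold check on its own, with an illustrative helper name:

#include <asm/byteorder.h>
#include <linux/if_ether.h>
#include <linux/types.h>

/* 1536 (0x600) is the smallest value that denotes a real EtherType */
static __be16 demo_sanitize_ethertype(__be16 proto)
{
	if (ntohs(proto) >= 1536)
		return proto;

	return htons(ETH_P_802_2);
}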
diff --git a/net/openvswitch/vport-netdev.c b/net/openvswitch/vport-netdev.c
index 670cbc3..2130d61 100644
--- a/net/openvswitch/vport-netdev.c
+++ b/net/openvswitch/vport-netdev.c
@@ -43,8 +43,7 @@
/* Make our own copy of the packet. Otherwise we will mangle the
* packet for anyone who came before us (e.g. tcpdump via AF_PACKET).
- * (No one comes after us, since we tell handle_bridge() that we took
- * the packet.) */
+ */
skb = skb_share_check(skb, GFP_ATOMIC);
if (unlikely(!skb))
return;
diff --git a/net/openvswitch/vport.c b/net/openvswitch/vport.c
index ba717cc..f6b8132 100644
--- a/net/openvswitch/vport.c
+++ b/net/openvswitch/vport.c
@@ -325,8 +325,7 @@
* @skb: skb that was received
*
* Must be called with rcu_read_lock. The packet cannot be shared and
- * skb->data should point to the Ethernet header. The caller must have already
- * called compute_ip_summed() to initialize the checksumming fields.
+ * skb->data should point to the Ethernet header.
*/
void ovs_vport_receive(struct vport *vport, struct sk_buff *skb)
{
diff --git a/net/sctp/associola.c b/net/sctp/associola.c
index 43cd0dd..d2709e2 100644
--- a/net/sctp/associola.c
+++ b/net/sctp/associola.c
@@ -1079,7 +1079,7 @@
transports) {
if (transport == active)
- break;
+ continue;
list_for_each_entry(chunk, &transport->transmitted,
transmitted_list) {
if (key == chunk->subh.data_hdr->tsn) {
diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
index 5131fcf..de1a013 100644
--- a/net/sctp/sm_statefuns.c
+++ b/net/sctp/sm_statefuns.c
@@ -2082,7 +2082,7 @@
}
/* Delete the tempory new association. */
- sctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc));
+ sctp_add_cmd_sf(commands, SCTP_CMD_SET_ASOC, SCTP_ASOC(new_asoc));
sctp_add_cmd_sf(commands, SCTP_CMD_DELETE_TCB, SCTP_NULL());
/* Restore association pointer to provide SCTP command interpeter
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index fb20f25..f8529fc 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -180,6 +180,8 @@
list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]);
task->tk_waitqueue = queue;
queue->qlen++;
+ /* barrier matches the read in rpc_wake_up_task_queue_locked() */
+ smp_wmb();
rpc_set_queued(task);
dprintk("RPC: %5u added to queue %p \"%s\"\n",
@@ -430,8 +432,11 @@
*/
static void rpc_wake_up_task_queue_locked(struct rpc_wait_queue *queue, struct rpc_task *task)
{
- if (RPC_IS_QUEUED(task) && task->tk_waitqueue == queue)
- __rpc_do_wake_up_task(queue, task);
+ if (RPC_IS_QUEUED(task)) {
+ smp_rmb();
+ if (task->tk_waitqueue == queue)
+ __rpc_do_wake_up_task(queue, task);
+ }
}
/*
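The RPC scheduler hunks above pair an smp_wmb() on the enqueue side (tk_waitqueue written before the QUEUED bit) with an smp_rmb() on the wakeup side (QUEUED bit observed before the pointer is trusted). The same publish/consume ordering in isolation, using throwaway names:

#include <asm/barrier.h>

struct demo_task {
	void *queue;	/* payload published by the enqueue path */
	int queued;	/* flag the waker keys off */
};

static void demo_publish(struct demo_task *t, void *q)
{
	t->queue = q;	/* 1: write the payload */
	smp_wmb();	/* 2: order it before the flag */
	t->queued = 1;	/* 3: make the task visible */
}

static void *demo_consume(struct demo_task *t)
{
	if (!t->queued)
		return NULL;
	smp_rmb();		/* pairs with the smp_wmb() above */
	return t->queue;	/* payload is now safe to read */
}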
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 51be64f..971282b 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -382,7 +382,7 @@
#endif
}
-static int unix_release_sock(struct sock *sk, int embrion)
+static void unix_release_sock(struct sock *sk, int embrion)
{
struct unix_sock *u = unix_sk(sk);
struct path path;
@@ -451,8 +451,6 @@
if (unix_tot_inflight)
unix_gc(); /* Garbage collect fds */
-
- return 0;
}
static void init_peercred(struct sock *sk)
@@ -699,9 +697,10 @@
if (!sk)
return 0;
+ unix_release_sock(sk, 0);
sock->sk = NULL;
- return unix_release_sock(sk, 0);
+ return 0;
}
static int unix_autobind(struct socket *sock)
@@ -1413,8 +1412,8 @@
if (UNIXCB(skb).cred)
return;
if (test_bit(SOCK_PASSCRED, &sock->flags) ||
- !other->sk_socket ||
- test_bit(SOCK_PASSCRED, &other->sk_socket->flags)) {
+ (other->sk_socket &&
+ test_bit(SOCK_PASSCRED, &other->sk_socket->flags))) {
UNIXCB(skb).pid = get_pid(task_tgid(current));
UNIXCB(skb).cred = get_current_cred();
}
diff --git a/security/selinux/xfrm.c b/security/selinux/xfrm.c
index 48665ec..8ab2951 100644
--- a/security/selinux/xfrm.c
+++ b/security/selinux/xfrm.c
@@ -310,7 +310,7 @@
if (old_ctx) {
new_ctx = kmalloc(sizeof(*old_ctx) + old_ctx->ctx_len,
- GFP_KERNEL);
+ GFP_ATOMIC);
if (!new_ctx)
return -ENOMEM;
diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
index a9ebcf9..ecdf30e 100644
--- a/sound/pci/hda/hda_codec.c
+++ b/sound/pci/hda/hda_codec.c
@@ -3144,7 +3144,7 @@
if (val & AC_DIG1_PROFESSIONAL)
sbits |= IEC958_AES0_PROFESSIONAL;
if (sbits & IEC958_AES0_PROFESSIONAL) {
- if (sbits & AC_DIG1_EMPHASIS)
+ if (val & AC_DIG1_EMPHASIS)
sbits |= IEC958_AES0_PRO_EMPHASIS_5015;
} else {
if (val & AC_DIG1_EMPHASIS)
diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c
index 78897d0..43c2ea5 100644
--- a/sound/pci/hda/hda_generic.c
+++ b/sound/pci/hda/hda_generic.c
@@ -995,6 +995,8 @@
BAD_NO_EXTRA_SURR_DAC = 0x101,
/* Primary DAC shared with main surrounds */
BAD_SHARED_SURROUND = 0x100,
+ /* No independent HP possible */
+ BAD_NO_INDEP_HP = 0x40,
/* Primary DAC shared with main CLFE */
BAD_SHARED_CLFE = 0x10,
/* Primary DAC shared with extra surrounds */
@@ -1392,6 +1394,43 @@
return snd_hda_get_path_idx(codec, path);
}
+/* check whether the independent HP is available with the current config */
+static bool indep_hp_possible(struct hda_codec *codec)
+{
+ struct hda_gen_spec *spec = codec->spec;
+ struct auto_pin_cfg *cfg = &spec->autocfg;
+ struct nid_path *path;
+ int i, idx;
+
+ if (cfg->line_out_type == AUTO_PIN_HP_OUT)
+ idx = spec->out_paths[0];
+ else
+ idx = spec->hp_paths[0];
+ path = snd_hda_get_path_from_idx(codec, idx);
+ if (!path)
+ return false;
+
+ /* assume no path conflicts unless aamix is involved */
+ if (!spec->mixer_nid || !is_nid_contained(path, spec->mixer_nid))
+ return true;
+
+ /* check whether output paths contain aamix */
+ for (i = 0; i < cfg->line_outs; i++) {
+ if (spec->out_paths[i] == idx)
+ break;
+ path = snd_hda_get_path_from_idx(codec, spec->out_paths[i]);
+ if (path && is_nid_contained(path, spec->mixer_nid))
+ return false;
+ }
+ for (i = 0; i < cfg->speaker_outs; i++) {
+ path = snd_hda_get_path_from_idx(codec, spec->speaker_paths[i]);
+ if (path && is_nid_contained(path, spec->mixer_nid))
+ return false;
+ }
+
+ return true;
+}
+
/* fill the empty entries in the dac array for speaker/hp with the
* shared dac pointed by the paths
*/
@@ -1545,6 +1584,9 @@
badness += BAD_MULTI_IO;
}
+ if (spec->indep_hp && !indep_hp_possible(codec))
+ badness += BAD_NO_INDEP_HP;
+
/* re-fill the shared DAC for speaker / headphone */
if (cfg->line_out_type != AUTO_PIN_HP_OUT)
refill_shared_dacs(codec, cfg->hp_outs,
@@ -1758,6 +1800,10 @@
cfg->speaker_pins, val);
}
+ /* clear indep_hp flag if not available */
+ if (spec->indep_hp && !indep_hp_possible(codec))
+ spec->indep_hp = 0;
+
kfree(best_cfg);
return 0;
}
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index 4cea6bb6..418bfc0 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -415,6 +415,8 @@
unsigned int opened :1;
unsigned int running :1;
unsigned int irq_pending :1;
+ unsigned int prepared:1;
+ unsigned int locked:1;
/*
* For VIA:
* A flag to ensure DMA position is 0
@@ -426,8 +428,25 @@
struct timecounter azx_tc;
struct cyclecounter azx_cc;
+
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+ struct mutex dsp_mutex;
+#endif
};
+/* DSP lock helpers */
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+#define dsp_lock_init(dev) mutex_init(&(dev)->dsp_mutex)
+#define dsp_lock(dev) mutex_lock(&(dev)->dsp_mutex)
+#define dsp_unlock(dev) mutex_unlock(&(dev)->dsp_mutex)
+#define dsp_is_locked(dev) ((dev)->locked)
+#else
+#define dsp_lock_init(dev) do {} while (0)
+#define dsp_lock(dev) do {} while (0)
+#define dsp_unlock(dev) do {} while (0)
+#define dsp_is_locked(dev) 0
+#endif
+
/* CORB/RIRB */
struct azx_rb {
u32 *buf; /* CORB/RIRB buffer
@@ -527,6 +546,10 @@
/* card list (for power_save trigger) */
struct list_head list;
+
+#ifdef CONFIG_SND_HDA_DSP_LOADER
+ struct azx_dev saved_azx_dev;
+#endif
};
#define CREATE_TRACE_POINTS
@@ -1793,15 +1816,25 @@
dev = chip->capture_index_offset;
nums = chip->capture_streams;
}
- for (i = 0; i < nums; i++, dev++)
- if (!chip->azx_dev[dev].opened) {
- res = &chip->azx_dev[dev];
- if (res->assigned_key == key)
- break;
+ for (i = 0; i < nums; i++, dev++) {
+ struct azx_dev *azx_dev = &chip->azx_dev[dev];
+ dsp_lock(azx_dev);
+ if (!azx_dev->opened && !dsp_is_locked(azx_dev)) {
+ res = azx_dev;
+ if (res->assigned_key == key) {
+ res->opened = 1;
+ res->assigned_key = key;
+ dsp_unlock(azx_dev);
+ return azx_dev;
+ }
}
+ dsp_unlock(azx_dev);
+ }
if (res) {
+ dsp_lock(res);
res->opened = 1;
res->assigned_key = key;
+ dsp_unlock(res);
}
return res;
}
@@ -2009,6 +2042,12 @@
struct azx_dev *azx_dev = get_azx_dev(substream);
int ret;
+ dsp_lock(azx_dev);
+ if (dsp_is_locked(azx_dev)) {
+ ret = -EBUSY;
+ goto unlock;
+ }
+
mark_runtime_wc(chip, azx_dev, substream, false);
azx_dev->bufsize = 0;
azx_dev->period_bytes = 0;
@@ -2016,8 +2055,10 @@
ret = snd_pcm_lib_malloc_pages(substream,
params_buffer_bytes(hw_params));
if (ret < 0)
- return ret;
+ goto unlock;
mark_runtime_wc(chip, azx_dev, substream, true);
+ unlock:
+ dsp_unlock(azx_dev);
return ret;
}
@@ -2029,16 +2070,21 @@
struct hda_pcm_stream *hinfo = apcm->hinfo[substream->stream];
/* reset BDL address */
- azx_sd_writel(azx_dev, SD_BDLPL, 0);
- azx_sd_writel(azx_dev, SD_BDLPU, 0);
- azx_sd_writel(azx_dev, SD_CTL, 0);
- azx_dev->bufsize = 0;
- azx_dev->period_bytes = 0;
- azx_dev->format_val = 0;
+ dsp_lock(azx_dev);
+ if (!dsp_is_locked(azx_dev)) {
+ azx_sd_writel(azx_dev, SD_BDLPL, 0);
+ azx_sd_writel(azx_dev, SD_BDLPU, 0);
+ azx_sd_writel(azx_dev, SD_CTL, 0);
+ azx_dev->bufsize = 0;
+ azx_dev->period_bytes = 0;
+ azx_dev->format_val = 0;
+ }
snd_hda_codec_cleanup(apcm->codec, hinfo, substream);
mark_runtime_wc(chip, azx_dev, substream, false);
+ azx_dev->prepared = 0;
+ dsp_unlock(azx_dev);
return snd_pcm_lib_free_pages(substream);
}
@@ -2055,6 +2101,12 @@
snd_hda_spdif_out_of_nid(apcm->codec, hinfo->nid);
unsigned short ctls = spdif ? spdif->ctls : 0;
+ dsp_lock(azx_dev);
+ if (dsp_is_locked(azx_dev)) {
+ err = -EBUSY;
+ goto unlock;
+ }
+
azx_stream_reset(chip, azx_dev);
format_val = snd_hda_calc_stream_format(runtime->rate,
runtime->channels,
@@ -2065,7 +2117,8 @@
snd_printk(KERN_ERR SFX
"%s: invalid format_val, rate=%d, ch=%d, format=%d\n",
pci_name(chip->pci), runtime->rate, runtime->channels, runtime->format);
- return -EINVAL;
+ err = -EINVAL;
+ goto unlock;
}
bufsize = snd_pcm_lib_buffer_bytes(substream);
@@ -2084,7 +2137,7 @@
azx_dev->no_period_wakeup = runtime->no_period_wakeup;
err = azx_setup_periods(chip, substream, azx_dev);
if (err < 0)
- return err;
+ goto unlock;
}
/* wallclk has 24Mhz clock source */
@@ -2101,8 +2154,14 @@
if ((chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND) &&
stream_tag > chip->capture_streams)
stream_tag -= chip->capture_streams;
- return snd_hda_codec_prepare(apcm->codec, hinfo, stream_tag,
+ err = snd_hda_codec_prepare(apcm->codec, hinfo, stream_tag,
azx_dev->format_val, substream);
+
+ unlock:
+ if (!err)
+ azx_dev->prepared = 1;
+ dsp_unlock(azx_dev);
+ return err;
}
static int azx_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
@@ -2117,6 +2176,9 @@
azx_dev = get_azx_dev(substream);
trace_azx_pcm_trigger(chip, azx_dev, cmd);
+ if (dsp_is_locked(azx_dev) || !azx_dev->prepared)
+ return -EPIPE;
+
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
rstart = 1;
@@ -2621,17 +2683,27 @@
struct azx_dev *azx_dev;
int err;
- if (snd_hda_lock_devices(bus))
- return -EBUSY;
+ azx_dev = azx_get_dsp_loader_dev(chip);
+
+ dsp_lock(azx_dev);
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->running || azx_dev->locked) {
+ spin_unlock_irq(&chip->reg_lock);
+ err = -EBUSY;
+ goto unlock;
+ }
+ azx_dev->prepared = 0;
+ chip->saved_azx_dev = *azx_dev;
+ azx_dev->locked = 1;
+ spin_unlock_irq(&chip->reg_lock);
err = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV_SG,
snd_dma_pci_data(chip->pci),
byte_size, bufp);
if (err < 0)
- goto unlock;
+ goto err_alloc;
mark_pages_wc(chip, bufp, true);
- azx_dev = azx_get_dsp_loader_dev(chip);
azx_dev->bufsize = byte_size;
azx_dev->period_bytes = byte_size;
azx_dev->format_val = format;
@@ -2649,13 +2721,20 @@
goto error;
azx_setup_controller(chip, azx_dev);
+ dsp_unlock(azx_dev);
return azx_dev->stream_tag;
error:
mark_pages_wc(chip, bufp, false);
snd_dma_free_pages(bufp);
-unlock:
- snd_hda_unlock_devices(bus);
+ err_alloc:
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->opened)
+ *azx_dev = chip->saved_azx_dev;
+ azx_dev->locked = 0;
+ spin_unlock_irq(&chip->reg_lock);
+ unlock:
+ dsp_unlock(azx_dev);
return err;
}
@@ -2677,9 +2756,10 @@
struct azx *chip = bus->private_data;
struct azx_dev *azx_dev = azx_get_dsp_loader_dev(chip);
- if (!dmab->area)
+ if (!dmab->area || !azx_dev->locked)
return;
+ dsp_lock(azx_dev);
/* reset BDL address */
azx_sd_writel(azx_dev, SD_BDLPL, 0);
azx_sd_writel(azx_dev, SD_BDLPU, 0);
@@ -2692,7 +2772,12 @@
snd_dma_free_pages(dmab);
dmab->area = NULL;
- snd_hda_unlock_devices(bus);
+ spin_lock_irq(&chip->reg_lock);
+ if (azx_dev->opened)
+ *azx_dev = chip->saved_azx_dev;
+ azx_dev->locked = 0;
+ spin_unlock_irq(&chip->reg_lock);
+ dsp_unlock(azx_dev);
}
#endif /* CONFIG_SND_HDA_DSP_LOADER */
@@ -3481,6 +3566,7 @@
}
for (i = 0; i < chip->num_streams; i++) {
+ dsp_lock_init(&chip->azx_dev[i]);
/* allocate memory for the BDL for each stream */
err = snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV,
snd_dma_pci_data(chip->pci),
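The hda_intel changes above hang a dsp_mutex plus locked/prepared flags off each stream and wrap them in dsp_lock()/dsp_unlock()/dsp_is_locked() helpers that compile away when the DSP loader is not configured, so the PCM code can call them unconditionally. The helper shape in isolation, with a made-up struct and config symbol:

#include <linux/mutex.h>

struct demo_stream {
	unsigned int locked:1;
#ifdef CONFIG_DEMO_DSP_LOADER
	struct mutex dsp_mutex;
#endif
};

#ifdef CONFIG_DEMO_DSP_LOADER
#define demo_dsp_lock_init(dev)	mutex_init(&(dev)->dsp_mutex)
#define demo_dsp_lock(dev)	mutex_lock(&(dev)->dsp_mutex)
#define demo_dsp_unlock(dev)	mutex_unlock(&(dev)->dsp_mutex)
#define demo_dsp_is_locked(dev)	((dev)->locked)
#else
#define demo_dsp_lock_init(dev)	do {} while (0)
#define demo_dsp_lock(dev)	do {} while (0)
#define demo_dsp_unlock(dev)	do {} while (0)
#define demo_dsp_is_locked(dev)	0
#endif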
diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
index 60d08f6..0d9c58f 100644
--- a/sound/pci/hda/patch_cirrus.c
+++ b/sound/pci/hda/patch_cirrus.c
@@ -168,10 +168,10 @@
snd_hda_gen_update_outputs(codec);
if (spec->gpio_eapd_hp) {
- unsigned int gpio = spec->gen.hp_jack_present ?
+ spec->gpio_data = spec->gen.hp_jack_present ?
spec->gpio_eapd_hp : spec->gpio_eapd_speaker;
snd_hda_codec_write(codec, 0x01, 0,
- AC_VERB_SET_GPIO_DATA, gpio);
+ AC_VERB_SET_GPIO_DATA, spec->gpio_data);
}
}
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index 941bf6c..2a89d1ee 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -1142,7 +1142,7 @@
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
@@ -1921,7 +1921,7 @@
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
@@ -3099,7 +3099,7 @@
}
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
return 0;
}
@@ -3191,11 +3191,17 @@
return 0;
}
+static void cx_auto_free(struct hda_codec *codec)
+{
+ snd_hda_detach_beep_device(codec);
+ snd_hda_gen_free(codec);
+}
+
static const struct hda_codec_ops cx_auto_patch_ops = {
.build_controls = cx_auto_build_controls,
.build_pcms = snd_hda_gen_build_pcms,
.init = snd_hda_gen_init,
- .free = snd_hda_gen_free,
+ .free = cx_auto_free,
.unsol_event = snd_hda_jack_unsol_event,
#ifdef CONFIG_PM
.check_power_status = snd_hda_gen_check_power_status,
@@ -3391,7 +3397,7 @@
codec->patch_ops = cx_auto_patch_ops;
if (spec->beep_amp)
- snd_hda_attach_beep_device(codec, spec->beep_amp);
+ snd_hda_attach_beep_device(codec, get_amp_nid_(spec->beep_amp));
/* Some laptops with Conexant chips show stalls in S3 resume,
* which falls into the single-cmd mode.
diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
index 638e7f73..ca4739c 100644
--- a/sound/usb/mixer.c
+++ b/sound/usb/mixer.c
@@ -715,8 +715,9 @@
case UAC2_CLOCK_SELECTOR: {
struct uac_selector_unit_descriptor *d = p1;
/* call recursively to retrieve the channel info */
- if (check_input_term(state, d->baSourceID[0], term) < 0)
- return -ENODEV;
+ err = check_input_term(state, d->baSourceID[0], term);
+ if (err < 0)
+ return err;
term->type = d->bDescriptorSubtype << 16; /* virtual type */
term->id = id;
term->name = uac_selector_unit_iSelector(d);
@@ -725,7 +726,8 @@
case UAC1_PROCESSING_UNIT:
case UAC1_EXTENSION_UNIT:
/* UAC2_PROCESSING_UNIT_V2 */
- /* UAC2_EFFECT_UNIT */ {
+ /* UAC2_EFFECT_UNIT */
+ case UAC2_EXTENSION_UNIT_V2: {
struct uac_processing_unit_descriptor *d = p1;
if (state->mixer->protocol == UAC_VERSION_2 &&
@@ -1356,8 +1358,9 @@
return err;
/* determine the input source type and name */
- if (check_input_term(state, hdr->bSourceID, &iterm) < 0)
- return -EINVAL;
+ err = check_input_term(state, hdr->bSourceID, &iterm);
+ if (err < 0)
+ return err;
master_bits = snd_usb_combine_bytes(bmaControls, csize);
/* master configuration quirks */
@@ -2052,6 +2055,8 @@
return parse_audio_extension_unit(state, unitid, p1);
else /* UAC_VERSION_2 */
return parse_audio_processing_unit(state, unitid, p1);
+ case UAC2_EXTENSION_UNIT_V2:
+ return parse_audio_extension_unit(state, unitid, p1);
default:
snd_printk(KERN_ERR "usbaudio: unit %u: unexpected type 0x%02x\n", unitid, p1[2]);
return -EINVAL;
@@ -2118,7 +2123,7 @@
state.oterm.type = le16_to_cpu(desc->wTerminalType);
state.oterm.name = desc->iTerminal;
err = parse_audio_unit(&state, desc->bSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
} else { /* UAC_VERSION_2 */
struct uac2_output_terminal_descriptor *desc = p;
@@ -2130,12 +2135,12 @@
state.oterm.type = le16_to_cpu(desc->wTerminalType);
state.oterm.name = desc->iTerminal;
err = parse_audio_unit(&state, desc->bSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
/* for UAC2, use the same approach to also add the clock selectors */
err = parse_audio_unit(&state, desc->bCSourceID);
- if (err < 0)
+ if (err < 0 && err != -EINVAL)
return err;
}
}
diff --git a/tools/lib/traceevent/Makefile b/tools/lib/traceevent/Makefile
index a20e320..0b0a907 100644
--- a/tools/lib/traceevent/Makefile
+++ b/tools/lib/traceevent/Makefile
@@ -122,7 +122,7 @@
EVENT_PARSE_VERSION = $(EP_VERSION).$(EP_PATCHLEVEL).$(EP_EXTRAVERSION)
-INCLUDES = -I. -I/usr/local/include $(CONFIG_INCLUDES)
+INCLUDES = -I. $(CONFIG_INCLUDES)
# Set compile option CFLAGS if not set elsewhere
CFLAGS ?= -g -Wall
diff --git a/tools/perf/Makefile b/tools/perf/Makefile
index a2108ca..bb74c79 100644
--- a/tools/perf/Makefile
+++ b/tools/perf/Makefile
@@ -95,7 +95,7 @@
PERF_DEBUG = $(DEBUG)
endif
ifndef PERF_DEBUG
- CFLAGS_OPTIMIZE = -O6 -D_FORTIFY_SOURCE=2
+ CFLAGS_OPTIMIZE = -O6
endif
ifdef PARSER_DEBUG
@@ -180,6 +180,12 @@
CFLAGS := $(CFLAGS) -Wvolatile-register-var
endif
+ifndef PERF_DEBUG
+ ifeq ($(call try-cc,$(SOURCE_HELLO),$(CFLAGS) -D_FORTIFY_SOURCE=2,-D_FORTIFY_SOURCE=2),y)
+ CFLAGS := $(CFLAGS) -D_FORTIFY_SOURCE=2
+ endif
+endif
+
### --- END CONFIGURATION SECTION ---
ifeq ($(srctree),)
diff --git a/tools/perf/bench/bench.h b/tools/perf/bench/bench.h
index a5223e6..0fdc852 100644
--- a/tools/perf/bench/bench.h
+++ b/tools/perf/bench/bench.h
@@ -1,6 +1,30 @@
#ifndef BENCH_H
#define BENCH_H
+/*
+ * The madvise transparent hugepage constants were added in glibc
+ * 2.13. For compatibility with older versions of glibc, define these
+ * tokens if they are not already defined.
+ *
+ * PA-RISC uses different madvise values from other architectures and
+ * needs to be special-cased.
+ */
+#ifdef __hppa__
+# ifndef MADV_HUGEPAGE
+# define MADV_HUGEPAGE 67
+# endif
+# ifndef MADV_NOHUGEPAGE
+# define MADV_NOHUGEPAGE 68
+# endif
+#else
+# ifndef MADV_HUGEPAGE
+# define MADV_HUGEPAGE 14
+# endif
+# ifndef MADV_NOHUGEPAGE
+# define MADV_NOHUGEPAGE 15
+# endif
+#endif
+
extern int bench_numa(int argc, const char **argv, const char *prefix);
extern int bench_sched_messaging(int argc, const char **argv, const char *prefix);
extern int bench_sched_pipe(int argc, const char **argv, const char *prefix);
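With those fallback definitions in place, a benchmark built against an older glibc can still ask for transparent huge pages on an anonymous mapping. A small userspace sketch of that usage (the generic MADV_HUGEPAGE value is shown; PA-RISC uses a different one, as noted above):

#include <stdlib.h>
#include <sys/mman.h>

#ifndef MADV_HUGEPAGE
#define MADV_HUGEPAGE 14	/* fallback for old glibc headers */
#endif

static void *demo_alloc_thp_hint(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p != MAP_FAILED)
		madvise(p, len, MADV_HUGEPAGE);	/* a hint; may be ignored */

	return p == MAP_FAILED ? NULL : p;
}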
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 774c907..f1a939e 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -573,13 +573,15 @@
perf_event__synthesize_guest_os, tool);
}
- if (!opts->target.system_wide)
+ if (perf_target__has_task(&opts->target))
err = perf_event__synthesize_thread_map(tool, evsel_list->threads,
process_synthesized_event,
machine);
- else
+ else if (perf_target__has_cpu(&opts->target))
err = perf_event__synthesize_threads(tool, process_synthesized_event,
machine);
+ else /* command specified */
+ err = 0;
if (err != 0)
goto out_delete_session;
diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h
index 3862468..226a4ae 100644
--- a/tools/perf/util/hist.h
+++ b/tools/perf/util/hist.h
@@ -208,8 +208,9 @@
return 0;
}
-#define K_LEFT -1
-#define K_RIGHT -2
+#define K_LEFT -1000
+#define K_RIGHT -2000
+#define K_SWITCH_INPUT_DATA -3000
#endif
#ifdef GTK2_SUPPORT
diff --git a/tools/perf/util/strlist.c b/tools/perf/util/strlist.c
index 55433aa4..eabdce0 100644
--- a/tools/perf/util/strlist.c
+++ b/tools/perf/util/strlist.c
@@ -143,7 +143,7 @@
slist->rblist.node_delete = strlist__node_delete;
slist->dupstr = dupstr;
- if (slist && strlist__parse_list(slist, list) != 0)
+ if (list && strlist__parse_list(slist, list) != 0)
goto out_error;
}
diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index ce82b94..5ba005c 100644
--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
@@ -74,9 +74,12 @@
u32 redir_index = (ioapic->ioregsel - 0x10) >> 1;
u64 redir_content;
- ASSERT(redir_index < IOAPIC_NUM_PINS);
+ if (redir_index < IOAPIC_NUM_PINS)
+ redir_content =
+ ioapic->redirtbl[redir_index].bits;
+ else
+ redir_content = ~0ULL;
- redir_content = ioapic->redirtbl[redir_index].bits;
result = (ioapic->ioregsel & 0x1) ?
(redir_content >> 32) & 0xffffffff :
redir_content & 0xffffffff;
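The final hunk replaces an assertion with a graceful fallback: the redirection-table index comes from a guest-written register, so an out-of-range value now reads back as all ones instead of indexing past the array. The guard in isolation, with demo names and a fixed pin count:

#include <linux/types.h>

#define DEMO_NUM_PINS 24

struct demo_ioapic {
	u64 redirtbl[DEMO_NUM_PINS];
};

static u64 demo_read_redir(const struct demo_ioapic *io, u32 index)
{
	if (index < DEMO_NUM_PINS)
		return io->redirtbl[index];

	return ~0ULL;	/* out of range: read back as all ones */
}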